Tag | Ind1 | Ind2 | Content
---|---|---|---
000 | | | 03417nam a22005055i 4500
001 | | | 978-3-319-01168-4
003 | | | DE-He213
005 | | | 20200420220227.0
007 | | | cr nn 008mamaa
008 | | | 130623s2013 gw \| s \|\|\|\| 0\|eng d
020 | | | _a9783319011684 _9978-3-319-01168-4
024 | 7 | | _a10.1007/978-3-319-01168-4 _2doi
050 | | 4 | _aQ342
072 | 7 | | _aUYQ _2bicssc
072 | 7 | | _aCOM004000 _2bisacsh
082 | 0 | 4 | _a006.3 _223
100 | 1 | | _aHester, Todd. _eauthor.
245 | 1 | 0 | _aTEXPLORE: Temporal Difference Reinforcement Learning for Robots and Time-Constrained Domains _h[electronic resource] / _cby Todd Hester.
264 | | 1 | _aHeidelberg : _bSpringer International Publishing : _bImprint: Springer, _c2013.
300 | | | _aXIV, 165 p. 55 illus. in color. _bonline resource.
336 | | | _atext _btxt _2rdacontent
337 | | | _acomputer _bc _2rdamedia
338 | | | _aonline resource _bcr _2rdacarrier
347 | | | _atext file _bPDF _2rda
490 | 1 | | _aStudies in Computational Intelligence, _x1860-949X ; _v503
505 | 0 | | _aIntroduction -- Background and Problem Specification -- Real Time Architecture -- The TEXPLORE Algorithm -- Empirical Evaluation -- Further Examination of Exploration -- Related Work -- Discussion and Conclusion -- TEXPLORE Pseudo-Code.
520 | | | _aThis book presents and develops new reinforcement learning methods that enable fast and robust learning on robots in real time. Robots have the potential to solve many problems in society because of their ability to work in dangerous places doing necessary jobs that no one wants or is able to do. One barrier to their widespread deployment is that they are mainly limited to tasks where it is possible to hand-program behaviors for every situation that may be encountered. For robots to meet their potential, they need methods that enable them to learn and adapt to novel situations that they were not programmed for. Reinforcement learning (RL) is a paradigm for learning sequential decision-making processes and could solve the problems of learning and adaptation on robots. This book identifies four key challenges that must be addressed for an RL algorithm to be practical for robotic control tasks. These RL for Robotics Challenges are: 1) it must learn in very few samples; 2) it must learn in domains with continuous state features; 3) it must handle sensor and/or actuator delays; and 4) it should continually select actions in real time. This book addresses all four of these challenges, focusing in particular on time-constrained domains where the first challenge is critically important. In these domains, the agent's lifetime is not long enough for it to explore the domains thoroughly, and it must learn in very few samples.
650 | | 0 | _aEngineering.
650 | | 0 | _aImage processing.
650 | | 0 | _aComputational intelligence.
650 | | 0 | _aRobotics.
650 | | 0 | _aAutomation.
650 | 1 | 4 | _aEngineering. |
650 | 2 | 4 | _aComputational Intelligence. |
650 | 2 | 4 | _aImage Processing and Computer Vision. |
650 | 2 | 4 | _aRobotics and Automation. |
710 | 2 | | _aSpringerLink (Online service)
773 | 0 | | _tSpringer eBooks
776 | 0 | 8 | _iPrinted edition: _z9783319011677
830 | | 0 | _aStudies in Computational Intelligence, _x1860-949X ; _v503
856 | 4 | 0 | _uhttp://dx.doi.org/10.1007/978-3-319-01168-4 |
912 | | | _aZDB-2-ENG
942 | | | _cEBK
999 | | | _c52280 _d52280
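
The table above is a standard MARC 21 bibliographic record as exposed by the catalogue (the 942 and 999 fields are Koha-specific). Below is a minimal sketch of pulling the main access points out of such a record with the pymarc library, assuming the record has first been exported from the catalogue as binary MARC; the filename `record.mrc` is illustrative, not part of the record.

```python
# Minimal sketch: read a binary MARC export of this record with pymarc.
# Assumes the catalogue export is saved as "record.mrc" (illustrative name).
from pymarc import MARCReader

with open("record.mrc", "rb") as fh:
    for record in MARCReader(fh):
        title = record["245"]["a"]                      # 245 $a: title proper
        access_url = record["856"]["u"]                 # 856 $u: DOI-based access URL
        subjects = [f["a"] for f in record.get_fields("650")]  # 650 $a: subject headings
        print(title)
        print(access_url)
        print(subjects)
```

The `_a`, `_b`, ... markers shown in the Content column are the MARC subfield codes ($a, $b, ...) that pymarc exposes through field indexing in the sketch above.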
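
For orientation on the subject matter, the 520 abstract refers to temporal difference (TD) reinforcement learning. The sketch below is the generic tabular Q-learning TD backup from the standard RL literature, not the TEXPLORE algorithm catalogued here (which targets the four challenges listed in the abstract: sample efficiency, continuous state features, sensor/actuator delays, and real-time action selection); the states, actions, and hyperparameters are illustrative.

```python
# Generic tabular Q-learning update -- a textbook temporal difference rule,
# shown only to illustrate the "temporal difference" in the book's title.
from collections import defaultdict

def td_update(Q, state, action, reward, next_state, actions,
              alpha=0.1, gamma=0.99):
    """One TD (Q-learning) backup on a Q-table keyed by (state, action)."""
    best_next = max(Q[(next_state, a)] for a in actions)
    target = reward + gamma * best_next              # bootstrapped TD target
    Q[(state, action)] += alpha * (target - Q[(state, action)])
    return Q

# Illustrative usage on a toy two-state, two-action problem.
Q = defaultdict(float)
Q = td_update(Q, state=0, action=1, reward=1.0, next_state=1, actions=[0, 1])
```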