

Guest editorial: Special issue on robot learning, Part B

Jan Peters; Andrew Y. Ng
In: Autonomous Robots, Vol. 27, No. 2, Pages 91-92, Springer, 2009.


Creating autonomous robots that can assist humans in the unpredictable situations of daily life has been a long-standing vision of robotics, artificial intelligence, and the cognitive sciences. With the current rise of physical humanoid and other highly mechanically capable robots in robotics research labs around the globe, we have come a step closer to this aim. Thus, it has become essential to create robot systems that learn to accomplish a multitude of different tasks, triggered by environmental context or higher-level instruction. Only if machine learning succeeds at making robots fully adaptive is it likely that we will be able to take real robots out of research labs into human-inhabited environments. To do so, future robots will need to make proper use of perceptual stimuli such as vision, proprioceptive and tactile feedback, and translate these into motor commands. In order to close this complex loop from perception to action, machine learning will be needed at various stages, such as sensor-based action determination, high-level plan generation, and torque-level motor control. Among the important problems hidden in these steps are perceptuo-action coupling, imitation learning, movement decomposition, probabilistic planning, motor primitive learning, reinforcement learning, model learning, and motor control.
