Learning motor primitives for robotics

@article{Kober2009LearningMP,
  title={Learning motor primitives for robotics},
  author={J. Kober and Jan Peters},
  journal={2009 IEEE International Conference on Robotics and Automation},
  year={2009},
  pages={2112-2118}
}
The acquisition and self-improvement of novel motor skills are among the most important problems in robotics. Motor primitives offer one of the most promising frameworks for the application of machine learning techniques in this context. Employing an improved form of the dynamic systems motor primitives originally introduced by Ijspeert et al. [2], we show how both discrete and rhythmic tasks can be learned using a concerted approach of both imitation and reinforcement learning. For doing so, we…
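The dynamic systems motor primitives referenced above pair a stable point attractor with a learned forcing term driven by a decaying phase variable, so that a demonstrated movement can first be reproduced by imitation and then refined by reinforcement learning. The following Python sketch covers only the discrete (point-to-point) case; the gain constants, basis placement, explicit Euler integration, and the class name DiscreteDMP are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class DiscreteDMP:
    """Minimal single-DoF discrete dynamic movement primitive (sketch).

    Transformation system: tau*z' = alpha_z*(beta_z*(g - y) - z) + f(x),  tau*y' = z
    Canonical system:      tau*x' = -alpha_x*x
    Forcing term:          f(x) = (sum_i psi_i(x)*w_i / sum_i psi_i(x)) * x * (g - y0)
    """

    def __init__(self, n_basis=20, alpha_z=25.0, beta_z=6.25, alpha_x=8.0, tau=1.0):
        self.n_basis, self.alpha_z, self.beta_z = n_basis, alpha_z, beta_z
        self.alpha_x, self.tau = alpha_x, tau
        # Basis centers spread along the exponentially decaying phase; widths from spacing.
        self.c = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))
        spacing = np.abs(np.diff(self.c))
        self.h = 1.0 / np.append(spacing, spacing[-1]) ** 2
        self.w = np.zeros(n_basis)

    def _forcing(self, x, y0, g):
        psi = np.exp(-self.h * (x - self.c) ** 2)
        return (psi @ self.w) / (psi.sum() + 1e-10) * x * (g - y0)

    def imitate(self, y_demo, dt):
        """Fit the forcing-term weights to one demonstration (imitation learning)."""
        y_demo = np.asarray(y_demo, dtype=float)
        yd = np.gradient(y_demo, dt)
        ydd = np.gradient(yd, dt)
        y0, g = y_demo[0], y_demo[-1]
        x = np.exp(-self.alpha_x / self.tau * dt * np.arange(len(y_demo)))
        # Forcing term the demonstration implies, then one locally weighted fit per basis.
        f_target = self.tau ** 2 * ydd - self.alpha_z * (self.beta_z * (g - y_demo) - self.tau * yd)
        s = x * (g - y0)
        for i in range(self.n_basis):
            psi_i = np.exp(-self.h[i] * (x - self.c[i]) ** 2)
            self.w[i] = np.sum(psi_i * s * f_target) / (np.sum(psi_i * s ** 2) + 1e-10)
        return self

    def rollout(self, y0, g, dt, n_steps):
        """Integrate the primitive toward goal g with explicit Euler steps."""
        y, z, x = float(y0), 0.0, 1.0
        traj = []
        for _ in range(n_steps):
            f = self._forcing(x, y0, g)
            z += dt / self.tau * (self.alpha_z * (self.beta_z * (g - y) - z) + f)
            y += dt / self.tau * z
            x += dt / self.tau * (-self.alpha_x * x)
            traj.append(y)
        return np.array(traj)
```

Under these assumptions, a call such as DiscreteDMP().imitate(y_demo, dt=0.01).rollout(y_demo[0], g=0.6, dt=0.01, n_steps=len(y_demo)) would reproduce the demonstrated shape while generalizing to a new goal g; rhythmic tasks replace the decaying phase with a periodic one.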

Citations

Learning motor skills: from algorithms to robot experiments
  • J. Kober
  • Computer Science
  • it - Information Technology
  • 2012
It is shown how motor primitives can be employed to learn motor skills on three different levels, which contributes to the state of the art in reinforcement learning applied to robotics, both in terms of novel algorithms and applications.
Policy Search for Motor Primitives in Robotics
This paper extends previous work on policy learning from the immediate-reward case to episodic reinforcement learning, resulting in a general, common framework also connected to policy gradient methods and yielding a novel algorithm for policy learning that is particularly well-suited for dynamic motor primitives.
Towards Motor Skill Learning for Robotics
This paper proposes to break the generic skill learning problem into parts that are well understood from a robotics point of view and to design appropriate learning approaches for these basic components, which serve as the ingredients of a general approach to motor skill learning.
Learning Motor Skills - From Algorithms to Robot Experiments
This book illustrates a method that learns to generalize parameterized motor plans, which are obtained by imitation or reinforcement learning, by adapting a small set of global parameters, together with appropriate kernel-based reinforcement learning algorithms.
Policy search for motor primitives in robotics
A novel EM-inspired algorithm for policy learning that is particularly well-suited for dynamical-system motor primitives is introduced and applied in the context of motor learning; it can learn a complex Ball-in-a-Cup task on a real Barrett WAM™ robot arm.
Robot Skill Acquisition by Demonstration and Explorative Learning
The chapter presents a survey of methods used at the Humanoid and Cognitive Robotics Lab for robot skill acquisition and self-improvement of the learned skill. Initial demonstrations are parameterized…
Robot motor skill coordination with EM-based Reinforcement Learning
An approach allowing a robot to acquire new motor skills by learning the couplings across motor control variables through Expectation-Maximization-based Reinforcement Learning is presented.
Robot Skill Representation, Learning and Control with Probabilistic Movement Primitives
A novel movement primitive representation that not only models the shape of the movement but also its uncertainty in time is introduced, creating a mathematically sound framework that is capable of adapting skills to environmental changes as well as adapting the execution speed online.
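Probabilistic movement primitives of this kind represent a trajectory distribution through a Gaussian over basis-function weights, y_t = phi_t^T w with w ~ N(mu_w, Sigma_w), so adapting a skill to a new via-point reduces to Gaussian conditioning. The short Python sketch below shows only that conditioning step; the function name condition_promp, the single-DoF setting, and the observation-noise value are assumptions for illustration.

```python
import numpy as np

def condition_promp(mu_w, Sigma_w, phi_t, y_star, sigma_y=1e-4):
    """Condition a probabilistic movement primitive on a desired via-point.

    mu_w, Sigma_w -- mean and covariance of the weight distribution w ~ N(mu_w, Sigma_w)
    phi_t         -- basis-function feature vector evaluated at the via-point's time step
    y_star        -- desired position at that time step
    sigma_y       -- assumed observation noise of the via-point
    """
    mu_w = np.asarray(mu_w, dtype=float)
    Sigma_w = np.asarray(Sigma_w, dtype=float)
    phi = np.asarray(phi_t, dtype=float).reshape(-1, 1)   # column vector
    s = float(phi.T @ Sigma_w @ phi) + sigma_y            # predictive variance at that step
    gain = (Sigma_w @ phi) / s                            # Kalman-style gain
    mu_new = mu_w + gain.ravel() * (y_star - float(phi.T @ mu_w))
    Sigma_new = Sigma_w - gain @ (phi.T @ Sigma_w)
    return mu_new, Sigma_new
```

Sampling weight vectors from N(mu_new, Sigma_new) then yields trajectories that pass close to y_star while otherwise staying within the demonstrated variability.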
Skill learning and task outcome prediction for manipulation
This work presents a Reinforcement Learning-based approach to acquiring new motor skills from demonstration that allows the robot to learn fine manipulation skills and significantly improve its success rate and skill level starting from a possibly coarse demonstration.
Visual Imitation Learning for Robot Manipulation
Imitation learning has been successfully applied to solve a variety of tasks in complex domains where an explicit reward function is not available. However, most imitation learning methods require…

References

Showing 1-10 of 31 references
Policy Search for Motor Primitives in Robotics
This paper extends previous work on policy learning from the immediate-reward case to episodic reinforcement learning, resulting in a general, common framework also connected to policy gradient methods and yielding a novel algorithm for policy learning that is particularly well-suited for dynamic motor primitives.
Learning perceptual coupling for motor primitives
An augmented version of the dynamic system-based motor primitives which incorporates perceptual coupling to an external variable is proposed; it can perform complex tasks such as a Ball-in-a-Cup or Kendama task even with large variances in the initial conditions, where a skilled human player would be challenged.
Policy Gradient Methods for Robotics
  • Jan Peters, S. Schaal
  • Engineering, Computer Science
  • 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems
  • 2006
An overview of learning with policy gradient methods for robotics, with a strong focus on recent advances in the field, is given, and it is shown how the most recently developed methods can significantly improve learning performance.
Learning from demonstration: repetitive movements for autonomous service robotics
This paper presents a method for learning and generating rhythmic movement patterns based on a simple central oscillator. It can be used to generate cyclic movements for a robot system which has to…
Reinforcement learning for imitating constrained reaching movements
A system for teaching the robot constrained reaching tasks is described, based on a dynamical system generator modulated by a learned speed trajectory and combined with a reinforcement learning module that allows the robot to adapt the trajectory when facing a new situation, e.g., in the presence of obstacles.
Learning from demonstration and adaptation of biped locomotion
Dynamical movement primitives are suggested as a CPG of a biped robot, an approach previously proposed for learning and encoding complex human movements, to achieve natural human-like locomotion.
Learning Attractor Landscapes for Learning Motor Primitives
By nonlinearly transforming the canonical attractor dynamics using techniques from nonparametric regression, almost arbitrary new nonlinear policies can be generated without losing the stability properties of the canonical system.
Using Bayesian Dynamical Systems for Motion Template Libraries
This paper shows how human trajectories captured as multi-dimensional time-series can be clustered using Bayesian mixtures of linear Gaussian state-space models based on the similarity of their dynamics, and introduces a novel approximation method based on variational Bayes, especially designed to enable the use of efficient inference algorithms.
Control, Planning, Learning, and Imitation with Dynamic Movement Primitives
A comprehensive framework for motor control with movement primitives is presented, based on a recently developed theory of dynamic movement primitives (DMPs), whose time evolution creates smooth kinematic movement plans that can be incorporated into the time evolution of the differential equations.
A Kendama Learning Robot Based on Bi-directional Theory
A general theory of movement-pattern perception based on bi-directional theory for sensory-motor integration can be used for motion capture and learning by watching in robotics. We demonstrate our…