Learning feedback terms for reactive planning and control


With the advancement of robotics, machine learning, and machine perception, increasingly more robots will enter human environments to assist with daily tasks. However, dynamically changing human environments require reactive motion plans. Reactivity can be accomplished through re-planning, e.g. model-predictive control, or through a reactive feedback policy that modifies ongoing behavior in response to sensory events. In this paper, we investigate how to use machine learning to add reactivity to a previously learned nominal skilled behavior. We approach this by learning a reactive modification term for movement plans represented by nonlinear differential equations. In particular, we use dynamic movement primitives (DMPs) to represent a skill and a neural network to learn a reactive policy from human demonstrations. We use the well-explored domain of obstacle avoidance for robot manipulation as a test bed. Our approach demonstrates how a neural network can be combined with physical insights to ensure robust behavior across different obstacle settings and movement durations. Evaluations on an anthropomorphic robotic system demonstrate the effectiveness of our work.
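To make the idea of a learned feedback term concrete, the following is a minimal one-dimensional DMP integration sketch in which an additive coupling term modifies the nominal acceleration, in the spirit of the abstract. The paper's actual network architecture, feature inputs, and gain values are not reproduced here; `coupling` is a hypothetical stand-in for the learned reactive policy, and the standard DMP forcing term is omitted for brevity.

```python
import numpy as np

def integrate_dmp(x0, g, tau=1.0, dt=0.01, alpha=25.0, beta=6.25,
                  alpha_s=4.0, coupling=lambda s, x, v: 0.0):
    """Integrate a 1-D DMP transformation system from x0 toward goal g.

    Transformation system: tau * v_dot = alpha*(beta*(g - x) - v) + C
    Canonical system:      tau * s_dot = -alpha_s * s
    where C = coupling(s, x, v) is the (learned) reactive feedback term;
    here it defaults to zero, leaving the nominal critically-damped
    spring-damper behavior.
    """
    x, v, s = x0, 0.0, 1.0
    traj = [x]
    for _ in range(int(tau / dt)):
        a = (alpha * (beta * (g - x) - v) + coupling(s, x, v)) / tau
        v += a * dt
        x += (v / tau) * dt
        s += (-alpha_s * s / tau) * dt  # phase variable decays to 0
        traj.append(x)
    return np.array(traj)

# With zero coupling, the system converges smoothly to the goal.
traj = integrate_dmp(x0=0.0, g=1.0)
print(abs(traj[-1] - 1.0) < 0.05)
```

A learned obstacle-avoidance policy would replace the zero-valued `coupling`, pushing the trajectory away from obstacles while the underlying attractor dynamics still guarantee convergence to the goal once the perturbation vanishes.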

DOI: 10.1109/ICRA.2017.7989252


Cite this paper

@article{Rai2017LearningFT,
  title   = {Learning feedback terms for reactive planning and control},
  author  = {Akshara Rai and Giovanni Sutanto and Franziska Meier and Stefan Schaal},
  journal = {2017 IEEE International Conference on Robotics and Automation (ICRA)},
  year    = {2017},
  pages   = {2184-2191}
}