A system for learning continuous human-robot interactions from human-human demonstrations

Abstract

We present a data-driven imitation learning system that learns human-robot interactions from human-human demonstrations. During training, the movements of two human interaction partners are recorded via motion capture and an interaction model is learned from the data. At runtime, the interaction model is used to continuously adapt the robot's motion, both spatially and temporally, to the movements of its human interaction partner. We demonstrate the effectiveness of the approach on complex, sequential tasks in two applications involving collaborative human-robot assembly. Experiments with varied object hand-over positions and task execution speeds confirm the system's capability for spatio-temporal adaptation of the demonstrated behavior to the current situation.
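The interaction model itself is not reproduced here. As a rough illustration only, the Python sketch below shows one simple way continuous spatio-temporal adaptation from a single recorded human-human demonstration can be realized: a windowed nearest-neighbour search tracks the current phase of the interaction (temporal adaptation), and the demonstrated robot pose is shifted by the observed human's deviation from the demonstration (spatial adaptation). All names here (estimate_phase, adapt_step) and the nearest-neighbour phase tracker are assumptions for illustration, not the interaction model described in the paper.

    # Hypothetical sketch: spatio-temporal adaptation from one demonstration.
    # Not the paper's method; a minimal stand-in for illustration only.
    import numpy as np

    def estimate_phase(human_obs, human_demo, prev_phase, window=10):
        """Temporal adaptation: find the demo time step ("phase") whose
        human pose best matches the current observation, searching only a
        small window ahead of the previous phase so time stays monotone."""
        lo = prev_phase
        hi = min(len(human_demo), prev_phase + window)
        dists = np.linalg.norm(human_demo[lo:hi] - human_obs, axis=1)
        return lo + int(np.argmin(dists))

    def adapt_step(human_obs, human_demo, robot_demo, prev_phase):
        """One control step: estimate the phase, then shift the demonstrated
        robot pose by the human's spatial offset from the demonstration."""
        phase = estimate_phase(human_obs, human_demo, prev_phase)
        offset = human_obs - human_demo[phase]      # spatial deviation
        robot_target = robot_demo[phase] + offset   # mirrored onto the robot
        return robot_target, phase

    # Toy usage: straight-line demo; the human executes it shifted and slower.
    T = 100
    human_demo = np.linspace([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], T)
    robot_demo = np.linspace([0.0, 1.0, 0.0], [1.0, 1.0, 0.0], T)
    phase = 0
    for t in range(0, T, 2):  # human moves at half the demonstrated speed
        human_obs = human_demo[t // 2] + np.array([0.0, 0.05, 0.0])
        target, phase = adapt_step(human_obs, human_demo, robot_demo, phase)

Restricting the search to a small window ahead of the previous phase keeps the estimated time index monotone, so the adapted robot motion cannot jump backwards in the task even when the human pauses.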

DOI: 10.1109/ICRA.2017.7989334


Cite this paper

@inproceedings{Vogt2017ASF,
  title     = {A system for learning continuous human-robot interactions from human-human demonstrations},
  author    = {David Vogt and Simon Stepputtis and Steve Grehl and Bernhard Jung and Heni Ben Amor},
  booktitle = {2017 IEEE International Conference on Robotics and Automation (ICRA)},
  year      = {2017},
  pages     = {2882-2889}
}