Kinesthetic Bootstrapping: Teaching Motor Skills to Humanoid Robots through Physical Interaction

@inproceedings{Amor2009KinestheticBT,
  title={Kinesthetic Bootstrapping: Teaching Motor Skills to Humanoid Robots through Physical Interaction},
  author={Heni Ben Amor and Erik Berger and David Vogt and Bernhard Jung},
  booktitle={KI},
  year={2009}
}
Programming complex motor skills for humanoid robots can be a time-intensive task, particularly within conventional textual or GUI-driven programming paradigms. Addressing this drawback, we propose a new programming-by-demonstration method called Kinesthetic Bootstrapping for teaching motor skills to humanoid robots by means of intuitive physical interactions. Here, "programming" simply consists of manually moving the robot's joints so as to demonstrate the skill in mind. The bootstrapping…
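As a rough illustration of the record-and-replay loop implied by kinesthetic teaching, the sketch below records joint angles while a human guides the compliant robot and later replays them. The `robot` interface (`set_compliant`, `read_joint_angles`, `command_joint_angles`) is a hypothetical API invented for this example, not the paper's software.

```python
import time

def record_demonstration(robot, duration_s=5.0, rate_hz=50):
    """Record joint angles while a human physically guides the robot.

    `robot` is a hypothetical interface; method names are illustrative only.
    """
    robot.set_compliant(True)          # let the teacher move the joints freely
    trajectory = []
    dt = 1.0 / rate_hz
    for _ in range(int(duration_s * rate_hz)):
        trajectory.append(robot.read_joint_angles())  # joint angles [rad]
        time.sleep(dt)
    robot.set_compliant(False)
    return trajectory

def replay_demonstration(robot, trajectory, rate_hz=50):
    """Replay the recorded joint-space trajectory under position control."""
    dt = 1.0 / rate_hz
    for q in trajectory:
        robot.command_joint_angles(q)
        time.sleep(dt)
```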
Upper-body kinesthetic teaching of a free-standing humanoid robot
TLDR
An integrated approach allowing a free-standing humanoid robot to acquire new motor skills by kinesthetic teaching and a hybrid position/force controller to apply the learned trajectories in terms of positions and forces to the end effector is presented.
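A hybrid position/force controller of the kind mentioned above can be sketched, in its generic textbook form, as a selection between position-controlled and force-controlled task-space axes; this is not the paper's implementation, and all gains are placeholder values.

```python
import numpy as np

def hybrid_position_force_command(x, x_des, f, f_des, selection,
                                  kp_pos=50.0, kp_force=0.05):
    """Task-space velocity command mixing position- and force-controlled axes.

    `selection` is a 0/1 vector: 1 = position-controlled axis,
    0 = force-controlled axis. Gains are illustrative placeholders.
    """
    S = np.diag(selection)             # e.g. [1, 1, 0]: z axis force-controlled
    pos_cmd = kp_pos * (np.asarray(x_des) - np.asarray(x))
    force_cmd = kp_force * (np.asarray(f_des) - np.asarray(f))
    return S @ pos_cmd + (np.eye(len(selection)) - S) @ force_cmd
```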
Humanoid robot motion creation based on touch interpretation: A new programming paradigm
TLDR
In order to ease the teaching process, this thesis proposes a new paradigm, termed "teaching by touching", which consists in having the robot interpret the meaning of the tactile instructions it receives and move based on its own understanding of the user's will, instead of limiting its behavior to a mere passive movement.
Trajectories and keyframes for kinesthetic teaching: A human-robot interaction perspective
TLDR
This paper considers an alternative, keyframe demonstrations, in which the human provides a sparse set of consecutive keyframes that can be connected to perform the skill and introduces a hybrid method that combines trajectories and keyframes in a single demonstration.
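To illustrate the keyframe idea, a sparse set of demonstrated poses can be turned back into a dense trajectory by interpolation; the sketch below uses plain linear interpolation and is not the hybrid method proposed in the paper.

```python
import numpy as np

def keyframes_to_trajectory(keyframes, times, rate_hz=50):
    """Linearly interpolate sparse joint-space keyframes into a dense trajectory.

    keyframes: (K, n_joints) array of demonstrated poses
    times:     (K,) monotonically increasing timestamps [s]
    """
    keyframes = np.asarray(keyframes)
    times = np.asarray(times)
    t_dense = np.arange(times[0], times[-1], 1.0 / rate_hz)
    return np.stack([np.interp(t_dense, times, keyframes[:, j])
                     for j in range(keyframes.shape[1])], axis=1)
```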
Human-robot interface for instructing industrial tasks using kinesthetic teaching
TLDR
A human-robot interface based on task-level programming and kinesthetic teaching is presented; in the evaluation, several improvements to the HRI are proposed, while the underlying concept is found to simplify the programming of industrial tasks and thus make it accessible to the production-floor operator.
Usability of force-based controllers in physical human-robot interaction
TLDR
There is a need for a trade-off between the conflicting goals of naturalness of motion and positioning accuracy in physical human-robot interaction, in the context of a human physically guiding a robot through the desired set of motions.
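One common force-based guidance scheme is admittance control, where measured interaction forces are mapped to commanded motion; lower virtual damping feels easier and more natural to guide but holds position less stiffly, which is one way to read the trade-off noted above. This is a generic sketch, not one of the controllers evaluated in the paper, and the parameter values are placeholders.

```python
import numpy as np

def admittance_step(v, f_ext, dt, virtual_mass=2.0, virtual_damping=20.0):
    """One integration step of a simple admittance law M*dv/dt + D*v = f_ext.

    Smaller `virtual_damping` lets the human move the robot with less effort
    (more natural) but lets small forces displace it further (less accurate).
    """
    v = np.asarray(v, dtype=float)
    f_ext = np.asarray(f_ext, dtype=float)
    dv = (f_ext - virtual_damping * v) / virtual_mass
    return v + dv * dt   # new commanded end-effector velocity
```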
Learning of compliant human–robot interaction using full-body haptic interface
TLDR
A novel approach in which a human demonstrator can intuitively teach a robot full-body skills is presented, together with a method that transforms the robot's sensory readings into feedback appropriate for the human.
Teaching robots to cooperate with humans in dynamic manipulation tasks based on multi-modal human-in-the-loop approach
TLDR
An approach to efficiently teach robots to perform dynamic manipulation tasks in cooperation with a human partner, exploiting human sensorimotor learning: the human tutor controls the robot through a multi-modal interface to make it perform the desired task.
Cooperative human-robot control based on Fitts' law
TLDR
A novel method is proposed for on-line adaptation of robotic trajectories, where humans and robots are autonomous agents coupled through physical interaction, for example through a manipulated object.
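For reference, Fitts' law predicts human movement time from target distance D and width W via an index of difficulty, and such a prediction can be used to modulate the robot's contribution along a shared trajectory. The sketch below uses the Shannon formulation; the coefficients are placeholders, not values from the paper.

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time via the Shannon formulation of Fitts' law.

    MT = a + b * log2(D / W + 1); a and b are empirically fitted per user/task.
    """
    index_of_difficulty = math.log2(distance / width + 1.0)
    return a + b * index_of_difficulty
```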
Shared Control for Human-Robot Cooperative Manipulation Tasks
TLDR
An approach is proposed in which a model of the human's speed-accuracy trade-off is exploited by the robotic partner to improve performance in a human-robot cooperative setup.

References

Learning to Walk through Imitation
TLDR
This paper provides the first demonstration that a humanoid robot can learn to walk directly by imitating a human gait obtained from motion capture (mocap) data, and proposes a new model-free approach to tractable imitation-based learning in humanoids.
Dynamical System Modulation for Robot Learning via Kinesthetic Demonstrations
TLDR
This system allows a robot to learn a simple goal-directed gesture and correctly reproduce it despite changes in the initial conditions and perturbations in the environment and provides a solution to the inverse kinematics problem when dealing with a redundant manipulator.
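A goal-directed dynamical system of the kind referred to above can be illustrated by a damped spring attractor toward the goal, optionally modulated by a learned forcing term; this is a generic sketch in the spirit of such systems, not the paper's exact model, and the gains are illustrative.

```python
import numpy as np

def attractor_step(x, v, goal, dt, k=25.0, d=10.0, forcing=0.0):
    """One Euler step of v' = k*(goal - x) - d*v + forcing, x' = v.

    The system converges to `goal` from arbitrary start states and recovers
    from perturbations; `forcing` would encode the demonstrated motion shape.
    """
    a = k * (np.asarray(goal) - np.asarray(x)) - d * np.asarray(v) + forcing
    v_new = np.asarray(v) + a * dt
    x_new = np.asarray(x) + v_new * dt
    return x_new, v_new
```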
Codevelopmental Learning Between Human and Humanoid Robot Using a Dynamic Neural-Network Model
TLDR
Experimental results and the analyses showed that codevelopmental shaping of task behaviors stems from interactions between the robot and a tutor, and dynamic structures for articulating and sequencing of behavior primitives are self-organized in the hierarchically organized network.
Imitation in Animals and Artifacts
The effort to explain the imitative abilities of humans and other animals draws on fields as diverse as animal behavior, artificial intelligence, computer science, comparative psychology, …
curlybot: designing a new class of computational toys
TLDR
An educational toy, called curlybot, is introduced as the basis for a new class of toys aimed at children in their early stages of development — ages four and up, which can use curlybot to develop intuitions for advanced mathematical and computational concepts through play away from a traditional computer.
From turtles to Tangible Programming Bricks: explorations in physical language design
TLDR
The article describes and discusses the author’s own research into tangible programming, culminating in the development of the Tangible Programming Bricks system—a platform for creating microworlds for children to explore computation and scientific thinking.
The correspondence problem
In: Imitation in Animals and Artifacts, 2002