Using dialog and human observations to dictate tasks to a learning robot assistant
For humanoid robots to be accepted as partners with humans, they will be expected to learn quickly and to adapt to environmental changes in ways similar to humans. Within a single task, many things can change as a human interacts with the robot: the placement of items in the workspace, the physics of the environment, and the human's reaction to the robot's movements are examples of conditions that can vary during the interaction. This paper presents a method by which humanoid robots can quickly learn new dynamic tasks from observing others and from practice. We also describe ways in which the robot can adapt to initial and slowly changing environment conditions. Agents are given domain knowledge in the form of task primitives. A key element of our approach is to break a complex learning problem into as many simple learning problems as possible. This process of "divide and conquer" is limited only by the measurements available to the robot. We present a case study of a humanoid robot learning to play air hockey.
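The "divide and conquer" decomposition into task primitives might be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the class names, primitive names, and the nearest-neighbor lookup are all assumptions chosen for brevity.

```python
# Illustrative sketch (all names are assumptions, not from the paper):
# rather than one monolithic learner, each task primitive gets its own
# small learner over the measurements relevant to that primitive.

class PrimitiveLearner:
    """Learns one simple mapping (e.g. puck state -> paddle action)
    by nearest-neighbor lookup over observed examples."""
    def __init__(self, name):
        self.name = name
        self.examples = []  # list of (state, action, outcome) triples

    def observe(self, state, action, outcome):
        # store an example gathered from observation or practice
        self.examples.append((state, action, outcome))

    def predict(self, state):
        # return the action of the nearest stored state
        # (states are scalars here purely for brevity)
        nearest = min(self.examples, key=lambda e: abs(e[0] - state))
        return nearest[1]

class TaskAgent:
    """Dispatches to the learner for whichever primitive applies,
    so each sub-problem stays small and simple."""
    def __init__(self, primitives):
        self.learners = {p: PrimitiveLearner(p) for p in primitives}

    def act(self, primitive, state):
        return self.learners[primitive].predict(state)

agent = TaskAgent(["straight_shot", "bank_shot", "defend"])
agent.learners["straight_shot"].observe(0.2, "hit_toward_left", "goal")
agent.learners["straight_shot"].observe(0.8, "hit_toward_right", "miss")
print(agent.act("straight_shot", 0.3))  # → hit_toward_left
```

The point of the decomposition is that each `PrimitiveLearner` only has to generalize over the few measurements relevant to its primitive, which is what lets learning proceed quickly from a handful of observations.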