Publications
Policy Shaping: Integrating Human Feedback with Reinforcement Learning
This paper introduces Advise, a Bayesian approach that maximizes the information gained from human feedback by interpreting it as direct labels on the policy, and shows that Advise outperforms state-of-the-art approaches while remaining robust to infrequent and inconsistent human feedback.
Effects of nonverbal communication on efficiency and robustness in human-robot teamwork
Both self-reports via questionnaire and behavioral analysis of video support the hypothesis that implicit nonverbal communication improves human-robot task performance in terms of the robot's understandability, the efficiency of task execution, and robustness to errors arising from miscommunication.
Reinforcement Learning with Human Teachers: Evidence of Feedback and Guidance with Implications for Learning Performance
This work demonstrates the importance of understanding the human-teacher/robot-learner system as a whole in order to design algorithms that support how people want to teach while also improving the robot's learning performance.
Trajectories and keyframes for kinesthetic teaching: A human-robot interaction perspective
This paper considers an alternative, keyframe demonstrations, in which the human provides a sparse set of consecutive keyframes that can be connected to perform the skill, and introduces a hybrid method that combines trajectories and keyframes in a single demonstration.
Designing robot learners that ask good questions
  • M. Cakmak, A. Thomaz
  • Computer Science
    7th ACM/IEEE International Conference on Human…
  • 5 March 2012
This paper identifies three types of questions (label, demonstration, and feature queries), discusses how a robot can use them while learning new skills, and provides guidelines for designing question-asking behaviors on a robot learner.
Robot Learning from Human Teachers
This book provides an introduction to the field with a focus on the unique technical challenges associated with designing robots that learn from naive human teachers, and provides best practices for evaluation of LfD systems.
Tutelage and socially guided robot learning
  • A. Thomaz, C. Breazeal
  • Computer Science
    IEEE/RSJ International Conference on Intelligent…
  • 28 September 2004
A learning mechanism is presented, implemented on a humanoid robot, to demonstrate that a collaborative dialog framework allows a robot to efficiently learn a task from a human, generalize this ability to a new task configuration, and show commitment to the overall goal of the learned task.
Keyframe-based Learning from Demonstration
Keyframe-based Learning from Demonstration (KLfD) performs similarly to existing LfD techniques when applied to conventional trajectory demonstrations, and may be preferable when the demonstration type is suited to the skill.
Teaching and working with robots as a collaboration
This work uses collaborative discourse with accompanying gestures and social cues to teach a humanoid robot a structurally complex task, enabling the robot to dynamically mesh its plans with those of its partner according to the relative capabilities of the human and the robot.