I Know What You Meant: Learning Human Objectives by (Under)estimating Their Choice Set

@inproceedings{Jonnavittula2021IKW,
  title={I Know What You Meant: Learning Human Objectives by (Under)estimating Their Choice Set},
  author={Ananth Jonnavittula and Dylan P. Losey},
  booktitle={2021 IEEE International Conference on Robotics and Automation (ICRA)},
  year={2021},
  pages={2747--2753}
}
Assistive robots have the potential to help people perform everyday tasks. However, these robots first need to learn what their user wants them to do. Teaching assistive robots is hard for inexperienced users, elderly users, and users living with physical disabilities, since these individuals are often unable to show the robot their desired behavior. We know that inclusive learners should give human teachers credit for what they cannot demonstrate. But today's robots do the opposite: they…
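
The premise in the title (give teachers credit by under-estimating the set of demonstrations they could have provided) can be illustrated with a small Bayesian reward-inference sketch. This is a minimal illustration under a standard Boltzmann-rational choice model, not the paper's implementation; every feature value, hypothesis, and choice set below is invented for the example.

import numpy as np

# Minimal sketch (not the authors' code): linear rewards over trajectory
# features, and a Boltzmann-rational human who picks their demonstration from
# whatever choice set the robot *assumes* they could have provided.

def choice_likelihood(phi_demo, phi_choice_set, theta, beta=5.0):
    """P(human shows this demo | reward weights theta, assumed choice set)."""
    rewards = phi_choice_set @ theta
    return np.exp(beta * phi_demo @ theta) / np.sum(np.exp(beta * rewards))

def reward_posterior(phi_demo, phi_choice_set, thetas):
    """Uniform prior over candidate reward weights, updated by one demo."""
    likes = np.array([choice_likelihood(phi_demo, phi_choice_set, th)
                      for th in thetas])
    return likes / likes.sum()

# Illustrative hypotheses over two features (e.g., reach accuracy, low effort).
thetas = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.3]])
phi_demo = np.array([0.6, 0.3])          # an imperfect demonstration

# Over-estimated choice set: the robot assumes near-optimal demos were possible,
# so it takes the imperfect demo literally.
large_set = np.array([phi_demo, [1.0, 0.0], [0.0, 1.0], [0.9, 0.8]])
# Under-estimated choice set: the robot only credits demos no harder than the
# one it saw, so it reads the demo as the best this teacher could do.
small_set = np.array([phi_demo, [0.4, 0.2], [0.3, 0.3]])

print(reward_posterior(phi_demo, large_set, thetas))
print(reward_posterior(phi_demo, small_set, thetas))

With the under-estimated choice set the posterior stays flatter: the robot does not over-commit to the literal reading of an imperfect demonstration, which is the inclusive-learning intuition the abstract describes.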

Citations

Here’s What I’ve Learned: Asking Questions that Reveal Reward Learning
TLDR
It is found that robots which consider the human’s point of view learn just as quickly as state-of-the-art baselines while also communicating what they have learned to the human operator.
Theory of Mind and Preference Learning at the Interface of Cognitive Science, Neuroscience, and AI: A Review
TLDR
This review synthesizes the existing understanding of ToM in the cognitive sciences and neurosciences and the computational ToM models proposed in AI, with a focus on preference learning as an area of particular interest and on the most recent neurocognitive and computational ToM models.
Physical Interaction as Communication: Learning Robot Objectives Online from Human Corrections
TLDR
This article argues that when pHRI is intentional it is also informative: the robot can leverage interactions to learn how it should complete the rest of its current task even after the person lets go. The proposed approach also improves the efficiency of robot learning from pHRI by reducing unintended learning.

References

Showing 1-10 of 34 references
Quantifying Hypothesis Space Misspecification in Learning From Human–Robot Demonstrations and Physical Corrections
TLDR
It is posited that the robot should reason explicitly about how well it can explain human inputs given its hypothesis space and use that situational confidence to inform how it should incorporate the human input.
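A compact way to see this idea (one simple proxy under assumed names and numbers, not the paper's estimator) is to score the observed input by the best explanation available in the hypothesis space, then scale the update by that score:

import numpy as np

# Sketch of the intuition only: if no reward hypothesis explains the human's
# correction well, the robot should trust (and learn from) it less.

def explanation_confidence(phi_input, phi_alternatives, thetas, beta=5.0):
    """Best Boltzmann probability of the observed input across all hypotheses."""
    best = 0.0
    for th in thetas:
        num = np.exp(beta * (phi_input @ th))
        den = num + np.sum(np.exp(beta * (phi_alternatives @ th)))
        best = max(best, num / den)
    return best

def confidence_weighted_update(prior, likelihoods, confidence):
    """Interpolate between a full Bayesian update and ignoring the input."""
    posterior = prior * likelihoods
    posterior = posterior / posterior.sum()
    return confidence * posterior + (1.0 - confidence) * prior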
Learning from Physical Human Corrections, One Feature at a Time
TLDR
The approach allows the human-robot team to focus on learning one feature at a time, unlike state-of-the-art techniques that update all features at once; user studies suggest that people who teach one feature at a time perform better, especially in tasks that require changing multiple features.
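As a rough sketch of the mechanism (the feature-selection rule and step size here are assumptions for illustration, not the paper's update):

import numpy as np

# Illustrative one-feature-at-a-time update: after a physical correction,
# change only the reward weight whose feature the correction moved the most.

def one_feature_update(theta, phi_before, phi_after, alpha=0.1):
    delta = phi_after - phi_before           # how the correction moved each feature
    i = int(np.argmax(np.abs(delta)))        # most-affected feature
    theta = theta.copy()
    theta[i] += alpha * delta[i]             # update that single weight
    return theta

# Example: the correction mostly changed feature 0 (e.g., distance to table).
print(one_feature_update(np.array([0.5, 0.5]), np.array([0.2, 0.7]),
                         np.array([0.6, 0.68])))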
Donut as I do: Learning from failed demonstrations
TLDR
Instead of maximizing the similarity of generated behaviors to those of the demonstrators, this work examines two methods that deliberately avoid repeating the human's mistakes.
Active Preference-Based Learning of Reward Functions
TLDR
Building on work in label ranking, this work proposes to learn from preferences (or comparisons) instead: the person provides a relative preference between two trajectories, and the system takes an active learning approach, deciding which preference queries to make.
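A minimal sketch of this loop, assuming linear rewards over trajectory features and a softmax (Bradley-Terry-style) preference model; the query-selection heuristic below is one simple choice, not necessarily the paper's criterion:

import numpy as np

# Preference-based reward learning sketch: the person compares two trajectories,
# and the robot maintains a distribution over linear reward weights.

def prefer_A_probability(phi_a, phi_b, theta, beta=5.0):
    """P(person prefers trajectory A over B | reward weights theta)."""
    return 1.0 / (1.0 + np.exp(-beta * (phi_a - phi_b) @ theta))

def update_posterior(posterior, thetas, phi_a, phi_b, person_chose_a):
    likes = np.array([prefer_A_probability(phi_a, phi_b, th) for th in thetas])
    likes = likes if person_chose_a else 1.0 - likes
    posterior = posterior * likes
    return posterior / posterior.sum()

def pick_query(posterior, thetas, candidate_pairs):
    """Actively pick the pair the current posterior is least sure about."""
    def ambiguity(pair):
        phi_a, phi_b = pair
        p = np.sum(posterior * np.array(
            [prefer_A_probability(phi_a, phi_b, th) for th in thetas]))
        return -abs(p - 0.5)                  # closest to a coin flip wins
    return max(candidate_pairs, key=ambiguity)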
Where Do You Think You're Going?: Inferring Beliefs about Dynamics from Behavior
TLDR
This paper models suboptimal behavior as the result of internal model misspecification and demonstrates that this approach enables more accurate modeling of human intent; it can be used in a variety of applications, including offering assistance in a shared autonomy framework and inferring human preferences.
Choice Set Misspecification in Reward Inference
TLDR
This work introduces the idea that the choice set itself might be difficult to specify, and analyzes choice set misspecification: what happens as the robot makes incorrect assumptions about the set of choices from which the human selects their feedback.
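Written out (standard Boltzmann-rational notation; the symbols are ours, not necessarily the paper's), the human draws feedback c from their true choice set C_H, while the robot normalizes over an assumed set C_R:

P(c \mid \theta, C_H) = \frac{\exp(\beta R_\theta(c))}{\sum_{c' \in C_H} \exp(\beta R_\theta(c'))},
\qquad
\hat{P}(\theta \mid c) \propto P(\theta)\, \frac{\exp(\beta R_\theta(c))}{\sum_{c' \in C_R} \exp(\beta R_\theta(c'))}.

Only the normalizer changes with the assumed choice set, so the same observed feedback can favor different rewards whenever C_R differs from C_H.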
An Algorithmic Perspective on Imitation Learning
TLDR
This work provides an introduction to imitation learning, dividing imitation learning into directly replicating desired behavior and learning the hidden objectives of the desired behavior from demonstrations (called inverse optimal control or inverse reinforcement learning [Russell, 1998]).
Better-than-Demonstrator Imitation Learning via Automatically-Ranked Demonstrations
TLDR
D-REX is the first imitation learning approach to achieve significant extrapolation beyond the demonstrator's performance without additional side-information or supervision, such as rewards or human preferences, and shows that preference-based inverse reinforcement learning can be applied in traditional imitation learning settings where only unlabeled demonstrations are available.
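The recipe can be sketched in a few lines (the feature values and step size below are invented for illustration): rollouts generated with more injected noise are assumed to be worse, and those automatic rankings stand in for human preference labels.

import numpy as np

# D-REX-style sketch: noisier rollouts are assumed worse, giving free ranked
# pairs for preference-based reward learning (no human labels needed).

rng = np.random.default_rng(0)

# Pretend feature sums of rollouts at increasing noise levels (best -> worst).
ranked_features = [np.array([1.0, 0.9]), np.array([0.7, 0.6]),
                   np.array([0.4, 0.5]), np.array([0.1, 0.2])]

theta = np.zeros(2)                              # linear reward weights
for _ in range(200):                             # pairwise logistic-loss updates
    i, j = sorted(rng.choice(len(ranked_features), size=2, replace=False))
    better, worse = ranked_features[i], ranked_features[j]
    p = 1.0 / (1.0 + np.exp(-(better - worse) @ theta))
    theta += 0.5 * (1.0 - p) * (better - worse)  # gradient of the log-likelihood

print(theta)   # learned weights score lower-noise (better) rollouts higher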
Learning the Preferences of Ignorant, Inconsistent Agents
TLDR
A behavioral experiment is presented in which human subjects perform preference inference given the same observations of choices as the model; the results show that human subjects explain choices in terms of systematic deviations from optimal behavior and suggest that they take such deviations into account when inferring preferences.
Eye-Hand Behavior in Human-Robot Shared Manipulation
TLDR
This work conducts a data collection study that uses an eye tracker to record eye gaze during a human-robot shared manipulation activity, both with and without shared autonomy assistance, and lays a foundation for a model of natural human eye gaze during human-robot shared manipulation.