Publications
CHOMP: Covariant Hamiltonian optimization for motion planning
In this paper, we present CHOMP (covariant Hamiltonian optimization for motion planning), a method for trajectory optimization invariant to reparametrization. CHOMP uses functional gradient techniques to iteratively improve the quality of an initial trajectory.
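The covariant update at the heart of this approach can be sketched in a few lines of numpy. This is a sketch, not the paper's implementation: a toy obstacle gradient and the standard finite-difference smoothness metric stand in for the workspace cost, and all names and constants below are ours:

```python
import numpy as np

def chomp_step(xi, obstacle_grad, A, eta=0.1):
    """One covariant update: xi <- xi - eta * A^(-1) * grad U(xi).
    Preconditioning by the smoothness metric A measures the step in
    the trajectory norm rather than waypoint by waypoint."""
    grad = A @ xi + obstacle_grad            # smoothness + obstacle terms
    return xi - eta * np.linalg.solve(A, grad)

# Discretized 2-D trajectory and finite-difference smoothness metric.
n = 50
K = np.eye(n) - np.eye(n, k=-1)              # first-difference operator
A = K.T @ K
xi = np.linspace([0.0, 0.0], [1.0, 1.0], n)
start, goal = xi[0].copy(), xi[-1].copy()

for _ in range(100):
    dist = np.linalg.norm(xi, axis=1, keepdims=True) + 1e-9
    # Toy obstacle at the origin: inside radius 0.5 the cost gradient
    # points toward the obstacle, so the update pushes waypoints away.
    obstacle_grad = np.where(dist < 0.5, -xi / dist, 0.0)
    xi = chomp_step(xi, obstacle_grad, A)
    xi[0], xi[-1] = start, goal              # endpoints stay pinned
```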
A policy-blending formalism for shared control
This work proposes an intuitive formalism that captures assistance as policy blending, illustrates how some of the existing techniques for shared control instantiate it, and provides a principled analysis of its main components: prediction of user intent and its arbitration with the user input.
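A minimal sketch of the blending idea: the executed command is an arbitration-weighted combination of the user's input and the robot's assistive policy. The linear blend and the confidence-based weight below are one common instantiation, not the paper's only form:

```python
import numpy as np

def blend(u_user, u_robot, confidence):
    """Shared-control arbitration as a policy blend: the arbitration
    weight grows with the predictor's confidence in the inferred goal."""
    alpha = np.clip(confidence, 0.0, 1.0)    # hypothetical mapping
    return alpha * u_robot + (1.0 - alpha) * u_user

# The predictor is 80% sure of the user's goal, so the robot's
# assistive velocity dominates the blended command.
u = blend(np.array([0.2, 0.0]), np.array([0.0, 0.5]), confidence=0.8)
```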
Legibility and predictability of robot motion
The findings indicate that for robots to seamlessly collaborate with humans, they must change the way they plan their motion, and a formalism to mathematically define and distinguish predictability and legibility of motion is developed.
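The formalism rests on a model of how an observer infers the goal from a partial trajectory. A sketch of that inference, assuming a cost-based action-interpretation model (the function and variable names are ours):

```python
import numpy as np

def goal_posterior(snippet_cost, cost_to_go, cost_from_start, prior):
    """Observer's inference P(G | trajectory so far):

        P(G | xi) ∝ exp(-C(xi) - V_G(end)) / exp(-V_G(start)) * P(G)

    where C is the cost of the observed snippet and V_G the optimal
    cost for goal G. Legibility asks how quickly this posterior
    concentrates on the true goal; predictability asks how close the
    full trajectory is to the observer's expected (lowest-cost) one."""
    logp = -(snippet_cost + cost_to_go) + cost_from_start + np.log(prior)
    p = np.exp(logp - logp.max())            # stabilized softmax
    return p / p.sum()

# Two candidate goals: from the current waypoint, goal 0 is much cheaper
# to reach, so the observer's posterior shifts toward it early on.
print(goal_posterior(snippet_cost=2.0,
                     cost_to_go=np.array([1.0, 3.0]),
                     cost_from_start=np.array([2.5, 2.5]),
                     prior=np.array([0.5, 0.5])))
```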
Cooperative Inverse Reinforcement Learning
It is shown that computing optimal joint policies in CIRL games can be reduced to solving a POMDP, it is proved that optimality in isolation is suboptimal in CIRL, and an approximate CIRL algorithm is derived.
Toward seamless human-robot handovers
A coordination structure for human-robot handovers is proposed that considers the physical and social-cognitive aspects of the interaction separately and describes how people approach, reach out their hands, and transfer objects while simultaneously coordinating the what, when, and where of handovers.
Active Preference-Based Learning of Reward Functions
This work builds on label ranking and proposes to learn from preferences (or comparisons) instead: the person provides the system a relative preference between two trajectories, and the system takes an active learning approach, deciding which preference queries to make.
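One way to make the update concrete: maintain samples over reward weights and reweight them after each answer under a logistic choice model. The likelihood and the resampling step below are common choices for this setting, assumed rather than taken from the paper:

```python
import numpy as np

def preference_update(W, feat_a, feat_b, answer, rng):
    """Reweight sampled reward parameters after one preference query.

    W:        (m, k) samples from the current belief over weights w,
              with reward(trajectory) = w . features(trajectory).
    feat_a/b: (k,) feature vectors of the two compared trajectories.
    answer:   +1 if the person preferred A, -1 if they preferred B.
    """
    margin = answer * (W @ (feat_a - feat_b))
    like = 1.0 / (1.0 + np.exp(-margin))     # logistic choice model
    like /= like.sum()
    return W[rng.choice(len(W), size=len(W), p=like)]

# Active querying would pick (feat_a, feat_b) to shrink this belief
# fastest; here we just apply a single update.
rng = np.random.default_rng(0)
W = rng.normal(size=(1000, 3))
W = preference_update(W, np.array([1.0, 0.0, 0.5]),
                      np.array([0.0, 1.0, 0.5]), +1, rng)
```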
Planning for Autonomous Cars that Leverage Effects on Human Actions
The user study results suggest that the robot is indeed capable of eliciting desired changes in human state by planning using this dynamical system, in which the robot’s actions have immediate consequences on the state of the car, but also on human actions.
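The planning idea reduces to a nested optimization: the robot evaluates each candidate action under the human action its internal model predicts in response. A toy discrete sketch, assuming a best-response human model (the rewards and names here are invented for illustration):

```python
def human_response(u_robot, human_actions, human_reward):
    """Predicted human action: best response under the modeled reward."""
    return max(human_actions, key=lambda u_h: human_reward(u_robot, u_h))

def plan(robot_actions, human_actions, robot_reward, human_reward):
    """Pick the robot action whose induced joint outcome scores best."""
    def value(u_r):
        u_h = human_response(u_r, human_actions, human_reward)
        return robot_reward(u_r, u_h)
    return max(robot_actions, key=value)

# Toy merge: if the robot nudges forward (1), the modeled human yields (0),
# an outcome the robot prefers to waiting; both moving costs both agents.
best = plan(robot_actions=[0, 1], human_actions=[0, 1],
            robot_reward=lambda u_r, u_h: u_r - 2 * (u_r * u_h),
            human_reward=lambda u_r, u_h: u_h - 2 * (u_r * u_h))
print(best)  # -> 1
```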
DART: Noise Injection for Robust Imitation Learning
A new algorithm is proposed, DART (Disturbances for Augmenting Robot Trajectories), that collects demonstrations with injected noise, and optimizes the noise level to approximate the error of the robot's trained policy during data collection.
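A sketch of the two pieces, assuming hypothetical `env_step`, `supervisor`, and `policy` callables; the isotropic noise fit is a simplification of the paper's covariance optimization:

```python
import numpy as np

def collect_noisy_demo(env_step, supervisor, x0, horizon, sigma, rng):
    """Roll out the supervisor with Gaussian noise injected into its
    executed actions, so the visited states resemble those the learner
    will reach, while labeling each state with the clean action."""
    xs, us, x = [], [], x0
    for _ in range(horizon):
        u_star = supervisor(x)               # training label: clean action
        xs.append(x); us.append(u_star)
        x = env_step(x, u_star + rng.normal(scale=sigma, size=u_star.shape))
    return np.array(xs), np.array(us)

def fit_noise_level(policy, xs, us):
    """Set the injected noise to approximate the learner's own error on
    the collected states (maximum-likelihood isotropic fit)."""
    err = policy(xs) - us
    return float(np.sqrt(np.mean(err ** 2)))
```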
SQIL: Imitation Learning via Reinforcement Learning with Sparse Rewards
This work proposes a simple alternative that still uses RL, but does not require learning a reward function, and can be implemented with a handful of minor modifications to any standard Q-learning or off-policy actor-critic algorithm, called soft Q imitation learning (SQIL).
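The modification is mostly a matter of how training batches are built: demonstration transitions carry a constant reward of +1 and the agent's own transitions a reward of 0. A sketch of that relabeling (the buffer layout is an assumption):

```python
import random

def sqil_batch(demo_buffer, agent_buffer, batch_size):
    """Sample half the batch from demonstrations with reward +1 and
    half from the agent's experience with reward 0; everything
    downstream is ordinary off-policy Q-learning or actor-critic."""
    half = batch_size // 2
    demos = [(s, a, 1.0, s2, done)
             for (s, a, s2, done) in random.sample(demo_buffer, half)]
    agent = [(s, a, 0.0, s2, done)
             for (s, a, s2, done) in random.sample(agent_buffer, half)]
    return demos + agent
```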
Generating Legible Motion
A functional gradient optimization technique for autonomously generating legible motion that optimizes a legibility metric inspired by the psychology of action interpretation in humans, resulting in motion trajectories that purposefully deviate from what an observer would expect in order to better convey intent.
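Combining the goal-inference score above with gradient ascent over waypoints gives the flavor of the method. A numeric-gradient sketch only: the paper derives the functional gradient analytically, and `legibility` is a placeholder for a score built from the goal posterior:

```python
import numpy as np

def ascend_legibility(xi, legibility, step=0.05, eps=1e-4, iters=50):
    """Finite-difference gradient ascent on a legibility score over the
    interior waypoints; start and goal stay pinned."""
    for _ in range(iters):
        g = np.zeros_like(xi)
        for i in range(1, len(xi) - 1):
            for j in range(xi.shape[1]):
                d = np.zeros_like(xi)
                d[i, j] = eps
                g[i, j] = (legibility(xi + d) - legibility(xi - d)) / (2 * eps)
        xi = xi + step * g
    return xi
```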