Shared Autonomy with Learned Latent Actions

Hong Jun Jeon, Dylan P. Losey, and Dorsa Sadigh
Assistive robots enable people with disabilities to conduct everyday tasks on their own. However, these tasks can be complex, containing both coarse reaching motions and fine-grained manipulation. For example, when eating, one must not only move to the correct food item, but also precisely manipulate the food in different ways (e.g., cutting, stabbing, scooping). Shared autonomy methods make robot teleoperation safer and more precise by arbitrating user inputs with robot…


Learning Visually Guided Latent Actions for Assistive Teleoperation

This work develops assistive robots that condition their latent embeddings on visual inputs and indicates that structured visual representations improve few-shot performance and are subjectively preferred by users.

Learning to Share Autonomy from Repeated Human-Robot Interaction

The insight is that operators repeat important tasks on a daily basis, and the robot can take advantage of these repeated interactions to learn assistive policies; results indicate that learning shared autonomy across repeated interactions (SARI) matches existing approaches for known goals and outperforms the baselines on tasks that were never specified beforehand.

Learning to Share Autonomy Across Repeated Interaction

This paper proposes a learning approach to shared autonomy that takes advantage of repeated interactions and introduces an algorithm that exploits these repeated interactions to recognize the human’s task, replicate similar demonstrations, and return control when unsure.

Learning latent actions to control assistive robots

This work finds that intuitive, user-friendly control of assistive robots can be achieved by embedding the robot’s high-dimensional actions into low-dimensional and human-controllable latent actions.
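The embedding idea can be sketched with a toy example. Below, a linear embedding (PCA via SVD) stands in for the learned autoencoder, and all names and dimensions are illustrative: 7-D robot actions that happen to lie on a 2-D manifold are compressed into a 2-D latent space a joystick could drive.

```python
import numpy as np

# Illustrative sketch, not the paper's model: embed high-dimensional robot
# actions (e.g., 7-DoF joint velocities) into a 2-D human-controllable
# latent space. A linear PCA embedding stands in for a learned autoencoder.

rng = np.random.default_rng(0)
# Synthetic "demonstrations": 7-D actions lying on a 2-D manifold.
demos = rng.standard_normal((100, 2)) @ rng.standard_normal((2, 7))

mean = demos.mean(axis=0)
_, _, vt = np.linalg.svd(demos - mean, full_matrices=False)
decoder = vt[:2]                       # latent (2-D) -> action (7-D) basis

def decode(z):
    """Map a low-dimensional joystick input z to a full robot action."""
    return mean + z @ decoder

def encode(action):
    """Project a high-dimensional action onto the latent space."""
    return (action - mean) @ decoder.T

# Round-trip: demonstrated actions are reconstructed from 2 numbers.
z = encode(demos[0])
assert np.allclose(decode(z), demos[0], atol=1e-8)
```

In the papers above, the decoder is a conditioned neural network rather than a linear map, but the interface is the same: the human supplies a low-dimensional `z`, and the robot reconstructs a full action.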

Shared Autonomy for Robotic Manipulation with Language Corrections

This work presents a method for incorporating language corrections, built on the insight that an initial instruction and subsequent corrections differ mainly in the amount of grounded context needed, and focuses on manipulation domains where the sample complexity of existing work is prohibitive.

Assistive Teaching of Motor Control Tasks to Humans

An AI-assisted teaching algorithm that leverages skill discovery methods from reinforcement learning to break down any motor control task into teachable skills, construct novel drill sequences, and individualize curricula to students with different capabilities is proposed.

Assistive Tele-op: Leveraging Transformers to Collect Robotic Task Demonstrations

Assistive Tele-op is presented, a virtual reality system for collecting robot task demonstrations that displays an autonomous trajectory forecast to communicate the robot’s intent and is powered by transformers, which can provide a window of potential states and actions far into the future – with almost no added computation time.

Learning Human Objectives from Sequences of Physical Corrections

An auxiliary reward is introduced that captures the human's trade-off between making corrections which improve the robot’s immediate reward and long-term performance and indicates that users are best able to convey their objective when the robot reasons over their sequence of corrections.

"No, to the Right" - Online Language Corrections for Robotic Manipulation via Shared Autonomy

This work presents a framework for incorporating and adapting to natural language corrections – “to the right”, or “no, towards the book” – as the robot executes, and shows that this corrections-aware approach obtains higher task completion rates, and is subjectively preferred by users because of its reliability, precision, and ease of use.

Human-AI Shared Control via Policy Dissection

The experiments show that the human-AI shared control system achieved by Policy Dissection in the driving task can substantially improve performance and safety in unseen traffic scenes, and suggest a promising direction for implementing human-AI shared autonomy by interpreting the learned representations of autonomous agents.



Controlling Assistive Robots with Learned Latent Actions

A teleoperation algorithm for assistive robots that learns latent actions from task demonstrations is designed; the controllability, consistency, and scaling properties that user-friendly latent actions should have are formulated; and how different low-dimensional embeddings capture these properties is evaluated.

Shared Autonomy via Deep Reinforcement Learning

This paper uses human-in-the-loop reinforcement learning with neural network function approximation to learn an end-to-end mapping from environmental observation and user input to agent action, with task reward as the only form of supervision.
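The end-to-end mapping described above can be illustrated with a toy sketch, assuming a tabular Q-function and a 1-D grid world in place of the paper's deep network and environments; the state space, user-input encoding, and reward are all hypothetical stand-ins.

```python
import numpy as np

# Illustrative sketch, not the paper's system: learn a mapping from
# (observation, user input) to action with task reward as the only
# supervision. A tabular Q-function over a 1-D grid world stands in
# for deep RL with neural network function approximation.

rng = np.random.default_rng(0)
n_pos, n_inputs, n_actions = 5, 2, 2   # positions; user inputs; left/right
Q = np.zeros((n_pos, n_inputs, n_actions))

for _ in range(5000):
    pos = int(rng.integers(n_pos))
    user = int(rng.integers(n_inputs))  # noisy hint: 0 -> cell 0, 1 -> cell 4
    if rng.random() < 0.2:              # epsilon-greedy exploration
        a = int(rng.integers(n_actions))
    else:
        a = int(np.argmax(Q[pos, user]))
    nxt = int(np.clip(pos + (1 if a == 1 else -1), 0, n_pos - 1))
    goal = 0 if user == 0 else n_pos - 1
    r = 1.0 if nxt == goal else 0.0     # task reward: the only supervision
    Q[pos, user, a] += 0.5 * (r + 0.9 * Q[nxt, user].max() - Q[pos, user, a])

# The learned policy moves toward whichever goal the user input indicates,
# without ever being told the goal explicitly.
```

The agent never observes the goal directly; it infers from reward alone how to interpret the user's input, which is the crux of the end-to-end formulation.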

Probabilistic Human Intent Recognition for Shared Autonomy in Assistive Robotics

The study results show that the approach outperforms existing solutions for intent inference in assistive teleoperation in many scenarios and otherwise performs comparably, and that the underlying intent-inference approach directly affects shared autonomy performance, as do control-interface limitations.

Eye-Hand Behavior in Human-Robot Shared Manipulation

This work conducts a data collection study that uses an eye tracker to record eye gaze during a human-robot shared manipulation activity, both with and without shared autonomy assistance, and lays a foundation for a model of natural human eye gaze in human-robot shared manipulation.

A policy-blending formalism for shared control

This work proposes an intuitive formalism that captures assistance as policy blending, illustrates how some of the existing techniques for shared control instantiate it, and provides a principled analysis of its main components: prediction of user intent and its arbitration with the user input.
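The policy-blending formalism admits a compact sketch. The linear arbitration rule and the nearest-goal intent predictor below are illustrative simplifications, not the paper's exact components; `alpha` plays the role of the arbitration confidence.

```python
import numpy as np

# Illustrative sketch of policy blending: arbitrate between the user's
# command and an autonomous action toward the predicted goal. The
# nearest-goal predictor and fixed alpha are simplified placeholders.

def predict_intent(state, goals):
    """Pick the goal closest to the current state (a stand-in for a
    probabilistic intent predictor)."""
    dists = [np.linalg.norm(state - g) for g in goals]
    return goals[int(np.argmin(dists))]

def blend(user_input, state, goals, alpha=0.5):
    """Arbitration: u = (1 - alpha) * user_input + alpha * autonomous."""
    goal = predict_intent(state, goals)
    autonomous = goal - state           # simple move-toward-goal policy
    return (1 - alpha) * user_input + alpha * autonomous

state = np.zeros(2)
goals = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
u = blend(np.array([0.5, 0.0]), state, goals, alpha=0.5)
# u = 0.5 * [0.5, 0] + 0.5 * [1, 0] = [0.75, 0]
```

With `alpha = 0` the user is in full control; with `alpha = 1` the robot acts fully autonomously toward its predicted goal, which is exactly the spectrum the formalism analyzes.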

Anticipatory robot control for efficient human-robot collaboration

An anticipatory control method is presented that enables robots to proactively perform task actions based on anticipated actions of their human partners, and is implemented into a robot system that monitored its user's gaze, predicted his or her task intent based on observed gaze patterns, and performed anticipatory task actions according to its predictions.

Learning Latent Plans from Play

Play-LMP is introduced, a method designed to handle variability in the learning-from-play (LfP) setting by organizing it in an embedding space, with the finding that play-supervised models, unlike their expert-trained counterparts, are more robust to perturbations and exhibit retry-until-success behavior.

Planning for cars that coordinate with people: leveraging effects on human actions for planning and active information gathering over human internal state

This work introduces a formulation of interaction with human-driven vehicles as an underactuated dynamical system, in which the robot’s actions have consequences not only on the state of the autonomous car, but also on the human’s actions and thus the state of the human-driven car.

Shared autonomy via hindsight optimization for teleoperation and teaming

This work models shared autonomy as a partially observable Markov decision process (POMDP), providing assistance that minimizes the expected cost-to-go under an unknown goal, and applies to both shared-control teleoperation and human–robot teaming.
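Minimizing expected cost-to-go under goal uncertainty can be sketched as follows; the belief values, distance-based cost, and discrete candidate-action set are illustrative stand-ins for the paper's POMDP and hindsight-optimization machinery.

```python
import numpy as np

# Illustrative sketch: the robot maintains a belief over candidate goals
# and picks the action minimizing the belief-weighted cost-to-go.
# Euclidean distance-to-goal stands in for the true cost function.

goals = [np.array([1.0, 0.0]), np.array([-1.0, 0.0])]
belief = np.array([0.8, 0.2])           # updated elsewhere from user inputs

def expected_cost_to_go(state):
    """Belief-weighted remaining cost, summed over candidate goals."""
    return sum(b * np.linalg.norm(state - g) for b, g in zip(belief, goals))

def assist(state, candidate_actions):
    """Choose the action whose successor state has the lowest expected
    cost-to-go under the current belief."""
    costs = [expected_cost_to_go(state + a) for a in candidate_actions]
    return candidate_actions[int(np.argmin(costs))]

state = np.zeros(2)
actions = [np.array([0.1, 0.0]), np.array([-0.1, 0.0]), np.array([0.0, 0.1])]
best = assist(state, actions)           # moves toward the likelier goal
```

Because the expectation is taken over the goal belief rather than a single estimated goal, the assistance hedges: when the belief is uncertain, actions useful for several goals score better than actions committed to one.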

State Representation Learning in Robotics: Using Prior Knowledge about Physical Interaction

It is shown that the method extracts task-relevant state representations from high-dimensional observations, even in the presence of task-irrelevant distractions, and that the state representations learned by the method greatly improve generalization in reinforcement learning.