Corpus ID: 236956615

What Matters in Learning from Offline Human Demonstrations for Robot Manipulation

@article{Mandlekar2021WhatMI,
  title={What Matters in Learning from Offline Human Demonstrations for Robot Manipulation},
  author={Ajay Mandlekar and Danfei Xu and Josiah Wong and Soroush Nasiriany and Chen Wang and Rohun Kulkarni and Li Fei-Fei and Silvio Savarese and Yuke Zhu and Roberto Mart{\'i}n-Mart{\'i}n},
  journal={ArXiv},
  year={2021},
  volume={abs/2108.03298}
}
Imitating human demonstrations is a promising approach to endow robots with various manipulation capabilities. While recent advances have been made in imitation learning and batch (offline) reinforcement learning, a lack of open-source human datasets and reproducible learning methods makes assessing the state of the field difficult. In this paper, we conduct an extensive study of six offline learning algorithms for robot manipulation on five simulated and three real-world multi-stage…
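As background for the offline imitation setting studied in the paper, the sketch below shows plain behavioral cloning on a demonstration dataset: supervised regression of actions onto observations collected from human demonstrations. The tensor shapes, network sizes, and hyperparameters are illustrative assumptions, not the paper's actual implementation.

# Minimal behavioral cloning sketch on offline (observation, action) pairs.
# Shapes and hyperparameters are hypothetical, not taken from the paper.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

obs = torch.randn(1000, 19)      # e.g. proprioception + object poses per step (assumed)
actions = torch.randn(1000, 7)   # e.g. 6-DoF end-effector delta + gripper command (assumed)
loader = DataLoader(TensorDataset(obs, actions), batch_size=256, shuffle=True)

policy = nn.Sequential(
    nn.Linear(19, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 7),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

for epoch in range(10):
    for o, a in loader:
        loss = nn.functional.mse_loss(policy(o), a)  # imitate demonstrated actions
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()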

References

SHOWING 1-10 OF 89 REFERENCES
ROBOTURK: A Crowdsourcing Platform for Robotic Skill Learning through Imitation
It is shown that the data obtained through RoboTurk enables policy learning on multi-step manipulation tasks with sparse rewards and that using larger quantities of demonstrations during policy learning provides benefits in terms of both learning consistency and final performance.
IRIS: Implicit Reinforcement without Interaction at Scale for Learning Control from Offline Robot Manipulation Data
This paper proposes Implicit Reinforcement without Interaction at Scale (IRIS), a novel framework for learning from large-scale demonstration datasets that factorizes the control problem into a goal-conditioned low-level controller that imitates short demonstration sequences and a high-level goal selection mechanism that sets goals for the low-level controller and selectively combines parts of suboptimal solutions, leading to more successful task completions.
Human-in-the-Loop Imitation Learning using Remote Teleoperation
A data collection system tailored to 6-DoF manipulation settings that enables remote human operators to monitor and intervene on trained policies; learning from these human interventions outperforms multiple state-of-the-art baselines on a challenging robot threading task and a coffee-making task.
A Framework for Efficient Robotic Manipulation
It is shown that, given only 10 demonstrations, a single robotic arm can learn sparse-reward manipulation policies from pixels, such as reaching, picking, moving, pulling a large object, flipping a switch, and opening a drawer in just 15-50 minutes of real-world training time.
Visual Imitation Made Easy
This work presents an alternate interface for imitation that simplifies the data collection process while allowing for easy transfer to robots and uses commercially available reacher-grabber assistive tools both as a data collection device and as the robot's end-effector.
Multiple Interactions Made Easy (MIME): Large Scale Demonstrations Data for Imitation
This paper presents the largest available robotic-demonstration dataset (MIME), which contains 8260 human-robot demonstrations over 20 different robotic tasks (this https URL), and proposes to use this dataset for the task of mapping third-person video features to robot trajectories.
One-Shot Visual Imitation Learning via Meta-Learning
A meta-imitation learning method that enables a robot to learn how to learn more efficiently, allowing it to acquire new skills from just a single demonstration, and requires data from significantly fewer prior tasks for effective learning of new skills.
Learning Multi-Arm Manipulation Through Collaborative Teleoperation
Multi-Arm RoboTurk (MART), a multi-user data collection platform that allows multiple remote users to simultaneously teleoperate a set of robotic arms and collect demonstrations for multi-arm tasks, is presented, and a base-residual policy framework is proposed that allows trained policies to better adapt to the mixed coordination setting common in multi-arm manipulation.
RLBench: The Robot Learning Benchmark & Learning Environment
This large-scale benchmark aims to accelerate progress in a number of vision-guided manipulation research areas, including reinforcement learning, imitation learning, multi-task learning, geometric computer vision, and in particular, few-shot learning.
Benchmark for Skill Learning from Demonstration: Impact of User Experience, Task Complexity, and Start Configuration on Performance
This work evaluates four approaches based on properties an end user may desire for real-world tasks, and details how the complexity of the task, the expertise of the human demonstrator, and the starting configuration of the robot affect task performance.