Learning Multi-Arm Manipulation Through Collaborative Teleoperation

@article{Tung2021LearningMM,
  title={Learning Multi-Arm Manipulation Through Collaborative Teleoperation},
  author={Albert Tung and J. Wong and Ajay Mandlekar and Roberto Mart{\'i}n-Mart{\'i}n and Yuke Zhu and Li Fei-Fei and Silvio Savarese},
  journal={2021 IEEE International Conference on Robotics and Automation (ICRA)},
  year={2021},
  pages={9212--9219}
}
  • Published 12 December 2020
Imitation Learning (IL) is a powerful paradigm to teach robots to perform manipulation tasks by allowing them to learn from human demonstrations collected via teleoperation, but has mostly been limited to single-arm manipulation. However, many real-world tasks require multiple arms, such as lifting a heavy object or assembling a desk. Unfortunately, applying IL to multi-arm manipulation tasks has been challenging: asking a human to control more than one robotic arm can impose significant…
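As a minimal sketch of the imitation-learning setup the abstract describes (not the paper's actual method): behavioral cloning treats teleoperated demonstrations as a supervised dataset of observation-action pairs, and in the multi-arm case the action vector simply stacks every arm's commands. The synthetic data, the linear policy, and names like `W_expert` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

obs_dim, act_dim_per_arm, n_arms, n_demos = 10, 7, 2, 500
act_dim = act_dim_per_arm * n_arms  # both arms' actions stacked into one vector

# Synthetic "expert": a fixed linear map standing in for teleoperated demos.
W_expert = rng.normal(size=(obs_dim, act_dim))
obs = rng.normal(size=(n_demos, obs_dim))
acts = obs @ W_expert + 0.01 * rng.normal(size=(n_demos, act_dim))

# Behavioral cloning reduces to supervised regression on (obs, action) pairs;
# here the least-squares fit plays the role of policy training.
W_policy, *_ = np.linalg.lstsq(obs, acts, rcond=None)

# The cloned policy should reproduce the demonstrated actions closely.
mse = float(np.mean((obs @ W_policy - acts) ** 2))
print(f"imitation MSE: {mse:.4f}")
```

The same supervised-learning structure carries over when the linear map is replaced by a neural network and the demonstrations come from real teleoperation.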

Citations

Error-Aware Imitation Learning from Teleoperation Data for Mobile Manipulation
TLDR: This work proposes MOBILE MANIPULATION ROBOTURK (MOMART), a novel teleoperation framework allowing simultaneous navigation and manipulation of mobile manipulators, and proposes a learned error detection system to address covariate shift by detecting when an agent is in a potential failure state.
What Matters in Learning from Offline Human Demonstrations for Robot Manipulation
TLDR: This study analyzes the most critical challenges when learning from offline human data for manipulation and highlights opportunities for learning from human datasets, such as the ability to learn proficient policies on challenging, multi-stage tasks beyond the scope of current reinforcement learning methods.
What Matters in Learning from Offline Human Demonstrations for Robot Manipulation
Imitating human demonstrations is a promising approach to endow robots with various manipulation capabilities. While recent advances have been made in imitation learning and batch (offline)…
Disentangled Attention as Intrinsic Regularization for Bimanual Multi-Object Manipulation
TLDR: Experimental results show that the proposed intrinsic regularization successfully avoids domination and reduces conflicts for the policies, which leads to significantly more effective cooperative strategies than all the baselines.
Bottom-Up Skill Discovery from Unsegmented Demonstrations for Long-Horizon Robot Manipulation
TLDR: This work presents a bottom-up approach to learning a library of reusable skills from unsegmented demonstrations and uses these skills to synthesize prolonged robot behaviors to solve real-world long-horizon manipulation tasks.
DAIR: Disentangled Attention Intrinsic Regularization for Safe and Efficient Bimanual Manipulation
TLDR: Experimental results show that the proposed intrinsic regularization successfully avoids domination and reduces conflicts for the policies, which leads to significantly more efficient and safer cooperative strategies than all the baselines.
Co-GAIL: Learning Diverse Strategies for Human-Robot Collaboration
TLDR: Across a 2D strategy game, a human-robot handover task, and a multi-step collaborative manipulation task, the method outperforms the alternatives in both simulated evaluations and when executing the tasks with a real human operator in-the-loop.
Bottom-up Discovery of Reusable Sensorimotor Skills from Unstructured Demonstrations
  • 2021
We present a bottom-up approach to discovering reusable sensorimotor skills from unstructured demonstrations. Our approach uses unsupervised agglomerative clustering to construct a hierarchical…
Single RGB-D Camera Teleoperation for General Robotic Manipulation
TLDR: It is hypothesized that lowering the barrier of entry to teleoperation will allow for wider deployment of supervised autonomy systems, which will in turn generate realistic datasets that unlock the potential of machine learning for robotic manipulation.

References

ROBOTURK: A Crowdsourcing Platform for Robotic Skill Learning through Imitation
TLDR: It is shown that the data obtained through RoboTurk enables policy learning on multi-step manipulation tasks with sparse rewards and that using larger quantities of demonstrations during policy learning provides benefits in terms of both learning consistency and final performance.
Shared-Autonomy Control for Intuitive Bimanual Tele-Manipulation
TLDR: The user can autonomously choose between the two robots, and if the new one is selected the robots move in a coordinated way, in which desired positions are extrapolated from the movements and gestures of just one user's arm.
Learning bimanual end-effector poses from demonstrations using task-parameterized dynamical systems
TLDR: This paper presents a framework that allows robots to learn the full poses of their end-effectors in a task-parameterized manner and validates the approach with an experiment in which two 7-DoF WAM robots learn to perform a bimanual sweeping task.
Towards learning hierarchical skills for multi-phase manipulation tasks
TLDR: This paper presents an approach for exploiting the phase structure of tasks in order to learn manipulation skills more efficiently; the approach was successfully evaluated on a real robot performing a bimanual grasping task.
Efficient Bimanual Manipulation Using Learned Task Schemas
TLDR: It is shown that explicitly modeling the schema's state-independence can yield significant improvements in sample efficiency for model-free reinforcement learning algorithms and can be transferred to solve related tasks, by simply re-learning the parameterizations with which the skills are invoked.
Multi-robot inverse reinforcement learning under occlusion with interactions
TLDR: This work extends single-agent inverse reinforcement learning (IRL) to a multi-robot setting and partial observability, models the interaction between the mobile robots as equilibrium behavior, and derives a Markov decision process-based policy for each robot.
Programming by demonstration: dual-arm manipulation tasks for humanoid robots
  • R. Zöllner, T. Asfour, R. Dillmann
  • 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE Cat. No.04CH37566)
  • 2004
TLDR: A classification for dual-arm manipulations is introduced, enabling a segmentation of tasks into adequate subtasks, and it is shown how the generated programs are mapped on and executed by a humanoid robot.
Exploiting Symmetries in Reinforcement Learning of Bimanual Robotic Tasks
TLDR: A symmetrization method for ProMPs is presented and used to represent two movements, employing a single ProMP for the first arm and a symmetry surface that maps that ProMP to the second arm, which is adopted in reinforcement learning of bimanual tasks using the relative entropy policy search algorithm.
Deep Imitation Learning for Complex Manipulation Tasks from Virtual Reality Teleoperation
TLDR: It is described how consumer-grade Virtual Reality headsets and hand tracking hardware can be used to naturally teleoperate robots to perform complex tasks, and how imitation learning can learn deep neural network policies that acquire the demonstrated skills.
IRIS: Implicit Reinforcement without Interaction at Scale for Learning Control from Offline Robot Manipulation Data
TLDR: This paper proposes Implicit Reinforcement without Interaction at Scale (IRIS), a novel framework for learning from large-scale demonstration datasets that factorizes the control problem into a goal-conditioned low-level controller that imitates short demonstration sequences and a high-level goal selection mechanism that sets goals for the low-level controller and selectively combines parts of suboptimal solutions, leading to more successful task completions.