Know Thyself: Transferable Visuomotor Control Through Robot-Awareness
@article{Hu2021KnowTT,
  title   = {Know Thyself: Transferable Visuomotor Control Through Robot-Awareness},
  author  = {E. Hu and Kun-Yen Huang and Oleh Rybkin and Dinesh Jayaraman},
  journal = {ArXiv},
  year    = {2021},
  volume  = {abs/2107.09047}
}
Training visuomotor robot controllers from scratch on a new robot typically requires generating large amounts of robot-specific data. Could we leverage data previously collected on another robot to reduce or even completely remove this need for robot-specific data? We propose a “robot-aware” solution paradigm that exploits readily available robot “self-knowledge” such as proprioception, kinematics, and camera calibration to achieve this. First, we learn modular dynamics models that pair a…
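The abstract is truncated above, so the exact pairing of modules is not spelled out here. As one illustrative reading of the "robot-aware" modular-dynamics idea, the sketch below pairs a robot-specific dynamics module (the part that would draw on proprioception, kinematics, and camera calibration) with a robot-agnostic world-dynamics module conditioned on the predicted robot state. All class names, dimensions, and interfaces are hypothetical assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: module names, dimensions, and interfaces are
# assumptions made for this example, not the paper's code.
import torch
import torch.nn as nn


class RobotModule(nn.Module):
    """Robot-specific dynamics: predicts the next robot state from the current
    state and action (the part that could exploit proprioception/kinematics)."""

    def __init__(self, state_dim=7, action_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
            nn.Linear(64, state_dim),
        )

    def forward(self, robot_state, action):
        return self.net(torch.cat([robot_state, action], dim=-1))


class WorldModule(nn.Module):
    """Robot-agnostic world dynamics: predicts how the non-robot scene changes,
    conditioned only on the robot's predicted state, so it can be shared."""

    def __init__(self, scene_dim=32, state_dim=7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(scene_dim + state_dim, 128), nn.ReLU(),
            nn.Linear(128, scene_dim),
        )

    def forward(self, scene_feat, next_robot_state):
        return self.net(torch.cat([scene_feat, next_robot_state], dim=-1))


class ModularDynamics(nn.Module):
    """Pairs the two modules into one forward model."""

    def __init__(self):
        super().__init__()
        self.robot = RobotModule()
        self.world = WorldModule()

    def forward(self, scene_feat, robot_state, action):
        next_robot_state = self.robot(robot_state, action)
        next_scene_feat = self.world(scene_feat, next_robot_state)
        return next_scene_feat, next_robot_state
```

Under this reading, only the robot-specific module would need to be replaced or re-fit when moving to a new robot, which is what would make the shared world module transferable across platforms.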
References
RoboNet: Large-Scale Multi-Robot Learning
- Computer Science · CoRL
- 2019
This paper proposes RoboNet, an open database for sharing robotic experience, which provides an initial pool of 15 million video frames, from 7 different robot platforms, and studies how it can be used to learn generalizable models for vision-based robotic manipulation.
Deep visual foresight for planning robot motion
- Computer Science · 2017 IEEE International Conference on Robotics and Automation (ICRA)
- 2017
This work develops a method for combining deep action-conditioned video prediction models with model-predictive control that uses entirely unlabeled training data and enables a real robot to perform nonprehensile manipulation — pushing objects — and can handle novel objects not seen during training.
Hardware Conditioned Policies for Multi-Robot Transfer Learning
- Computer Science · NeurIPS
- 2018
This work uses the kinematic structure directly as the hardware encoding and shows great zero-shot transfer to completely novel robots not seen during training and demonstrates that fine-tuning the policy network is significantly more sample-efficient than training a model from scratch.
Learning Robotic Manipulation through Visual Planning and Acting
- Computer Science · Robotics: Science and Systems
- 2019
This work learns to imagine goal-directed object manipulation directly from raw image data of self-supervised interaction of the robot with the object, and shows that separating the problem into visual planning and visual tracking control is more efficient and more interpretable than alternative data-driven approaches.
Learning modular neural network policies for multi-task and multi-robot transfer
- Computer Science · 2017 IEEE International Conference on Robotics and Automation (ICRA)
- 2017
The effectiveness of the transfer method for enabling zero-shot generalization with a variety of robots and tasks in simulation for both visual and non-visual tasks is demonstrated.
Zero-Shot Visual Imitation
- Computer Science · 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
- 2018
Imitating expert demonstrations is a powerful mechanism for learning to perform tasks from raw sensory observations: providing multiple demonstrations of a task at training time generates data in the form of observation-action pairs from the agent's point of view.
Visual Foresight: Model-Based Deep Reinforcement Learning for Vision-Based Robotic Control
- Computer Science · ArXiv
- 2018
It is demonstrated that visual MPC can generalize to never-before-seen objects, both rigid and deformable, and solve a range of user-defined object manipulation tasks using the same model.
Unsupervised Visuomotor Control through Distributional Planning Networks
- Computer Science · Robotics: Science and Systems
- 2019
This work aims to learn an unsupervised embedding space under which the robot can measure progress towards a goal for itself, and enables learning effective and control-centric representations that lead to more autonomous reinforcement learning algorithms.
One-Shot Visual Imitation Learning via Meta-Learning
- Computer Science · CoRL
- 2017
A meta-imitation learning method that enables a robot to learn how to learn more efficiently, allowing it to acquire new skills from just a single demonstration, and requires data from significantly fewer prior tasks for effective learning of new skills.
Universal Planning Networks
- Computer Science · ArXiv
- 2018
This work finds that the representations learned are not only effective for goal-directed visual imitation via gradient-based trajectory optimization, but can also provide a metric for specifying goals using images.