Corpus ID: 204851795

RoboNet: Large-Scale Multi-Robot Learning

@article{Dasari2019RoboNetLM,
  title={RoboNet: Large-Scale Multi-Robot Learning},
  author={Sudeep Dasari and Frederik Ebert and Stephen Tian and Suraj Nair and Bernadette Bucher and Karl Schmeckpeper and Siddharth Singh and Sergey Levine and Chelsea Finn},
  journal={ArXiv},
  year={2019},
  volume={abs/1910.11215}
}
Robot learning has emerged as a promising tool for taming the complexity and diversity of the real world. Methods based on high-capacity models, such as deep networks, hold the promise of providing effective generalization to a wide range of open-world environments. However, these same methods typically require large amounts of diverse training data to generalize effectively. In contrast, most robotic learning experiments are small-scale, single-domain, and single-robot. This leads to a frequent tension in robotic learning: how can we learn generalizable robotic controllers without having to collect impractically large amounts of data for each separate experiment? In this paper, we propose RoboNet, an open database for sharing robotic experience, which provides an initial pilot dataset of 15 million video frames from 7 different robot platforms, and study how it can be used to learn generalizable models for vision-based robotic manipulation.
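RoboNet distributes its trajectories as per-file recordings of synchronized video frames, actions, and robot state. As a rough sketch of how such data might be consumed, the snippet below iterates over trajectory files with h5py; the local path and the "images"/"actions"/"states" keys are illustrative assumptions, not the dataset's documented schema.

```python
# Minimal sketch of iterating over RoboNet-style trajectory files.
# Assumption: one HDF5 file per trajectory with "images", "actions",
# and "states" datasets -- illustrative keys, not the official schema.
import glob

import h5py
import numpy as np


def load_trajectory(path):
    """Return (frames, actions, states) arrays for one trajectory."""
    with h5py.File(path, "r") as f:
        frames = np.asarray(f["images"])    # (T, H, W, 3) uint8 frames
        actions = np.asarray(f["actions"])  # (T-1, action_dim) commands
        states = np.asarray(f["states"])    # (T, state_dim) proprioception
    return frames, actions, states


for path in glob.glob("robonet/hdf5/*.hdf5"):  # hypothetical local layout
    frames, actions, states = load_trajectory(path)
    print(path, frames.shape, actions.shape, states.shape)
```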
Citations

Learning Generalizable Robotic Reward Functions from "In-The-Wild" Human Videos
TLDR
This work proposes a simple approach, Domain-agnostic Video Discriminator (DVD), that learns multitask reward functions by training a discriminator to classify whether two videos are performing the same task, and can generalize by virtue of learning from a small amount of robot data with a broad dataset of human videos.
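To make the discriminator idea concrete, here is a minimal sketch of a same-task video classifier: a small frame CNN with temporal average pooling embeds each video, and an MLP head on the concatenated embeddings outputs a same-task logit trained with binary cross-entropy. The architecture, shapes, and names are illustrative assumptions, not the paper's exact model.

```python
# Hedged sketch of a same-task video discriminator (DVD-style); the
# frame CNN + temporal mean pooling + MLP head is an illustrative
# stand-in, not the paper's exact architecture.
import torch
import torch.nn as nn


class VideoEncoder(nn.Module):
    def __init__(self, emb_dim=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, emb_dim),
        )

    def forward(self, video):                  # video: (B, T, 3, H, W)
        b, t = video.shape[:2]
        feats = self.cnn(video.flatten(0, 1))  # encode every frame
        return feats.view(b, t, -1).mean(1)    # average over time


class SameTaskDiscriminator(nn.Module):
    def __init__(self, emb_dim=128):
        super().__init__()
        self.encoder = VideoEncoder(emb_dim)
        self.head = nn.Sequential(
            nn.Linear(2 * emb_dim, 128), nn.ReLU(), nn.Linear(128, 1),
        )

    def forward(self, vid_a, vid_b):
        z = torch.cat([self.encoder(vid_a), self.encoder(vid_b)], dim=-1)
        return self.head(z).squeeze(-1)        # logit: same task?


disc = SameTaskDiscriminator()
vid_a = torch.randn(4, 8, 3, 64, 64)  # e.g., human videos
vid_b = torch.randn(4, 8, 3, 64, 64)  # e.g., robot videos
same = torch.randint(0, 2, (4,)).float()
loss = nn.functional.binary_cross_entropy_with_logits(disc(vid_a, vid_b), same)
```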
Multi-Robot Deep Reinforcement Learning for Mobile Navigation
TLDR
This work proposes a deep reinforcement learning algorithm with hierarchically integrated models (HInt) that allows the algorithm to train on datasets gathered by a variety of different platforms, while respecting the physical capabilities of the deployment robot at test time.
Multi-Robot Deep Reinforcement Learning via Hierarchically Integrated Models
Deep reinforcement learning algorithms require large and diverse datasets in order to learn successful perception-based control policies. However, gathering such datasets with a single robot can be…
How to train your robot with deep reinforcement learning: lessons we have learned
TLDR
The goal is to provide a resource both for roboticists and machine learning researchers who are interested in furthering the progress of deep RL in the real world by presenting a number of case studies involving robotic deep RL.
Bayesian Meta-Learning for Few-Shot Policy Adaptation Across Robotic Platforms
TLDR
The proposed method can successfully adapt a trained policy to different robotic platforms with novel physical parameters, and the superiority of the meta-learning algorithm compared to state-of-the-art methods is demonstrated on the introduced few-shot policy adaptation problem.
BC-Z: Zero-Shot Task Generalization with Robotic Imitation Learning
  • 2021
In this paper, we study the problem of enabling a vision-based robotic manipulation system to generalize to novel tasks, a long-standing challenge in robot learning. We approach the challenge…
RLBench: The Robot Learning Benchmark & Learning Environment
TLDR
This large-scale benchmark aims to accelerate progress in a number of vision-guided manipulation research areas, including: reinforcement learning, imitation learning, multi-task learning, geometric computer vision, and in particular, few-shot learning.
DeepClaw: A Robotic Hardware Benchmarking Platform for Learning Object Manipulation
TLDR
This work proposes a hierarchical pipeline of software integration, including localization, recognition, grasp planning, and motion planning, to streamline learning-based robot control, data collection, and experiment validation towards shareability and reproducibility.
Efficient Adaptation for End-to-End Vision-Based Robotic Manipulation
TLDR
This paper demonstrates how to adapt vision-based robotic manipulation policies to new variations by fine-tuning via off-policy reinforcement learning, including changes in background, object shape and appearance, lighting conditions, and robot morphology.
ManiSkill: Learning-from-Demonstrations Benchmark for Generalizable Manipulation Skills
TLDR
This work focuses on object-level generalization and proposes SAPIEN Manipulation Skill Benchmark (abbreviated as ManiSkill), a large-scale learning-from-demonstrations benchmark for articulated object manipulation with visual input (point cloud and image).

References

SHOWING 1-10 OF 64 REFERENCES
Task-Embedded Control Networks for Few-Shot Imitation Learning
TLDR
Task-Embedded Control Networks are introduced, which employ ideas from metric learning in order to create a task embedding that can be used by a robot to learn new tasks from one or more demonstrations, and which surpass the performance of a state-of-the-art method when using only visual information from each demonstration.
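To make the metric-learning idea concrete, here is a minimal sketch of a task-embedding hinge loss: demonstration embeddings of the same task are pushed to higher cosine similarity than embeddings of other tasks by a margin, and a control network would then be conditioned on the resulting embedding. The shapes, margin, and batch layout are illustrative assumptions, not the paper's formulation.

```python
# Hedged sketch of a metric-learning task embedding: same-task pairs
# must beat the hardest other-task pair by a margin. Illustrative only.
import torch
import torch.nn.functional as F


def task_embedding_loss(support_emb, query_emb, margin=0.1):
    """support_emb, query_emb: (num_tasks, emb_dim); row i is task i."""
    support = F.normalize(support_emb, dim=-1)
    query = F.normalize(query_emb, dim=-1)
    sim = query @ support.t()                 # pairwise cosine similarities
    pos = sim.diagonal()                      # same-task similarity
    mask = torch.eye(sim.shape[0], dtype=torch.bool)
    neg = sim.masked_fill(mask, -1.0).max(dim=1).values  # hardest other task
    return F.relu(margin + neg - pos).mean()  # hinge: pos must beat neg


support = torch.randn(5, 64)  # one demo embedding per task
query = torch.randn(5, 64)    # a second demo of the same tasks
print(task_embedding_loss(support, query))
```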
Deep visual foresight for planning robot motion
TLDR
This work develops a method for combining deep action-conditioned video prediction models with model-predictive control that uses entirely unlabeled training data and enables a real robot to perform nonprehensile manipulation (pushing objects) and handle novel objects not seen during training.
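As a sketch of the planning loop this enables, the snippet below runs model-predictive control with the cross-entropy method over a learned action-conditioned video predictor: sample action sequences, score the predicted futures with a task cost, and refit the sampling distribution to the lowest-cost elites. The predict_video and cost callables are hypothetical stand-ins for the learned model and the user-specified objective.

```python
# Hedged sketch of visual MPC with the cross-entropy method (CEM).
# `predict_video(actions)` and `cost(frames)` are hypothetical stand-ins
# for the learned action-conditioned predictor and the task objective.
import numpy as np


def cem_plan(predict_video, cost, horizon=10, act_dim=4,
             samples=200, elites=20, iters=3):
    """Return the first action of the best sequence found by CEM."""
    mean = np.zeros((horizon, act_dim))
    std = np.ones((horizon, act_dim))
    for _ in range(iters):
        acts = mean + std * np.random.randn(samples, horizon, act_dim)
        scores = np.array([cost(predict_video(a)) for a in acts])
        elite = acts[np.argsort(scores)[:elites]]   # lowest-cost sequences
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean[0]  # execute one action, then replan (receding horizon)
```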
One-Shot Visual Imitation Learning via Meta-Learning
TLDR
A meta-imitation learning method that enables a robot to learn how to learn more efficiently, allowing it to acquire new skills from just a single demonstration, and requires data from significantly fewer prior tasks for effective learning of new skills.
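A minimal sketch of the meta-imitation recipe, assuming a MAML-style objective: adapt the policy with one behavior-cloning gradient step on a single demonstration, then meta-train so that the adapted policy imitates a second demonstration of the same task. The tiny MLP policy, losses, and demo format are illustrative, not the paper's architecture.

```python
# Hedged sketch of MAML-style meta-imitation (illustrative, not the
# paper's architecture): one behavior-cloning step on demo A, then a
# meta-update so the adapted policy imitates demo B of the same task.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
meta_opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
inner_lr = 0.1
mse = nn.MSELoss()


def bc_loss(params, obs, acts):
    """Behavior-cloning loss of the policy under the given parameters."""
    return mse(torch.func.functional_call(policy, params, obs), acts)


def meta_step(demo_a, demo_b):
    params = dict(policy.named_parameters())
    # Inner loop: one gradient step of imitation on demonstration A.
    grads = torch.autograd.grad(bc_loss(params, *demo_a),
                                params.values(), create_graph=True)
    adapted = {k: p - inner_lr * g
               for (k, p), g in zip(params.items(), grads)}
    # Outer loop: the adapted policy should imitate demonstration B.
    loss = bc_loss(adapted, *demo_b)
    meta_opt.zero_grad()
    loss.backward()
    meta_opt.step()
    return loss.item()


make_demo = lambda: (torch.randn(32, 16), torch.randn(32, 4))  # toy (obs, acts)
print(meta_step(make_demo(), make_demo()))
```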
Learning modular neural network policies for multi-task and multi-robot transfer
TLDR
The effectiveness of the transfer method for enabling zero-shot generalization with a variety of robots and tasks in simulation for both visual and non-visual tasks is demonstrated.
Robustness via Retrying: Closed-Loop Robotic Manipulation with Self-Supervised Learning
TLDR
A self-supervised algorithm for learning image registration, which can keep track of objects of interest for the duration of the trial, and can be combined with a video-prediction based controller to enable complex behaviors to be learned from scratch using only raw visual inputs.
Multiple Interactions Made Easy (MIME): Large Scale Demonstrations Data for Imitation
TLDR
This paper presents the largest available robotic-demonstration dataset (MIME), which contains 8260 human-robot demonstrations over 20 different robotic tasks, and proposes to use this dataset for the task of mapping 3rd-person video features to robot trajectories.
Visual Foresight: Model-Based Deep Reinforcement Learning for Vision-Based Robotic Control
TLDR
It is demonstrated that visual MPC can generalize to never-before-seen objects, both rigid and deformable, and solve a range of user-defined object manipulation tasks using the same model.
ROBOTURK: A Crowdsourcing Platform for Robotic Skill Learning through Imitation
TLDR
It is shown that the data obtained through RoboTurk enables policy learning on multi-step manipulation tasks with sparse rewards and that using larger quantities of demonstrations during policy learning provides benefits in terms of both learning consistency and final performance.
Time Reversal as Self-Supervision
TLDR
This work introduces the time-reversal model (TRM), a self-supervised model which explores outward from a set of goal states and learns to predict these trajectories in reverse, allowing for complex manipulation tasks with no demonstrations or exploration at test time.
Supersizing self-supervision: Learning to grasp from 50K tries and 700 robot hours
TLDR
This paper takes the leap of increasing the available training data to 40 times more than prior work, leading to a dataset size of 50K data points collected over 700 hours of robot grasping attempts, which allows us to train a Convolutional Neural Network for the task of predicting grasp locations without severe overfitting.
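To sketch what such a grasp predictor can look like, the snippet below follows the patch-based recipe in spirit: a CNN maps an image patch around a candidate grasp point to 18 grasp-angle bins, each scored as a success logit and trained on (patch, angle, success) outcomes from self-supervised grasp attempts. The network and shapes are illustrative assumptions, not the paper's model.

```python
# Hedged sketch in the spirit of patch-based grasp prediction: a CNN
# scores 18 grasp-angle bins per image patch, trained from
# self-supervised (patch, angle, success) outcomes. Illustrative only.
import torch
import torch.nn as nn

NUM_ANGLE_BINS = 18  # 10-degree discretization of the grasp angle

net = nn.Sequential(
    nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
    nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, NUM_ANGLE_BINS),  # one success logit per angle bin
)

patches = torch.randn(8, 3, 64, 64)              # crops around candidate grasps
angles = torch.randint(0, NUM_ANGLE_BINS, (8,))  # attempted angle bins
success = torch.randint(0, 2, (8,)).float()      # did the grasp succeed?

logits = net(patches).gather(1, angles[:, None]).squeeze(1)
loss = nn.functional.binary_cross_entropy_with_logits(logits, success)
```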