Corpus ID: 232135324

Bayesian Meta-Learning for Few-Shot Policy Adaptation Across Robotic Platforms

@article{Ghadirzadeh2021BayesianMF,
  title={Bayesian Meta-Learning for Few-Shot Policy Adaptation Across Robotic Platforms},
  author={A. Ghadirzadeh and X. Chen and Petra Poklukar and Chelsea Finn and M{\aa}rten Bj{\"o}rkman and D. Kragic},
  journal={ArXiv},
  year={2021},
  volume={abs/2103.03697}
}
Reinforcement learning methods can achieve significant performance but require a large amount of training data collected on the same robotic platform. A policy trained with expensive data is rendered useless after even a minor change to the robot hardware. In this paper, we address the challenging problem of adapting a policy, trained to perform a task, to a novel robotic hardware platform given only a few demonstrations of robot motion trajectories on the target robot. We formulate it as…
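The few-shot adaptation setting in the abstract can be sketched generically: starting from pre-trained parameters, take a few gradient steps on a behaviour-cloning loss over the handful of demonstration trajectories from the target robot. The linear policy, squared-error loss, and learning-rate values below are illustrative assumptions for a minimal sketch, not the paper's actual Bayesian meta-learning formulation.

```python
import numpy as np

def adapt_policy(theta, demos, lr=0.1, steps=5):
    """Few-shot adaptation sketch: gradient steps on a mean-squared
    behaviour-cloning loss over a handful of (state, action) pairs.
    `theta` is the weight matrix of a linear policy a = theta @ s."""
    states = np.stack([s for s, _ in demos])    # (N, d_s)
    actions = np.stack([a for _, a in demos])   # (N, d_a)
    for _ in range(steps):
        pred = states @ theta.T                 # (N, d_a)
        grad = 2.0 / len(demos) * (pred - actions).T @ states
        theta = theta - lr * grad
    return theta

# usage: adapt a randomly initialised 1-D policy to three demonstration
# pairs from a "target robot" whose true mapping is a = 2*s
rng = np.random.default_rng(0)
theta0 = rng.normal(size=(1, 1))
demos = [(np.array([s]), np.array([2.0 * s])) for s in (0.5, 1.0, -1.5)]
theta_adapted = adapt_policy(theta0, demos, lr=0.2, steps=50)
```

With only three demonstrations the gradient steps drive the policy weight to the target mapping; in the meta-learning setting, the initial `theta` would itself be trained so that this adaptation succeeds from very little data.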
1 Citation


Reimagining an autonomous vehicle
It is argued that a rethink is required, reconsidering the autonomous vehicle problem in light of the body of knowledge gained since the DARPA challenges, and an alternative vision is presented: a recipe for driving with machine learning, together with grand challenges for research in driving.

References

Showing 1-10 of 49 references
Hardware Conditioned Policies for Multi-Robot Transfer Learning
This work uses the kinematic structure directly as the hardware encoding, shows strong zero-shot transfer to completely novel robots not seen during training, and demonstrates that fine-tuning the policy network is significantly more sample-efficient than training a model from scratch.
RoboNet: Large-Scale Multi-Robot Learning
This paper proposes RoboNet, an open database for sharing robotic experience, which provides an initial pool of 15 million video frames from 7 different robot platforms, and studies how it can be used to learn generalizable models for vision-based robotic manipulation.
Meta Reinforcement Learning for Sim-to-real Domain Adaptation
This work addresses sim-to-real domain transfer by using meta-learning to train a policy that can adapt to a variety of dynamic conditions, and by using a task-specific trajectory-generation model to provide an action space that facilitates quick exploration.
Learning modular neural network policies for multi-task and multi-robot transfer
The transfer method is demonstrated to enable zero-shot generalization across a variety of robots and tasks in simulation, for both visual and non-visual tasks.
One-Shot Visual Imitation Learning via Meta-Learning
A meta-imitation learning method that enables a robot to learn how to learn more efficiently, allowing it to acquire new skills from just a single demonstration, and requires data from significantly fewer prior tasks for effective learning of new skills.
Task-Embedded Control Networks for Few-Shot Imitation Learning
Task-Embedded Control Networks are introduced, which employ ideas from metric learning to create a task embedding that a robot can use to learn new tasks from one or more demonstrations; they surpass the performance of a state-of-the-art method when using only visual information from each demonstration.
Deep visual foresight for planning robot motion
This work develops a method for combining deep action-conditioned video prediction models with model-predictive control that uses entirely unlabeled training data, enabling a real robot to perform nonprehensile manipulation (pushing objects) and to handle novel objects not seen during training.
Probabilistic Model-Agnostic Meta-Learning
This paper proposes a probabilistic meta-learning algorithm that can sample models for a new task from a model distribution trained via a variational lower bound, and shows how reasoning about ambiguity can also be used for downstream active-learning problems.
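The model-sampling idea in this reference can be illustrated with a toy sketch: draw candidate policy parameters from a learned Gaussian over weights and score each sample on the few available demonstrations. The Gaussian parameterisation, linear policy, and best-of-N selection below are simplifying assumptions for illustration; the actual method trains the distribution with a variational lower bound rather than selecting samples this way.

```python
import numpy as np

def sample_adapted_policy(mu, log_sigma, demos, n_samples=100, rng=None):
    """Draw candidate weight matrices for a linear policy a = theta @ s
    from N(mu, sigma^2) and keep the sample with the lowest
    behaviour-cloning error on the given demonstrations."""
    rng = rng or np.random.default_rng()
    states = np.stack([s for s, _ in demos])    # (N, d_s)
    actions = np.stack([a for _, a in demos])   # (N, d_a)
    sigma = np.exp(log_sigma)
    best_theta, best_loss = mu, np.inf
    for _ in range(n_samples):
        theta = mu + sigma * rng.standard_normal(mu.shape)
        loss = float(np.mean((states @ theta.T - actions) ** 2))
        if loss < best_loss:
            best_theta, best_loss = theta, loss
    return best_theta, best_loss

# usage: three demos from a "target robot" whose true mapping is a = 1.5*s
demos = [(np.array([s]), np.array([1.5 * s])) for s in (0.5, 1.0, -1.5)]
mu, log_sigma = np.zeros((1, 1)), np.zeros((1, 1))
theta, loss = sample_adapted_policy(mu, log_sigma, demos,
                                    rng=np.random.default_rng(1))
```

Sampling many models and scoring them on demonstrations also exposes the ambiguity the reference mentions: when the demonstrations are few, several quite different parameter samples can fit them almost equally well.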
One-Shot Imitation Learning
A meta-learning framework for one-shot imitation learning, in which robots should ideally be able to learn from very few demonstrations of any given task and instantly generalize to new situations of the same task, without requiring task-specific engineering.