Corpus ID: 52942444

Task-Embedded Control Networks for Few-Shot Imitation Learning

@article{James2018TaskEmbeddedCN,
  title={Task-Embedded Control Networks for Few-Shot Imitation Learning},
  author={Stephen James and Michael Bloesch and Andrew J. Davison},
  journal={ArXiv},
  year={2018},
  volume={abs/1810.03237}
}
Much like humans, robots should have the ability to leverage knowledge from previously learned tasks in order to learn new tasks quickly in new and unfamiliar environments. Despite this, most robot learning approaches have focused on learning a single task, from scratch, with a limited notion of generalisation, and no way of leveraging the knowledge to learn other tasks more efficiently. One possible solution is meta-learning, but many of the related approaches are limited in their ability to…
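As a rough illustration of the task-embedding idea described in the abstract, the sketch below shows a task-embedding network that compresses a demonstration into a single task vector and a control network that conditions on that vector plus the current observation. The vector observations, network sizes, and mean-pooling of demonstration frames are illustrative assumptions, not the paper's actual architecture or training loss.

```python
import torch
import torch.nn as nn

class TaskEmbeddingNet(nn.Module):
    def __init__(self, obs_dim: int, embed_dim: int = 20):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, embed_dim),
        )

    def forward(self, demo_frames: torch.Tensor) -> torch.Tensor:
        # demo_frames: (num_frames, obs_dim) -> a single normalised task vector.
        z = self.encoder(demo_frames).mean(dim=0)
        return z / (z.norm() + 1e-8)

class ControlNet(nn.Module):
    def __init__(self, obs_dim: int, embed_dim: int, action_dim: int):
        super().__init__()
        self.policy = nn.Sequential(
            nn.Linear(obs_dim + embed_dim, 128), nn.ReLU(),
            nn.Linear(128, action_dim),
        )

    def forward(self, obs: torch.Tensor, task_embedding: torch.Tensor) -> torch.Tensor:
        # Condition the policy on the task embedding by simple concatenation.
        return self.policy(torch.cat([obs, task_embedding], dim=-1))

# Usage: embed one demonstration of a new task, then act with the conditioned policy.
obs_dim, action_dim = 16, 7
embed_net = TaskEmbeddingNet(obs_dim)
control_net = ControlNet(obs_dim, embed_dim=20, action_dim=action_dim)
demo = torch.randn(50, obs_dim)          # one demonstration: 50 observation frames
task_z = embed_net(demo)                 # compact task embedding
action = control_net(torch.randn(obs_dim), task_z)
```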
BC-Z: Zero-Shot Task Generalization with Robotic Imitation Learning
  • 2021
In this paper, we study the problem of enabling a vision-based robotic manipulation system to generalize to novel tasks, a long-standing challenge in robot learning. We approach the challenge…
Learning One-Shot Imitation From Humans Without Humans
With Task-Embedded Control Networks, the system can infer control policies by embedding human demonstrations that condition a control policy, achieving one-shot imitation learning with similar results while utilising only simulation data.
Two-Stage Model-Agnostic Meta-Learning With Noise Mechanism for One-Shot Imitation
This article proposes a generic meta-learning algorithm that divides the learning process into two independent stages (skill cloning and skill transfer) with a noise mechanism, and is compatible with any model.
Watch, Try, Learn: Meta-Learning from Demonstrations and Reward
This work proposes a method that can learn to learn from both demonstrations and trial-and-error experience with sparse reward feedback, and can scale to substantially broader distributions of tasks, as the demonstration reduces the burden of exploration.
Learning Multi-Stage Tasks with One Demonstration via Self-Replay
  • 2021
In this work, we introduce a novel method to learn everyday-like multi-stage tasks from a single human demonstration, without requiring any prior object knowledge. Inspired by the recent…
RoboNet: Large-Scale Multi-Robot Learning
This paper proposes RoboNet, an open database for sharing robotic experience, which provides an initial pool of 15 million video frames, from 7 different robot platforms, and studies how it can be used to learn generalizable models for vision-based robotic manipulation.
Modeling Task Uncertainty for Safe Meta-Imitation Learning
A novel framework, called PETNet, is proposed for estimating task uncertainty through probabilistic inference in the task-embedding space; it can achieve the same or a higher level of performance (success rate on novel tasks at meta-test time) as previous methods.
Scalable Multi-Task Imitation Learning with Autonomous Improvement
This work aims to build an imitation learning system that can continuously improve through autonomous data collection, while simultaneously avoiding the explicit use of reinforcement learning, to maintain the stability, simplicity, and scalability of supervised imitation.
Demonstration-Conditioned Reinforcement Learning for Few-Shot Imitation
This work proposes a novel approach to learning few-shot imitation agents, called demonstration-conditioned reinforcement learning (DCRL), and shows that DCRL outperforms methods based on behaviour cloning on navigation tasks and on robotic manipulation tasks from the Meta-World benchmark.
“Good Robot! Now Watch This!”: Building Generalizable Embeddings via Reinforcement Learning
  • 2021
Modern Reinforcement Learning (RL) algorithms are not sample-efficient to train on multi-step tasks in complex domains, impeding their wider deployment in the real world. We address this problem…

References

Showing 1-10 of 37 references
One-Shot Visual Imitation Learning via Meta-Learning
A meta-imitation learning method that enables a robot to learn how to learn more efficiently, allowing it to acquire new skills from just a single demonstration, and requires data from significantly fewer prior tasks for effective learning of new skills.
One-Shot Imitation from Observing Humans via Domain-Adaptive Meta-Learning
This work presents an approach for one-shot learning from a video of a human: human and robot demonstration data from a variety of previous tasks are used to build up prior knowledge through meta-learning, and by combining this prior knowledge with only a single video demonstration from a human, the robot can perform the task that the human demonstrated.
End-to-End Training of Deep Visuomotor Policies
This paper develops a method that can be used to learn policies that map raw image observations directly to torques at the robot's motors, trained using a partially observed guided policy search method, with supervision provided by a simple trajectory-centric reinforcement learning method.
Learning an Embedding Space for Transferable Robot Skills
3D Simulation for Robot Arm Control with Deep Q-Learning
This work presents an approach that uses 3D simulations to train a 7-DOF robotic arm in a control task without any prior knowledge, and reports preliminary results on direct transfer of the policies to a real robot without any further training.
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning…
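As a rough sketch of the inner/outer adaptation loop that MAML describes, the following toy example implements a first-order approximation (FOMAML) on a sine-regression task family. The task sampler, network, and all hyperparameters are illustrative assumptions, and the full method additionally backpropagates through the inner gradient step.

```python
import copy
import torch
import torch.nn as nn

def make_sine_task():
    # Hypothetical task family: regress a sine wave with random amplitude/phase.
    amp, phase = torch.rand(1) * 4 + 1, torch.rand(1) * 3.14
    def draw(batch=10):
        x = torch.rand(batch, 1) * 10 - 5
        return x, amp * torch.sin(x + phase)
    return draw

model = nn.Sequential(nn.Linear(1, 40), nn.ReLU(), nn.Linear(40, 1))
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
inner_lr, loss_fn = 0.01, nn.MSELoss()

for meta_step in range(1000):
    meta_opt.zero_grad()
    for _ in range(4):                           # tasks per meta-batch
        draw = make_sine_task()
        learner = copy.deepcopy(model)           # start from the current meta-parameters
        params = list(learner.parameters())
        # Inner loop: one SGD step on the task's support set.
        x_s, y_s = draw()
        grads = torch.autograd.grad(loss_fn(learner(x_s), y_s), params)
        with torch.no_grad():
            for p, g in zip(params, grads):
                p -= inner_lr * g
        # Outer loop (first-order approximation): query-set gradients of the
        # adapted learner are accumulated onto the shared meta-parameters.
        x_q, y_q = draw()
        q_grads = torch.autograd.grad(loss_fn(learner(x_q), y_q), params)
        for p, g in zip(model.parameters(), q_grads):
            p.grad = g if p.grad is None else p.grad + g
    meta_opt.step()
```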
Guided Cost Learning: Deep Inverse Optimal Control via Policy Optimization
This work explores how inverse optimal control (IOC) can be used to learn behaviors from demonstrations, with applications to torque control of high-dimensional robotic systems and an efficient sample-based approximation for MaxEnt IOC.
Deep Imitation Learning for Complex Manipulation Tasks from Virtual Reality Teleoperation
It is described how consumer-grade Virtual Reality headsets and hand tracking hardware can be used to naturally teleoperate robots to perform complex tasks and how imitation learning can learn deep neural network policies that can acquire the demonstrated skills.
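The entry above relies on behaviour cloning of teleoperated demonstrations; below is a minimal sketch of that supervised recipe. The randomly generated stand-in data and the small network are assumptions standing in for logged (observation, action) pairs and the paper's actual visuomotor policy.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

obs_dim, action_dim = 64, 7
# Stand-in for logged teleoperation data: (N, obs_dim) observations, (N, action_dim) actions.
demo_obs, demo_act = torch.randn(5000, obs_dim), torch.randn(5000, action_dim)
loader = DataLoader(TensorDataset(demo_obs, demo_act), batch_size=128, shuffle=True)

policy = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, action_dim))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for epoch in range(10):
    for obs, act in loader:
        loss = nn.functional.mse_loss(policy(obs), act)   # regress onto the demonstrated action
        opt.zero_grad()
        loss.backward()
        opt.step()
```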
Sim-to-Real Reinforcement Learning for Deformable Object Manipulation
This work uses a combination of state-of-the-art deep reinforcement learning algorithms to solve the problem of manipulating deformable objects (specifically cloth), and evaluates the approach on three tasks: folding a towel up to a mark, folding a face towel diagonally, and draping a piece of cloth over a hanger.
Domain randomization for transferring deep neural networks from simulation to the real world
This paper explores domain randomization, a simple technique for training models on simulated images that transfer to real images by randomizing rendering in the simulator, and achieves the first successful transfer of a deep neural network trained only on simulated RGB images to the real world for the purpose of robotic control.
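To make the domain-randomization recipe concrete, here is a minimal sketch of per-episode visual randomization. The parameter names, ranges, and the hypothetical `simulator.apply_visuals` call mentioned in the comment are assumptions for illustration and are not tied to any specific simulator API.

```python
import random

def sample_visual_randomization():
    # Draw one set of visual parameters; names and ranges are illustrative assumptions.
    return {
        "light_intensity": random.uniform(0.3, 1.5),
        "light_direction": [random.uniform(-1.0, 1.0) for _ in range(3)],
        "object_rgb": [random.random() for _ in range(3)],
        "table_texture_id": random.randrange(1000),            # index into a texture pool
        "camera_position_jitter": [random.gauss(0.0, 0.02) for _ in range(3)],
        "camera_fov_deg": random.uniform(40.0, 60.0),
    }

# During training, a fresh set of parameters would be applied to the renderer before
# every simulated episode, e.g. simulator.apply_visuals(sample_visual_randomization()),
# where `simulator` is a hypothetical wrapper around the rendering engine.
for episode in range(3):
    print(sample_visual_randomization())
```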