Human-level control through deep reinforcement learning
This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent capable of learning to excel at a diverse array of challenging tasks.
Overcoming catastrophic forgetting in neural networks
- J. Kirkpatrick, Razvan Pascanu, R. Hadsell
- Computer Science · Proceedings of the National Academy of Sciences
- 2 December 2016
It is shown that it is possible to overcome this limitation of connectionist models and train networks that maintain expertise on tasks they have not experienced for a long time, by selectively slowing down learning on the weights important for previous tasks.
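The mechanism described above can be sketched as a quadratic penalty that anchors each weight to its previous-task optimum in proportion to its estimated importance; this is a minimal illustration in the spirit of the paper, with all names, shapes, and numbers chosen here for illustration rather than taken from the authors' code.

```python
import numpy as np

def importance_penalty(theta, theta_star, importance, lam=100.0):
    """EWC-style regularizer: weights with high `importance` (e.g. a
    Fisher-information estimate) are strongly penalized for drifting
    from their task-A optimum `theta_star`, which slows their learning
    on task B; unimportant weights remain free to move."""
    return 0.5 * lam * np.sum(importance * (theta - theta_star) ** 2)

theta_star = np.array([1.0, -2.0])      # optimum found on task A
importance = np.array([10.0, 0.1])      # per-weight importance for task A
drifted = np.array([1.5, -1.5])         # both weights moved by 0.5

print(importance_penalty(drifted, theta_star, importance))  # 126.25
```

The important first weight contributes 100x more penalty than the unimportant second one for the same drift, which is exactly the "selective slowing" effect.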
Progressive Neural Networks
This work evaluates this progressive networks architecture extensively on a wide variety of reinforcement learning tasks, and demonstrates that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
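The core architectural idea, lateral connections that let a new task column reuse a frozen earlier column's features, can be sketched in a few lines; the layer sizes and weight names below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda x: np.maximum(x, 0.0)

# Column A was trained on task A and is now frozen; column B is new.
# U_ab is the lateral connection feeding A's hidden features into B.
W1_a = rng.normal(size=(4, 3))          # frozen task-A hidden layer
W1_b = rng.normal(size=(4, 3))          # trainable task-B hidden layer
W2_b = rng.normal(size=(2, 4))          # task-B output layer
U_ab = rng.normal(size=(2, 4))          # lateral: A's hidden -> B's output

def forward_task_b(x):
    h1_a = relu(W1_a @ x)               # reused, never overwritten
    h1_b = relu(W1_b @ x)
    return W2_b @ h1_b + U_ab @ h1_a    # B combines its own and A's features
```

Because column A's weights are never updated, task-A performance cannot degrade while task B trains, which is how progressive networks sidestep forgetting while still transferring features.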
Meta-Learning with Latent Embedding Optimization
This work shows that latent embedding optimization can achieve state-of-the-art performance on the competitive miniImageNet and tieredImageNet few-shot classification tasks, and indicates that LEO captures uncertainty in the data and performs adaptation more effectively by optimizing in latent space.
Neural scene representation and rendering
The Generative Query Network (GQN) is introduced, a framework within which machines learn to represent scenes using only their own sensors, demonstrating representation learning without human labels or domain knowledge.
Policy Distillation
A novel method called policy distillation is presented that can be used to extract the policy of a reinforcement learning agent and train a new network that performs at the expert level while being dramatically smaller and more efficient.
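The extraction step amounts to matching the small student network's action distribution to the expert teacher's; a common formulation, sketched below under my own naming and with an illustrative temperature, is a KL divergence from the sharpened teacher distribution to the student's.

```python
import numpy as np

def softmax(logits, tau=1.0):
    z = logits / tau
    z = z - z.max()                     # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, tau=0.1):
    """KL(teacher || student) over actions. A small `tau` sharpens the
    teacher's outputs (useful when they are Q-values rather than
    probabilities); the student is read at temperature 1."""
    p = softmax(teacher_logits, tau)
    q = softmax(student_logits)
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))
```

Minimizing this loss over states sampled from the teacher's experience trains the compact student to reproduce the expert policy; the loss is zero only when the two distributions coincide.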
PathNet: Evolution Channels Gradient Descent in Super Neural Networks
Successful transfer learning is demonstrated: fixing the parameters along a path learned on task A and re-evolving a new population of paths for task B allows task B to be learned faster than it could be from scratch or after fine-tuning.
DARLA: Improving Zero-Shot Transfer in Reinforcement Learning
DARLA (DisentAngled Representation Learning Agent) is a new multi-stage RL agent that learns to see before learning to act, and it significantly outperforms conventional baselines in zero-shot domain adaptation scenarios.
Continual Unsupervised Representation Learning
- Dushyant Rao, Francesco Visin, Andrei A. Rusu, Y. Teh, Razvan Pascanu, R. Hadsell
- Computer Science · NeurIPS
- 1 October 2019
The proposed approach (CURL) performs task inference directly within the model, is able to dynamically expand to capture new concepts over its lifetime, and incorporates additional rehearsal-based techniques to deal with catastrophic forgetting.
Sim-to-Real Robot Learning from Pixels with Progressive Nets
- Andrei A. Rusu, Matej Vecerík, Thomas Rothörl, N. Heess, Razvan Pascanu, R. Hadsell
- Computer Science · CoRL
- 13 October 2016
This work proposes using progressive networks to bridge the reality gap and transfer learned policies from simulation to the real world, and presents an early demonstration of this approach with a number of experiments in the domain of robot manipulation.