Corpus ID: 1923568

Policy Distillation

@article{Rusu2016PolicyD,
  title={Policy Distillation},
  author={Andrei A. Rusu and Sergio Gomez Colmenarejo and {\c{C}}a{\u{g}}lar G{\"u}l{\c{c}}ehre and Guillaume Desjardins and James Kirkpatrick and Razvan Pascanu and Volodymyr Mnih and Koray Kavukcuoglu and Raia Hadsell},
  journal={CoRR},
  year={2015},
  volume={abs/1511.06295}
}
Abstract: Policies for complex visual tasks have been successfully learned with deep reinforcement learning, using an approach called deep Q-networks (DQN), but relatively large (task-specific) networks and extensive training are needed to achieve good performance. [...] Key Method: Furthermore, the same method can be used to consolidate multiple task-specific policies into a single policy. We demonstrate these claims using the Atari domain and show that the multi-task distilled agent outperforms the single-task…
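The method sketched in the abstract trains a small student network to match the output distribution of a teacher DQN, with the teacher's Q-values sharpened by a low softmax temperature before computing a KL divergence. A minimal NumPy sketch of that KL-based distillation loss follows; the function names and the default temperature are illustrative assumptions, not taken verbatim from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax (subtract the row max before exponentiating).
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(teacher_q, student_q, tau=0.01):
    """Mean KL( softmax(teacher_q / tau) || softmax(student_q) ) over a batch.

    teacher_q, student_q: arrays of shape (batch, num_actions).
    A small tau sharpens the teacher's Q-values toward its greedy action,
    so the student is pushed to reproduce the teacher's action choices.
    """
    p = softmax(teacher_q / tau)            # sharpened teacher "policy"
    log_p = np.log(p + 1e-12)               # epsilon guards against log(0)
    log_q = np.log(softmax(student_q) + 1e-12)
    return float(np.mean(np.sum(p * (log_p - log_q), axis=-1)))
```

In practice this loss would be minimized by gradient descent on the student's parameters over states drawn from the teacher's replay memory; the sketch only shows the objective itself.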
290 Citations


Knowledge Transfer for Deep Reinforcement Learning with Hierarchical Experience Replay (50 citations, highly influenced)
Pre-training with non-expert human demonstration for deep reinforcement learning (11 citations)
Exploiting Hierarchy for Learning and Transfer in KL-regularized RL (25 citations)
Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning (148 citations)
DisCoRL: Continual Reinforcement Learning via Policy Distillation (17 citations, highly influenced)
Periodic Intra-Ensemble Knowledge Distillation for Reinforcement Learning (1 citation)
Initial Progress in Transfer for Deep Reinforcement Learning Algorithms (11 citations)
Distral: Robust multitask reinforcement learning (256 citations)

References

Showing 1-10 of 30 references
Playing Atari with Deep Reinforcement Learning (5,013 citations)
Massively Parallel Methods for Deep Reinforcement Learning (299 citations)
Deep Reinforcement Learning with Double Q-Learning (2,548 citations)
Human-level control through deep reinforcement learning (12,031 citations)
FitNets: Hints for Thin Deep Nets (1,407 citations)
Recurrent neural network training with dark knowledge transfer (78 citations)
Knowledge Transfer Pre-training (17 citations)
Deep Learning for Real-Time Atari Game Play Using Offline Monte-Carlo Tree Search Planning (286 citations)
Distilling the Knowledge in a Neural Network (5,686 citations, highly influential)
A Dozen Tricks with Multitask Learning, R. Caruana, in Neural Networks: Tricks of the Trade, 2012 (15 citations)