Corpus ID: 59606283

Distilling Policy Distillation

  • Authors: W. Czarnecki, Razvan Pascanu, Simon Osindero, Siddhant M. Jayakumar, G. Swirszcz, Max Jaderberg
  • Published: 2019
  • Fields: Computer Science, Mathematics
  • Venue: ArXiv
  • Abstract: The transfer of knowledge from one policy to another is an important tool in Deep Reinforcement Learning. This process, referred to as distillation, has been used to great success, for example, by enhancing the optimisation of agents, leading to stronger performance faster, on harder domains [26, 32, 5, 8]. Despite the widespread use and conceptual simplicity of distillation, many different formulations are used in practice, and the subtle variations between them can often drastically change …
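The operation the abstract describes, transferring one policy's behaviour to another, is most commonly implemented as minimising the KL divergence from a teacher policy to a student policy at visited states. The sketch below is a minimal, illustrative version of that per-state loss for a discrete action space; the function names and test logits are assumptions for illustration, not details taken from the paper.

```python
import math

def softmax(logits):
    """Convert action logits into a probability distribution over actions."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def distillation_loss(teacher_logits, student_logits):
    """Per-state KL(teacher || student), the quantity a student policy
    typically minimises when being distilled from a teacher."""
    p = softmax(teacher_logits)
    q = softmax(student_logits)
    return sum(pi * (math.log(pi) - math.log(qi)) for pi, qi in zip(p, q))
```

The loss is zero exactly when the student's action distribution matches the teacher's, and positive otherwise, so gradient steps on it pull the student's logits toward reproducing the teacher's behaviour state by state.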
    26 Citations


    • Evolutionary Stochastic Policy Distillation
    • Dual Policy Distillation (4 citations)
    • Robust Domain Randomised Reinforcement Learning through Peer-to-Peer Distillation
    • Knowledge Transfer in Multi-Task Deep Reinforcement Learning for Continuous Control
    • Meta Automatic Curriculum Learning
    • Transfer Learning in Deep Reinforcement Learning: A Survey (2 citations)


    References

    • Distral: Robust multitask reinforcement learning (237 citations)
    • Reinforcement Learning with Unsupervised Auxiliary Tasks (669 citations)
    • Divide-and-Conquer Reinforcement Learning (53 citations)
    • Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning (326 citations)
    • Successor Features for Transfer in Reinforcement Learning (217 citations)
    • Evolution Strategies as a Scalable Alternative to Reinforcement Learning (720 citations)
    • Kickstarting Deep Reinforcement Learning (44 citations)
    • IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures (524 citations)
    • Deep Mutual Learning (364 citations)