
Distilling Policy Distillation

  • Authors: W. Czarnecki, Razvan Pascanu, Simon Osindero, Siddhant M. Jayakumar, G. Swirszcz, Max Jaderberg
  • Published: 2019
  • Venue: ArXiv
  • Fields: Computer Science, Mathematics
  • Abstract: The transfer of knowledge from one policy to another is an important tool in Deep Reinforcement Learning. This process, referred to as distillation, has been used to great success, for example, to accelerate the optimisation of agents, yielding stronger performance more quickly on harder domains [26, 32, 5, 8]. Despite the widespread use and conceptual simplicity of distillation, many different formulations are used in practice, and the subtle variations between them can often drastically change…
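As a concrete illustration of one common distillation formulation (a minimal sketch of a KL-matching objective, not necessarily the exact variant analysed in this paper), a student policy can be trained to match a teacher's action distribution at sampled states. The names `distillation_loss`, `teacher_probs`, and `student_logits` here are illustrative, not from the paper:

```python
import numpy as np

def distillation_loss(teacher_probs, student_logits):
    """Per-state policy distillation loss: KL(teacher || student),
    averaged over a batch of states. One common formulation; the paper's
    point is that many subtly different variants exist in practice."""
    # Convert student logits to log-probabilities with a stable softmax.
    z = student_logits - student_logits.max(axis=1, keepdims=True)
    log_student = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    # KL(p || q) = sum_a p(a) * (log p(a) - log q(a)), per state.
    kl = (teacher_probs * (np.log(teacher_probs) - log_student)).sum(axis=1)
    return kl.mean()

# Toy batch: 2 states, 3 actions.
teacher = np.array([[0.7, 0.2, 0.1],
                    [0.1, 0.8, 0.1]])
student_logits = np.array([[2.0, 1.0, 0.5],
                           [0.0, 1.0, 0.0]])
loss = distillation_loss(teacher, student_logits)  # non-negative scalar
```

Minimising this loss by gradient descent on the student's parameters pulls the student's action distribution toward the teacher's at each visited state; the choice of which policy generates the visited states is one of the subtle variations the abstract alludes to.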
    23 Citations. Citing papers include:
    • Evolutionary Stochastic Policy Distillation
    • Dual Policy Distillation
    • Meta Automatic Curriculum Learning
    • Transfer Learning in Deep Reinforcement Learning: A Survey
    • Automatic Curriculum Learning For Deep RL: A Short Survey


    References:
    • Distral: Robust Multitask Reinforcement Learning
    • Reinforcement Learning with Unsupervised Auxiliary Tasks
    • Divide-and-Conquer Reinforcement Learning
    • Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning
    • Policy Optimization by Genetic Distillation
    • Successor Features for Transfer in Reinforcement Learning
    • Evolution Strategies as a Scalable Alternative to Reinforcement Learning
    • Kickstarting Deep Reinforcement Learning
    • IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures