Corpus ID: 221470196

Sample-Efficient Automated Deep Reinforcement Learning

@article{Franke2021SampleEfficientAD,
  title   = {Sample-Efficient Automated Deep Reinforcement Learning},
  author  = {J{\"o}rg K. H. Franke and Gregor Koehler and Andr{\'e} Biedenkapp and Frank Hutter},
  journal = {arXiv preprint arXiv:2009.01555},
  year    = {2021}
}
Despite significant progress on challenging problems across various domains, applying state-of-the-art deep reinforcement learning (RL) algorithms remains difficult due to their sensitivity to the choice of hyperparameters. This sensitivity can partly be attributed to the non-stationarity of the RL problem, which may require different hyperparameter settings at different stages of the learning process. Additionally, in the RL setting, hyperparameter optimization (HPO) requires a large…
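The abstract's point that non-stationarity may call for different hyperparameters at different stages is what population-based schemes address: hyperparameters are adapted online while training runs. Below is a minimal sketch of one truncation-selection generation in that style. All helper names (`pbt_step`, `train_fn`, `mutate_fn`) are hypothetical; the paper's actual method additionally shares experience off-policy across the population, which this sketch omits.

```python
import random

def pbt_step(population, train_fn, mutate_fn, truncation=0.25):
    """One generation of population-based hyperparameter adaptation (sketch).

    population: list of dicts with keys "hparams" and "score".
    train_fn(hparams) -> score after a training segment.
    mutate_fn(hparams) -> perturbed copy of the hyperparameters.
    """
    # Train/evaluate every member with its current hyperparameters.
    for member in population:
        member["score"] = train_fn(member["hparams"])
    # Rank members; the bottom fraction copies a top member's
    # hyperparameters and perturbs them, so settings can change
    # over the course of learning instead of being fixed up front.
    ranked = sorted(population, key=lambda m: m["score"], reverse=True)
    cutoff = max(1, int(len(ranked) * truncation))
    for loser in ranked[-cutoff:]:
        winner = random.choice(ranked[:cutoff])
        loser["hparams"] = mutate_fn(dict(winner["hparams"]))
    return ranked[0]

if __name__ == "__main__":
    # Toy usage: "training" just rewards learning rates close to 3e-4.
    random.seed(0)
    pop = [{"hparams": {"lr": random.uniform(1e-5, 1e-2)}, "score": 0.0}
           for _ in range(8)]
    train = lambda h: -abs(h["lr"] - 3e-4)
    mutate = lambda h: {"lr": h["lr"] * random.choice([0.8, 1.2])}
    for _ in range(20):
        best = pbt_step(pop, train, mutate)
```

Because only the bottom-ranked members are overwritten, the best member's hyperparameters survive each generation, so the top score never degrades in this toy setting.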

