Asynchronous Methods for Deep Reinforcement Learning

@inproceedings{Mnih2016AsynchronousMF,
  title={Asynchronous Methods for Deep Reinforcement Learning},
  author={Volodymyr Mnih and Adri{\`a} Puigdom{\`e}nech Badia and Mehdi Mirza and Alex Graves and Timothy P. Lillicrap and Tim Harley and David Silver and Koray Kavukcuoglu},
  booktitle={ICML},
  year={2016}
}
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training, allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU.
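The core idea (several actor-learners applying lock-free asynchronous updates to shared parameters, each exploring independently) can be illustrated with a small sketch. The snippet below is a toy illustration, not the paper's implementation: it runs asynchronous one-step Q-learning, one of the four variants, with a shared tabular Q-function in place of a deep network, on an assumed chain MDP; the thread count and hyperparameters are illustrative, and the paper's target network and accumulated gradients are omitted for brevity.

# Toy sketch of asynchronous one-step Q-learning: several actor-learner
# threads share one set of parameters (here a tabular Q-function rather
# than a deep network) and update it without locks, Hogwild!-style.
# The chain MDP and hyperparameters are illustrative assumptions.
import random
import threading

import numpy as np

N_STATES = 10        # chain 0..9; entering state 9 ends the episode
N_ACTIONS = 2        # 0 = left, 1 = right
GAMMA = 0.99
ALPHA = 0.1
EPSILON = 0.1
EPISODES_PER_WORKER = 500

Q = np.zeros((N_STATES, N_ACTIONS))  # shared parameters

def step(state, action):
    # Deterministic chain: stepping right into the last state pays +1.
    nxt = state + 1 if action == 1 else max(state - 1, 0)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def actor_learner(seed):
    rng = random.Random(seed)  # each thread explores on its own
    for _ in range(EPISODES_PER_WORKER):
        state, done = 0, False
        while not done:
            if rng.random() < EPSILON:
                action = rng.randrange(N_ACTIONS)
            else:  # greedy, breaking ties at random
                best = np.flatnonzero(Q[state] == Q[state].max())
                action = int(best[rng.randrange(len(best))])
            nxt, reward, done = step(state, action)
            target = reward if done else reward + GAMMA * Q[nxt].max()
            # Lock-free asynchronous update of the shared parameters.
            Q[state, action] += ALPHA * (target - Q[state, action])
            state = nxt

# Four asynchronous actor-learners. CPython's GIL interleaves these
# threads rather than running them truly in parallel; the paper's
# actor-learners occupy separate CPU cores.
threads = [threading.Thread(target=actor_learner, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("greedy policy:", np.argmax(Q, axis=1))  # expect action 1 (right) along the chain

Because each thread is at a different point of its own episode at any moment, the simultaneous updates are less correlated than those of a single agent, which is the stabilizing effect the abstract refers to (playing a role similar to experience replay in DQN).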
