Massively Parallel Methods for Deep Reinforcement Learning

@article{Nair2015MassivelyPM,
  title={Massively Parallel Methods for Deep Reinforcement Learning},
  author={Arun Nair and Praveen Srinivasan and Sam Blackwell and Cagdas Alcicek and Rory Fearon and Alessandro De Maria and Vedavyas Panneershelvam and Mustafa Suleyman and Charles Beattie and Stig Petersen and Shane Legg and Volodymyr Mnih and Koray Kavukcuoglu and David Silver},
  journal={ArXiv},
  year={2015},
  volume={abs/1507.04296}
}
We present the first massively distributed architecture for deep reinforcement learning. This architecture uses four main components: parallel actors that generate new behaviour; parallel learners that are trained from stored experience; a distributed neural network to represent the value function or behaviour policy; and a distributed store of experience. We used our architecture to implement the Deep Q-Network algorithm (DQN). Our distributed algorithm was applied to 49 games from Atari 2600…
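The four components named in the abstract can be sketched in a single process. The code below is an illustrative stand-in, not the paper's implementation: the toy ring environment, the tabular Q-function (in place of a distributed neural network), and all hyperparameters are assumptions made for the sketch; in the actual architecture the actors and learners run on separate machines.

```python
import random
from collections import deque


class ReplayStore:
    # Distributed store of experience (here: one in-memory deque).
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)

    def add(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        k = min(batch_size, len(self.buffer))
        return random.sample(list(self.buffer), k)


class Actor:
    # Generates new behaviour with an epsilon-greedy policy over shared
    # Q parameters. Toy environment: states on a ring, action 0 moves
    # left, action 1 moves right, reward 1 for reaching state 0.
    def __init__(self, q, n_states, epsilon=0.1):
        self.q, self.n_states, self.epsilon = q, n_states, epsilon
        self.state = random.randrange(n_states)

    def step(self, store):
        if random.random() < self.epsilon:
            action = random.choice((0, 1))
        else:
            action = max((0, 1), key=lambda a: self.q[(self.state, a)])
        delta = -1 if action == 0 else 1
        next_state = (self.state + delta) % self.n_states
        reward = 1.0 if next_state == 0 else 0.0
        store.add((self.state, action, reward, next_state))
        self.state = next_state


class Learner:
    # Trains from stored experience with the standard Q-learning target,
    # computed against a periodically synchronised target copy.
    def __init__(self, q, target_q, lr=0.5, gamma=0.9):
        self.q, self.target_q, self.lr, self.gamma = q, target_q, lr, gamma

    def update(self, store, batch_size=32):
        for s, a, r, s2 in store.sample(batch_size):
            target = r + self.gamma * max(self.target_q[(s2, 0)],
                                          self.target_q[(s2, 1)])
            self.q[(s, a)] += self.lr * (target - self.q[(s, a)])


def train(n_states=5, n_actors=4, steps=500, sync_every=25):
    # The shared Q "network" is a table keyed by (state, action).
    q = {(s, a): 0.0 for s in range(n_states) for a in (0, 1)}
    target_q = dict(q)
    store = ReplayStore()
    actors = [Actor(q, n_states) for _ in range(n_actors)]
    learner = Learner(q, target_q)
    for t in range(steps):
        for actor in actors:       # actors run in parallel in the real system
            actor.step(store)
        learner.update(store)      # learners likewise run in parallel
        if t % sync_every == 0:    # periodic target-parameter sync
            target_q.update(q)
    return q, store
```

After training, the greedy policy in the state adjacent to the goal prefers the action that reaches it immediately, which is the behaviour the architecture is meant to learn; the separation into actor, learner, and store is what allows each part to be scaled out independently.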

Citations

Publications citing this paper (showing 10 of 141):

  • Distributed Prioritized Experience Replay (ICLR, 2018)
  • Emergent Solutions to High-Dimensional Multitask Reinforcement Learning (Evolutionary Computation, 2018)
  • Recent Progress of Deep Reinforcement Learning: From AlphaGo to AlphaGo Zero. Tang Zhen-tao, Shao Kun, Zhao Dong-bin, Zhu Yuan-heng (2018)
  • Deep Reinforcement Learning with Double Q-learning
  • Metaoptimization on a Distributed System for Deep Reinforcement Learning
  • StarCraft Micromanagement With Reinforcement Learning and Curriculum Transfer Learning (IEEE Transactions on Emerging Topics in Computational Intelligence, 2018)


Citation statistics

  • 14 highly influenced citations

  • An average of 37 citations per year from 2017 through 2019
