#Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning

@inproceedings{Tang2017ExplorationAS,
  title={#Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning},
  author={Haoran Tang and Rein Houthooft and Davis Foote and Adam Stooke and Xi Chen and Yan Duan and John Schulman and Filip De Turck and Pieter Abbeel},
  booktitle={Advances in Neural Information Processing Systems (NIPS)},
  year={2017}
}
Count-based exploration algorithms are known to perform near-optimally when used in conjunction with tabular reinforcement learning (RL) methods for solving small discrete Markov decision processes (MDPs). It is generally thought that count-based methods cannot be applied in high-dimensional state spaces, since most states will only occur once. Recent deep RL exploration strategies are able to deal with high-dimensional continuous state spaces through complex heuristics, often relying on…
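The core idea the paper studies is to make counting feasible in high-dimensional spaces by hashing states into discrete codes and counting code occurrences, then granting an exploration bonus proportional to 1/sqrt(n) of the visited code's count. A minimal sketch of this scheme, assuming a static SimHash-style random projection (the class name, parameter values, and projection choice here are illustrative, not the paper's exact implementation):

```python
import numpy as np

class HashingCountBonus:
    """Count-based exploration bonus for continuous states.

    States are mapped to k-bit binary codes via a random projection
    (SimHash-style), and an exploration bonus beta / sqrt(n(code))
    is granted, where n(code) counts visits to that code.
    """

    def __init__(self, state_dim, k=16, beta=0.01, seed=0):
        rng = np.random.default_rng(seed)
        # Random projection matrix; nearby states tend to share codes.
        self.A = rng.standard_normal((k, state_dim))
        self.beta = beta
        self.counts = {}

    def bonus(self, state):
        # Discretize the state: sign pattern of the projected vector.
        code = tuple((self.A @ np.asarray(state, dtype=float) > 0).astype(int))
        self.counts[code] = self.counts.get(code, 0) + 1
        # Bonus shrinks as the hashed region is visited more often.
        return self.beta / np.sqrt(self.counts[code])
```

The bonus would be added to the environment reward at each step, so rarely visited regions of state space yield larger intrinsic rewards; the granularity of exploration is controlled by the code length k.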
This paper has 61 citations.