Continuous Deep Q-Learning with Model-based Acceleration

@inproceedings{Gu2016ContinuousDQ,
  title={Continuous Deep Q-Learning with Model-based Acceleration},
  author={Shixiang Gu and Timothy P. Lillicrap and Ilya Sutskever and Sergey Levine},
  booktitle={ICML},
  year={2016}
}
Model-free reinforcement learning has been successfully applied to a range of challenging problems, and has recently been extended to handle large neural network policies and value functions. However, the sample complexity of model-free algorithms, particularly when using high-dimensional function approximators, tends to limit their applicability to physical systems. In this paper, we explore algorithms and representations to reduce the sample complexity of deep reinforcement learning for…
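
The continuous Q-learning variant this paper proposes, the normalized advantage function (NAF), decomposes the Q-function as Q(x, u) = V(x) + A(x, u), where the advantage A(x, u) = -1/2 (u - mu(x))^T P(x) (u - mu(x)) is quadratic in the action u and P(x) = L(x) L(x)^T is a state-dependent positive-definite matrix assembled from a lower-triangular L(x). The following is a minimal sketch of that Q-function head in PyTorch, not the authors' code; the class name and layer sizes are illustrative assumptions.

import torch
import torch.nn as nn

class NAFQFunction(nn.Module):
    """Q(x, u) = V(x) + A(x, u) with a quadratic, state-dependent advantage."""

    def __init__(self, state_dim: int, action_dim: int, hidden_dim: int = 200):
        super().__init__()
        self.action_dim = action_dim
        self.trunk = nn.Sequential(
            nn.Linear(state_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.value_head = nn.Linear(hidden_dim, 1)        # V(x)
        self.mu_head = nn.Linear(hidden_dim, action_dim)  # mu(x), the greedy action
        # Entries of the lower-triangular L(x); P(x) = L(x) L(x)^T.
        self.l_head = nn.Linear(hidden_dim, action_dim * (action_dim + 1) // 2)
        self.register_buffer("tril_idx", torch.tril_indices(action_dim, action_dim))

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        h = self.trunk(state)
        value = self.value_head(h)                        # (batch, 1)
        mu = self.mu_head(h)                              # (batch, action_dim)
        # Build L(x); exponentiate its diagonal so P(x) is positive definite,
        # which makes A(x, u) <= 0 and mu(x) the exact maximizer of Q.
        L = torch.zeros(state.shape[0], self.action_dim, self.action_dim,
                        device=state.device)
        L[:, self.tril_idx[0], self.tril_idx[1]] = self.l_head(h)
        diag = torch.arange(self.action_dim, device=state.device)
        L[:, diag, diag] = L[:, diag, diag].exp()
        P = L @ L.transpose(1, 2)
        # Quadratic advantage: A(x, u) = -1/2 (u - mu)^T P (u - mu).
        d = (action - mu).unsqueeze(-1)                   # (batch, action_dim, 1)
        advantage = -0.5 * (d.transpose(1, 2) @ P @ d).reshape(-1, 1)
        return value + advantage                          # Q(x, u)

Because the advantage term is non-positive and maximized at u = mu(x), the greedy action is available in closed form, which is what lets standard Q-learning with experience replay and target networks carry over to continuous control.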
This paper has highly influenced 26 other papers.

Citations

Publications citing this paper: 198 extracted citations. Semantic Scholar estimates that this publication has 296 citations based on the available data.

[Figure: Citations per Year, 2016–2018]

