Evaluation of Policy Gradient Methods and Variants on the Cart-Pole Benchmark

@article{Riedmiller2007EvaluationOP,
  title={Evaluation of Policy Gradient Methods and Variants on the Cart-Pole Benchmark},
  author={Martin A. Riedmiller and Jan Peters and Stefan Schaal},
  journal={2007 IEEE International Symposium on Approximate Dynamic Programming and Reinforcement Learning},
  year={2007},
  pages={254-261}
}
In this paper, we evaluate different versions of the three main kinds of model-free policy gradient methods, i.e., finite-difference gradients, 'vanilla' policy gradients, and natural policy gradients. Each of these methods is first presented in its simple form and subsequently refined and optimized. By carrying out numerous experiments on the cart-pole regulator benchmark we aim to provide a useful baseline for future research on parameterized policy search algorithms. Portable C++ code is…
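
As a rough illustration of the simplest of the three method families named in the abstract, the sketch below shows a finite-difference policy-gradient estimate in C++. It is an assumption-laden illustration, not the portable C++ code distributed with the paper: the rollout function, perturbation size, learning rate, and the idea of a parameter vector for (say) a linear cart-pole controller are all hypothetical choices made here for concreteness.

```cpp
// Minimal sketch of a finite-difference policy-gradient step.
// Assumes a generic episodic return function J(theta) obtained by rolling out
// a parameterized policy (e.g., a linear cart-pole controller). All names and
// constants here are illustrative, not the paper's implementation.
#include <cstddef>
#include <functional>
#include <vector>

// Estimate grad J(theta) by perturbing each parameter by +/- delta
// and taking central differences of the measured returns.
std::vector<double> finiteDifferenceGradient(
    const std::function<double(const std::vector<double>&)>& rollout_return,
    const std::vector<double>& theta,
    double delta = 0.01)
{
    std::vector<double> grad(theta.size(), 0.0);
    for (std::size_t i = 0; i < theta.size(); ++i) {
        std::vector<double> plus = theta, minus = theta;
        plus[i]  += delta;
        minus[i] -= delta;
        // Each evaluation runs one (or several averaged) cart-pole episodes.
        grad[i] = (rollout_return(plus) - rollout_return(minus)) / (2.0 * delta);
    }
    return grad;
}

// Plain gradient-ascent update on the policy parameters.
void updatePolicy(std::vector<double>& theta,
                  const std::vector<double>& grad,
                  double learning_rate = 0.05)
{
    for (std::size_t i = 0; i < theta.size(); ++i)
        theta[i] += learning_rate * grad[i];
}
```

Central differences keep the bias of the estimate low at the cost of two rollouts per parameter per update; averaging the return over several episodes per evaluation is a common way to reduce the variance introduced by stochastic initial states.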
Highly Influential: this paper has highly influenced 11 other papers.
Highly Cited: this paper has 92 citations.

Citations

Publications citing this paper.

93 Citations

[Citations per Year chart, 2009–2017]
Semantic Scholar estimates that this publication has 93 citations based on the available data.


