PAC Optimal Exploration in Continuous Space Markov Decision Processes

@inproceedings{Pazis2013PACOE,
  title={PAC Optimal Exploration in Continuous Space Markov Decision Processes},
  author={Jason Pazis and Ronald Parr},
  booktitle={AAAI},
  year={2013}
}
Current exploration algorithms can be classified into two broad categories: heuristic and PAC optimal. While numerous researchers have used heuristic approaches such as ε-greedy exploration successfully, such approaches lack formal, finite-sample guarantees and may need a significant amount of fine-tuning to produce good results. PAC optimal exploration algorithms, on the other hand, offer strong theoretical guarantees but are inapplicable in domains of realistic size. The goal of this paper is to…
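For context, the ε-greedy heuristic mentioned above takes a uniformly random action with probability ε and otherwise acts greedily with respect to the agent's current value estimates. A minimal Python sketch follows; the function name, the q_values representation, and the fixed ε are illustrative assumptions, not taken from the paper:

import random

def epsilon_greedy_action(q_values, epsilon=0.1):
    # With probability epsilon, explore by choosing a uniformly random action;
    # otherwise exploit by choosing the action with the highest estimated value.
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

# Example: epsilon_greedy_action([0.2, 0.5, 0.1]) returns action 1 most of the time.

Roughly speaking, PAC optimal methods avoid this hand-tuned randomness by directing exploration according to the uncertainty in their value estimates, which is what makes finite-sample guarantees possible.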
