MEC—A Near-Optimal Online Reinforcement Learning Algorithm for Continuous Deterministic Systems

@article{Zhao2015MECANO,
  title={MEC—A Near-Optimal Online Reinforcement Learning Algorithm for Continuous Deterministic Systems},
  author={Dongbin Zhao and Yuanheng Zhu},
  journal={IEEE Transactions on Neural Networks and Learning Systems},
  year={2015},
  volume={26},
  pages={346--356}
}
In this paper, the first probably approximately correct (PAC) algorithm for continuous deterministic systems that does not rely on a model of the system dynamics is proposed. It combines the state-aggregation technique with the efficient-exploration principle and makes full use of online observed samples. A grid partitions the continuous state space into cells, in which observed samples are stored. A near-upper Q operator is defined to produce a near-upper Q function from the samples in each cell. The…
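The abstract's core idea, partitioning the continuous state space with a grid and maintaining an optimistic ("near-upper") Q function per cell, can be sketched roughly as follows. This is a minimal illustration, not the paper's MEC algorithm itself: the class and function names (`GridQ`, `cell_index`), the uniform grid, and the monotone-decreasing optimistic update are all assumptions chosen for clarity.

```python
import numpy as np

def cell_index(state, low, high, bins):
    """Map a continuous state to the index tuple of its grid cell (an assumed uniform grid)."""
    ratios = (np.asarray(state, dtype=float) - low) / (high - low)
    idx = np.clip((ratios * bins).astype(int), 0, bins - 1)
    return tuple(idx)

class GridQ:
    """Grid-aggregated Q-table with optimistic (near-upper) initialization.

    Unvisited cells start at an upper bound on the discounted return,
    so a greedy policy is drawn toward unexplored regions.
    """
    def __init__(self, low, high, bins, n_actions, gamma=0.95, r_max=1.0):
        self.low = np.asarray(low, dtype=float)
        self.high = np.asarray(high, dtype=float)
        self.bins = np.asarray(bins)
        self.gamma = gamma
        # Upper bound on any discounted return: r_max / (1 - gamma).
        self.q_max = r_max / (1.0 - gamma)
        self.n_actions = n_actions
        self.q = {}  # cell index tuple -> array of per-action Q values

    def q_values(self, state):
        c = cell_index(state, self.low, self.high, self.bins)
        return self.q.setdefault(c, np.full(self.n_actions, self.q_max))

    def update(self, s, a, r, s_next):
        # Deterministic-transition backup on the aggregated cells.
        target = r + self.gamma * self.q_values(s_next).max()
        qs = self.q_values(s)
        # Keep the near-upper property: values only move down from the bound.
        qs[a] = min(qs[a], target)
```

A greedy action on `q_values(state)` then explores optimistically: cells with no samples keep the upper-bound value and are preferred until observed transitions pull their Q values down.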

Citations

Publications citing this paper.
Showing 1-10 of 25 extracted citations

Data Science

Communications in Computer and Information Science • 2018
Highly Influenced

Multisource Transfer Double DQN Based on Actor Learning

IEEE Transactions on Neural Networks and Learning Systems • 2018

Safe Exploration Algorithms for Reinforcement Learning Controllers

IEEE Transactions on Neural Networks and Learning Systems • 2018

References

Publications referenced by this paper.
Showing 1-10 of 41 references

Reinforcement Learning and Dynamic Programming Using Function Approximators

L. Busoniu, R. Babuska, B. De Schutter, D. Ernst
2010
Highly Influenced
