Integral Reinforcement Learning for Continuous-Time Input-Affine Nonlinear Systems With Simultaneous Invariant Explorations

@article{Lee2015IntegralRL,
  title={Integral Reinforcement Learning for Continuous-Time Input-Affine Nonlinear Systems With Simultaneous Invariant Explorations},
  author={Jae Young Lee and Jin Bae Park and Yoon Ho Choi},
  journal={IEEE Transactions on Neural Networks and Learning Systems},
  year={2015},
  volume={26},
  pages={916-932}
}
This paper focuses on a class of reinforcement learning (RL) algorithms, named integral RL (I-RL), that solve continuous-time (CT) nonlinear optimal control problems with input-affine system dynamics. First, we extend the concepts of exploration, integral temporal difference, and invariant admissibility to the target CT nonlinear system that is governed by a control policy plus a probing signal called an exploration. Then, we show input-to-state stability (ISS) and invariant admissibility of…
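For context, the "integral temporal difference" named in the abstract is, in standard I-RL formulations, built from an interval form of the Bellman equation. The sketch below uses generic I-RL notation (input-affine dynamics \dot{x} = f(x) + g(x)u, state cost q(\cdot), input weight R, reinforcement interval T, current policy u_i with value V_i); these symbols are assumed for illustration and need not match the paper's exact notation.

\[
V_i\big(x(t)\big) \;=\; \int_{t}^{t+T} \Big( q\big(x(\tau)\big) + u_i\big(x(\tau)\big)^{\top} R\, u_i\big(x(\tau)\big) \Big)\, d\tau \;+\; V_i\big(x(t+T)\big),
\]

so that, for an estimate \(\hat{V}\) of the value function, the integral temporal difference residual over one interval is

\[
e_{\mathrm{ITD}}(t) \;=\; \int_{t}^{t+T} \Big( q\big(x(\tau)\big) + u_i\big(x(\tau)\big)^{\top} R\, u_i\big(x(\tau)\big) \Big)\, d\tau \;+\; \hat{V}\big(x(t+T)\big) - \hat{V}\big(x(t)\big),
\]

which vanishes when \(\hat{V} = V_i\) along trajectories generated by u_i alone. Roughly speaking, once a probing signal e is added so that the applied input is u = u_i + e, the trajectory is no longer generated by u_i by itself, which is why the paper studies input-to-state stability and invariant admissibility of the explored closed loop.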

