Analyzing policy iteration in optimal control

  • Ali Heydari
  • Published in the 2016 American Control Conference (ACC)

Abstract

Policy iteration, as an adaptive/approximate dynamic programming-based approach to optimal control, is investigated. The context is optimal control of discrete-time nonlinear dynamics with undiscounted cost functions. Convergence of the learning iterations and uniqueness of the solution to the corresponding Bellman equation are established, leading to the…
DOI: 10.1109/ACC.2016.7526567
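
A minimal sketch of the policy iteration scheme the abstract refers to, for discrete-time dynamics with an undiscounted cost. To keep it runnable it is specialized to a linear system with quadratic cost, so policy evaluation reduces to a discrete Lyapunov equation; the dynamics (A, B), weights (Q, R), and initial gain K are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, solve_discrete_are

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # assumed dynamics x_{k+1} = A x_k + B u_k
B = np.array([[0.0], [0.1]])
Q = np.eye(2)                             # stage cost x'Qx + u'Ru
R = np.array([[1.0]])

K = np.array([[10.0, 5.0]])               # assumed admissible (stabilizing) initial policy u = -K x

for i in range(50):
    # Policy evaluation: V^i(x) = x'P x satisfies the Bellman equation for the fixed policy,
    # P = (A - BK)' P (A - BK) + Q + K'RK  (a discrete Lyapunov equation).
    Acl = A - B @ K
    P = solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)
    # Policy improvement: minimize u'Ru + (Ax + Bu)' P (Ax + Bu) over u.
    K_new = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    if np.max(np.abs(K_new - K)) < 1e-10:
        K = K_new
        break
    K = K_new

# In this linear-quadratic special case the fixed point coincides with the
# stabilizing solution of the discrete algebraic Riccati equation.
P_are = solve_discrete_are(A, B, Q, R)
print("policy-iteration gain:", K)
print("gap to Riccati solution:", np.max(np.abs(P - P_are)))
```

In the nonlinear setting studied in the paper, the evaluation and improvement steps keep the same structure but are carried out with function approximation rather than in closed form.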
