Infinite-Horizon Linear-Quadratic Control by Forward Propagation of the Differential Riccati Equation

Abstract

One of the foundational principles of optimal control theory is that optimal control laws are propagated backward in time. For linear-quadratic control, this means that the solution of the Riccati equation must be obtained by backward integration from a final-time condition. These features are a direct consequence of the transversality conditions of optimal control, which imply that a free final state corresponds to a fixed final adjoint state [1], [2]. In addition, the principle of dynamic programming and the associated Hamilton–Jacobi–Bellman equation are inherently backward-propagating methodologies [3]. The need for backward propagation means that, in practice, the control law must be computed in advance, stored, and then implemented forward in time. The control law may be either open loop or closed loop (as in the linear-quadratic case) but, in both cases, must be computed in advance. Fortunately, the dual case of optimal observers, such as the Kalman filter, is based on forward propagation of the error covariance and thus is more amenable to practical implementation.

For linear time-invariant (LTI) plants, a practical suboptimal solution is to implement the asymptotic control law based on the algebraic Riccati equation (ARE). For plants with linear time-varying (LTV) dynamics, perhaps arising from the linearization of a nonlinear plant about a specified trajectory, the main drawback of backward propagation is the fact that the future dynamics of the plant must be known. To circumvent this requirement, at least partially, various forward-propagating control laws have been developed, such as receding-horizon control and model predictive control [4]–[7]. Although these techniques require that the future dynamics of the plant be known, the control law is determined over a limited horizon, and thus the user can tailor the control law based on the available modeling information. Of course, all such control laws are suboptimal over the entire horizon.

An alternative approach to linear-quadratic control is to modify the sign of the Riccati equation and integrate it forward in time, in analogy with the Kalman filter. This approach, which is described in [8] and [9], requires knowledge of the dynamics at only the present time. As shown in [9], stability is guaranteed for plants with symmetric closed-loop dynamics as well as for plants with sufficiently fast dynamics. However, a proof of stability for larger classes of plants remains open. Finally, the reinforcement learning approach of [10] is also based on forward integration, as is the "cost-to-come" …
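To make the backward-propagation requirement concrete, the following is a minimal sketch (not taken from the paper) of finite-horizon LQR obtained by integrating the differential Riccati equation backward from a final-time condition. The plant matrices A, B, the weights Q, R, the final condition Pf, and the horizon tf are illustrative placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative plant and weights (placeholders, not from the paper)
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
Pf = np.zeros((2, 2))   # final-time condition P(tf)
tf = 5.0

def dre_rhs(t, p_flat):
    # Differential Riccati equation: dP/dt = -(A'P + P A - P B R^{-1} B' P + Q)
    P = p_flat.reshape(2, 2)
    dP = -(A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T) @ P + Q)
    return dP.ravel()

# Backward integration: the time span runs from tf down to 0
sol = solve_ivp(dre_rhs, [tf, 0.0], Pf.ravel(), rtol=1e-8)
P0 = sol.y[:, -1].reshape(2, 2)       # P(0), available only after the backward sweep
K0 = np.linalg.solve(R, B.T @ P0)     # feedback gain at t = 0, u = -K x
print(K0)
```

Note that the gain at the initial time is available only after the entire backward sweep has been completed, which is exactly the "compute in advance, store, then implement forward" pattern described in the abstract.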
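For the LTI case, the asymptotic control law based on the algebraic Riccati equation can be sketched as follows, again with placeholder matrices; SciPy's solve_continuous_are returns the stabilizing ARE solution.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Same illustrative LTI plant and weights as above
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)   # stabilizing solution of the ARE
K = np.linalg.solve(R, B.T @ P)        # constant gain, u = -K x
print(np.linalg.eigvals(A - B @ K))    # closed-loop eigenvalues
```

The printed closed-loop eigenvalues should all have negative real parts, reflecting the stabilizing property of the ARE-based gain for a stabilizable and detectable LTI plant.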
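The forward-propagation alternative attributed to [8], [9] can be illustrated only schematically here: the sketch below integrates a Riccati variable forward from an initial condition P(0) = P0 alongside the state, in analogy with the Kalman filter error covariance, and applies the gain as it is computed. The exact equation, sign convention, and stability conditions used in the paper may differ; the time-varying A(t), P0, and horizon are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

def A_of_t(t):
    # Placeholder time-varying dynamics, assumed known only at the current time
    return np.array([[0.0, 1.0], [-2.0 - 0.5 * np.sin(t), -0.5]])

B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
P0 = np.eye(2)              # initial Riccati condition (assumption)
x0 = np.array([1.0, 0.0])   # initial state (assumption)

def closed_loop_rhs(t, z):
    # Joint forward propagation of the state x and the Riccati variable P
    x, P = z[:2], z[2:].reshape(2, 2)
    A = A_of_t(t)
    K = np.linalg.solve(R, B.T @ P)          # current gain, u = -K x
    dx = (A - B @ K) @ x
    dP = A.T @ P + P @ A - P @ B @ K + Q     # Riccati dynamics integrated forward in time
    return np.concatenate([dx, dP.ravel()])

sol = solve_ivp(closed_loop_rhs, [0.0, 20.0], np.concatenate([x0, P0.ravel()]))
print(sol.y[:2, -1])   # state at the end of the forward integration
```

Because the right-hand side uses A(t) only at the current time, the gain is computed on the fly without knowledge of the future dynamics, which is the practical advantage emphasized in the abstract.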


Cite this paper

@inproceedings{Prach2015InfiniteH,
  title={Infinite-Horizon Linear-Quadratic Control by Forward Propagation of the Differential Riccati Equation},
  author={Anna Prach and Ozan Tekinalp and Dennis S. Bernstein},
  year={2015}
}