• Corpus ID: 237513743

Neural network optimal feedback control with enhanced closed loop stability

Tenavi Nakamura-Zimmerer, Qi Gong, Wei Kang
Recent research has shown that supervised learning can be an effective tool for designing optimal feedback controllers for high-dimensional nonlinear dynamic systems. But the behavior of these neural network (NN) controllers is still not well understood. In this paper we use numerical simulations to demonstrate that typical test accuracy metrics do not effectively capture the ability of an NN controller to stabilize a system. In particular, some NNs with high test accuracy can fail to stabilize… 
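The abstract's central point, that low test error on the control map need not imply closed-loop stability, can be illustrated with a minimal sketch. Everything here is a hypothetical toy (scalar dynamics x' = x + u, a cubic error term standing in for NN approximation error), not the paper's actual experiments:

```python
import numpy as np

# Hypothetical scalar system x' = x + u. For running cost x^2 + u^2,
# the ARE gives the optimal LQR feedback u*(x) = -(1 + sqrt(2)) x.
k_opt = 1.0 + np.sqrt(2.0)

def u_star(x):
    return -k_opt * x

# A "learned" controller that is accurate near the origin (where the test
# set is sampled) but whose cubic error term dominates far from it.
def u_nn(x):
    return -k_opt * x + 0.3 * x**3

# Typical test-accuracy metric: MSE against u* on states near the origin.
rng = np.random.default_rng(0)
x_test = rng.uniform(-0.5, 0.5, size=1000)
mse = np.mean((u_nn(x_test) - u_star(x_test))**2)

def rollout_stable(u, x0, dt=1e-3, T=10.0, bound=1e3):
    """Euler-integrate x' = x + u(x); report whether the trajectory stays bounded."""
    x = x0
    for _ in range(int(T / dt)):
        x = x + dt * (x + u(x))
        if abs(x) > bound:
            return False
    return True

print(f"test MSE = {mse:.2e}")      # small (on the order of 1e-4)
print(rollout_stable(u_star, 3.0))  # optimal feedback stabilizes from x0 = 3
print(rollout_stable(u_nn, 3.0))    # "learned" controller diverges from x0 = 3
```

The learned controller matches the optimal one to a small MSE on the test set, yet its rollout from x0 = 3 diverges, which is exactly the gap between test-accuracy metrics and stabilization that the paper studies.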

A Machine Learning Enhanced Algorithm for the Optimal Landing Problem
This paper proposes a machine learning enhanced algorithm for solving the optimal landing problem that uses Pontryagin’s minimum principle and a space-marching technique to provide good initial guesses for the boundary value problem solver.
Neural Network Optimal Feedback Control with Guaranteed Local Stability
Several novel neural network architectures are proposed that guarantee local asymptotic stability while retaining the approximation capacity to learn the optimal feedback policy semi-globally, and are found to be nearly optimal in testing.
Empowering Optimal Control with Machine Learning: A Perspective from Model Predictive Control
This paper takes model predictive control, a popular optimal control method, as the primary example to survey recent progress that leverages machine learning techniques to empower optimal control solvers.


Semiglobal optimal feedback stabilization of autonomous systems via deep neural network approximation
A learning approach to optimal feedback gains for nonlinear continuous-time control systems is proposed, and the existence and convergence of optimal stabilizing neural network feedback controllers are proved.
On the Stability Analysis of Deep Neural Network Representations of an Optimal State Feedback
This article considers the stability of nonlinear systems controlled by such a network representation of the optimal feedback and proposes a novel method based on differential algebra techniques to study the robustness of a nominal trajectory with respect to perturbations of the initial conditions.
Practical stabilization through real-time optimal control
This paper proposes a solution for feedback implementations through a domain transformation technique and a Radau-based pseudospectral method for real-time computation of infinite-horizon, nonlinear, optimal feedback controls.
QRnet: Optimal Regulator Design With LQR-Augmented Neural Networks
The proposed approach leverages physics-informed machine learning to solve high-dimensional Hamilton-Jacobi-Bellman equations arising in optimal feedback control and augment linear quadratic regulators with neural networks to handle nonlinearities.
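QRnet augments the value function with a neural network on top of an LQR solution; the sketch below applies the same construction directly to a feedback law for brevity. The gain K, the tiny random tanh network, and the 2-state setting are all hypothetical, chosen only to show how subtracting the network's value and Jacobian at the origin makes the controller's linearization at 0 coincide exactly with the LQR one:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2-layer tanh network standing in for the learned nonlinear term.
W1 = rng.normal(size=(8, 2)); b1 = rng.normal(size=8)
W2 = rng.normal(size=(1, 8))

def nn(x):
    return (W2 @ np.tanh(W1 @ x + b1)).item()

def nn_grad(x, eps=1e-6):
    # Finite-difference gradient of the scalar network output.
    g = np.zeros(2)
    for i in range(2):
        e = np.zeros(2); e[i] = eps
        g[i] = (nn(x + e) - nn(x - e)) / (2 * eps)
    return g

K = np.array([1.0, 1.7])  # stabilizing LQR gain for the linearization (assumed given)
g0, J0 = nn(np.zeros(2)), nn_grad(np.zeros(2))

def u_qrnet_style(x):
    # LQR term plus an NN correction whose value and Jacobian vanish at the
    # origin, so the closed-loop linearization at 0 is exactly the LQR one.
    return -K @ x + (nn(x) - g0 - J0 @ x)

print(u_qrnet_style(np.zeros(2)))  # exactly 0 at the origin
```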
Learning the optimal state-feedback via supervised imitation learning
This work describes in detail the best learning pipeline able to approximate the state-feedback map to very high accuracy via deep neural networks, and introduces the use of the softplus activation function in the hidden units, showing that it results in a smoother control profile whilst retaining the benefits of rectifiers.
A Neural Network Approach for High-Dimensional Optimal Control
This paper proposes a neural network approach for solving high-dimensional optimal control problems arising in real-time applications, fusing the Pontryagin Maximum Principle and Hamilton-Jacobi-Bellman approaches and parameterizing the value function with a neural network.
Least Squares Solutions of the HJB Equation With Neural Network Value-Function Approximators
This paper presents an empirical study of iterative least squares minimization of the Hamilton-Jacobi-Bellman (HJB) residual with a neural network (NN) approximation of the value function, under the assumption of stochastic dynamics.
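A minimal sketch of the least-squares HJB idea on a 1-D LQR problem where the answer is known in closed form. The quadratic value ansatz V(x) = p x² and the collocation grid are assumptions made for illustration; substituting the minimizing control u = -V'(x)/2 into the HJB equation for x' = x + u with running cost x² + u² leaves the residual (1 + 2p - p²) x²:

```python
import numpy as np
from scipy.optimize import least_squares

# Collocation points at which the HJB residual is evaluated.
xs = np.linspace(0.1, 2.0, 50)

def hjb_residual(p):
    # HJB residual for the ansatz V(x) = p * x**2 on the toy problem above.
    return (1.0 + 2.0 * p[0] - p[0] ** 2) * xs ** 2

# Least-squares minimization of the residual over the single parameter p.
p_fit = least_squares(hjb_residual, x0=[2.0]).x[0]
p_true = 1.0 + np.sqrt(2.0)  # exact ARE solution of p^2 - 2p - 1 = 0
print(p_fit, p_true)
```

Replacing the scalar p with the weights of a neural network value-function approximator gives the general scheme the paper studies.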
Closed-loop structural stability for linear-quadratic optimal systems
  • P. Wong, M. Athans
  • Mathematics
    1976 IEEE Conference on Decision and Control including the 15th Symposium on Adaptive Processes
  • 1976
This paper contains an explicit parametrization of a subclass of linear constant gain feedback maps that will not destabilize an originally open-loop stable system. These results can then be used to…
Gradient-augmented Supervised Learning of Optimal Feedback Laws Using State-Dependent Riccati Equations
High-dimensional nonlinear stabilization tests demonstrate that real-time sequential large-scale Algebraic Riccati Equation solvers can be substituted by a suitably trained feedforward neural network.
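A sketch of how the training data for such a network could be generated: solve the state-dependent Riccati equation at sampled states with `scipy.linalg.solve_continuous_are`, producing pairs (x, K(x)) to which a feedforward network could then be fit. The state-dependent coefficient (SDC) factorization below is a hypothetical pendulum-like toy, not the paper's benchmark:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy system x' = A(x) x + B u with an SDC factorization assumed given.
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

def A(x):
    # Pendulum-like factorization: np.sinc(z) = sin(pi z)/(pi z),
    # so np.sinc(x1/pi) = sin(x1)/x1, which is well defined at x1 = 0.
    s = np.sinc(x[0] / np.pi)
    return np.array([[0.0, 1.0], [s, 0.0]])

def sdre_gain(x):
    """Solve the state-dependent Riccati equation at x; a trained feedforward
    network would replace this call in real time."""
    P = solve_continuous_are(A(x), B, Q, R)
    return np.linalg.solve(R, B.T @ P)  # K(x) = R^{-1} B^T P(x)

# Training set for the surrogate network: sampled states and their SDRE gains.
X = np.random.default_rng(2).uniform(-2, 2, size=(100, 2))
Y = np.stack([sdre_gain(x).ravel() for x in X])
print(Y.shape)  # (100, 2)
```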
The Quadratic-Quadratic Regulator Problem: Approximating feedback controls for quadratic-in-state nonlinear systems
This paper describes an algorithm that exploits the structure of the QQR problem arising when implementing Al’Brekht’s method, producing linear systems with a special structure that modern tensor-based linear solvers can exploit.