• Corpus ID: 237513743

Neural network optimal feedback control with enhanced closed loop stability

@article{NakamuraZimmerer2021NeuralNO,
  title={Neural network optimal feedback control with enhanced closed loop stability},
  author={Tenavi Nakamura-Zimmerer and Qi Gong and Wei Kang},
  journal={arXiv preprint arXiv:2109.07466},
  year={2021}
}
Recent research has shown that supervised learning can be an effective tool for designing optimal feedback controllers for high-dimensional nonlinear dynamic systems. But the behavior of these neural network (NN) controllers is still not well understood. In this paper we use numerical simulations to demonstrate that typical test accuracy metrics do not effectively capture the ability of an NN controller to stabilize a system. In particular, some NNs with high test accuracy can fail to stabilize… 

Citations

Neural Network Optimal Feedback Control with Guaranteed Local Stability
TLDR
Several novel NN architectures are proposed which guarantee local stability while retaining the semi-global capacity to approximate the optimal feedback policy, and are found to be near-optimal in testing.
Empowering Optimal Control with Machine Learning: A Perspective from Model Predictive Control
TLDR
This paper takes the model predictive control, a popular optimal control method, as the primary example to survey recent progress that leverages machine learning techniques to empower optimal control solvers.
A Machine Learning Enhanced Algorithm for the Optimal Landing Problem
TLDR
A machine learning enhanced algorithm for solving the optimal landing problem using Pontryagin’s minimum principle and a space-marching technique to provide good initial guesses for the boundary value problem solver is proposed.

References

SHOWING 1-10 OF 41 REFERENCES
Semiglobal optimal feedback stabilization of autonomous systems via deep neural network approximation
TLDR
A learning approach for optimal feedback gains for nonlinear continuous-time control systems is proposed, and the existence and convergence of optimal stabilizing neural network feedback controllers are proved.
On the Stability Analysis of Deep Neural Network Representations of an Optimal State Feedback
TLDR
This article considers the stability of nonlinear systems controlled by such a network representation of the optimal feedback and proposes a novel method based on differential algebra techniques to study the robustness of a nominal trajectory with respect to perturbations of the initial conditions.
Practical stabilization through real-time optimal control
TLDR
This paper proposes a feedback implementation based on a domain-transformation technique and a Radau-based pseudospectral method for real-time computation of infinite-horizon, nonlinear optimal feedback controls.
QRnet: Optimal Regulator Design With LQR-Augmented Neural Networks
TLDR
The proposed approach leverages physics-informed machine learning to solve high-dimensional Hamilton-Jacobi-Bellman equations arising in optimal feedback control and augment linear quadratic regulators with neural networks to handle nonlinearities.
Learning the optimal state-feedback via supervised imitation learning
TLDR
This work describes in detail a learning pipeline that approximates the state-feedback map to very high accuracy via deep neural networks, and introduces the softplus activation function in the hidden units, showing that it yields a smoother control profile whilst retaining the benefits of rectifiers.
A Neural Network Approach for High-Dimensional Optimal Control
TLDR
A neural network approach for solving high-dimensional optimal control problems arising in real-time applications by fusing the Pontryagin Maximum Principle and Hamilton-Jacobi-Bellman approaches and parameterizing the value function with a neural network is proposed.
Aggressive Online Control of a Quadrotor via Deep Network Representations of Optimality Principles
TLDR
A deep neural network is used to map the robot states directly to control actions, and it is shown that G&CNets lead to significantly faster trajectory execution owing to the less restrictive nature of the allowed state-to-input mappings.
Least Squares Solutions of the HJB Equation With Neural Network Value-Function Approximators
TLDR
An empirical study of iterative least-squares minimization of the Hamilton-Jacobi-Bellman (HJB) residual using a neural network (NN) approximation of the value function, under the assumption of stochastic dynamics.
Adaptive Deep Learning for High Dimensional Hamilton-Jacobi-Bellman Equations
TLDR
A data-driven method to approximate semi-global solutions to HJB equations for general high dimensional nonlinear systems and compute optimal feedback controls in real-time with neural networks trained on data generated independently of any state space discretization is proposed.