Adaptive Deep Learning for High Dimensional Hamilton-Jacobi-Bellman Equations

@article{NakamuraZimmerer2021AdaptiveDL,
  title={Adaptive Deep Learning for High Dimensional Hamilton-Jacobi-Bellman Equations},
  author={Tenavi Nakamura-Zimmerer and Qi Gong and Wei Kang},
  journal={SIAM J. Sci. Comput.},
  year={2021},
  volume={43},
  pages={A1221--A1247}
}
Computing optimal feedback controls for nonlinear systems generally requires solving Hamilton-Jacobi-Bellman (HJB) equations, which, in high dimensions, are notoriously difficult. Existing strategies for high dimensional problems generally rely on specific, restrictive problem structures, or are valid only locally around some nominal trajectory. In this paper, we propose a data-driven method to approximate semi-global solutions to HJB equations for general high dimensional nonlinear systems and… 
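The abstract describes a data-driven approach: generate optimal trajectories offline, then fit a neural network value function by supervised learning. As a hedged illustration of that general idea only (not the paper's actual algorithm or code), the toy sketch below uses a scalar LQR problem, where the exact HJB solution V(x) = p x² is known from the algebraic Riccati equation and can stand in for the open-loop boundary-value solves; a quadratic model is then fit to sampled (state, value, gradient) data by gradient-augmented least squares. All variable names are illustrative.

```python
# Toy sketch of value-function regression from (state, value, gradient) data.
# Hypothetical example; not from the paper. Scalar LQR: xdot = a*x + b*u,
# running cost q*x^2 + r*u^2, for which V(x) = p*x^2 solves the HJB equation.
import math

a, b, q, r = 1.0, 1.0, 1.0, 1.0

# Algebraic Riccati equation (b^2/r) p^2 - 2 a p - q = 0, positive root.
p = (a + math.sqrt(a * a + b * b * q / r)) * r / (b * b)

# "Dataset": exact values V(x) = p x^2 and gradients V'(x) = 2 p x,
# standing in for samples obtained from open-loop optimal-control solves.
xs = [i / 10.0 for i in range(-20, 21)]
data = [(x, p * x * x, 2.0 * p * x) for x in xs]

# Fit V_hat(x) = c x^2 by minimizing the gradient-augmented least-squares loss
# sum (c x^2 - V)^2 + sum (2 c x - V')^2, which is closed-form in c.
num = sum(x * x * V + 2.0 * x * dV for x, V, dV in data)
den = sum(x ** 4 + 4.0 * x * x for x, _, _ in data)
c = num / den

# The learned model recovers the optimal feedback u(x) = -(b/r) * c * x.
print(round(c, 6), round(p, 6))
```

With exact samples the fit recovers p exactly; in the paper's setting the data come from numerically solved two-point boundary-value problems and the model is a neural network rather than a single quadratic coefficient, but the supervised-regression structure is the same.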

Citations

Deep neural network approximations for the stable manifolds of the Hamilton-Jacobi equations
TLDR
This paper rigorously proves that if an approximation is sufficiently close to the exact stable manifold of the HJB equation, then the corresponding control derived from this approximation is near optimal, and proposes a deep learning method to approximate the stable manifolds.
Actor-Critic Method for High Dimensional Static Hamilton-Jacobi-Bellman Partial Differential Equations based on Neural Networks
TLDR
A novel numerical method for high dimensional Hamilton–Jacobi–Bellman (HJB) type elliptic partial differential equations (PDEs) based on neural network parametrization of the value and control functions using stochastic calculus is proposed.
A Neural Network Approach for High-Dimensional Optimal Control
TLDR
A neural network approach for solving high-dimensional optimal control problems arising in real-time applications by fusing the Pontryagin Maximum Principle and Hamilton-Jacobi-Bellman approaches and parameterizing the value function with a neural network is proposed.
Deep neural network approximation for high-dimensional parabolic Hamilton-Jacobi-Bellman equations
TLDR
It is shown that for HJB equations that arise in the context of the optimal control of certain Markov processes the solution can be approximated by deep neural networks without incurring the curse of dimension.
A Neural Network Approach for Real-Time High-Dimensional Optimal Control
TLDR
A neural network approach for solving high-dimensional optimal control problems arising in real-time applications is proposed; it fuses the Hamilton-Jacobi-Bellman (HJB) and Pontryagin Maximum Principle approaches by parameterizing the value function with an NN, and the number of parameters is empirically observed to scale linearly with the dimension of the control problem, thereby mitigating the curse of dimensionality.
Approximating optimal feedback controllers of finite horizon control problems using hierarchical tensor formats
TLDR
Linear error propagation with respect to the time discretization is proved, and numerical evidence is given by controlling a diffusion equation with an unstable reaction term and an Allen-Cahn equation.
Data-Driven Recovery of Optimal Feedback Laws through Optimality Conditions and Sparse Polynomial Regression
TLDR
An extended set of low and high-dimensional numerical tests in nonlinear optimal control reveal that enriching the dataset with gradient information reduces the number of training samples, and that the sparse polynomial regression consistently yields a feedback law of lower complexity.
Neural Network Optimal Feedback Control with Guaranteed Local Stability
TLDR
Several novel NN architectures are proposed that guarantee local stability while retaining the semi-global capacity to learn the optimal feedback policy, and are found to be near-optimal in testing.
A Tensor Decomposition Approach for High-Dimensional Hamilton-Jacobi-Bellman Equations
TLDR
The proposed method combines a tensor train approximation for the value function together with a Newton-like iterative method for the solution of the resulting nonlinear system, solving Hamilton-Jacobi equations with more than 100 dimensions at modest cost.
Data-Driven Computational Methods for the Domain of Attraction and Zubov's Equation
TLDR
It is proved that a neural network approximation exists for the Lyapunov function of power systems such that the approximation error is a cubic polynomial of the number of generators.

References

SHOWING 1-10 OF 44 REFERENCES
Using Neural Networks to Compute Approximate and Guaranteed Feasible Hamilton-Jacobi-Bellman PDE Solutions
TLDR
An algorithm that leverages a neural network to approximate the value function of Hamilton-Jacobi-Bellman partial differential equations (HJB PDE) generates near optimal controls which are guaranteed to successfully drive the system to a target state.
Overcoming the curse of dimensionality: Solving high-dimensional partial differential equations using deep learning
TLDR
A deep learning-based approach that can handle general high-dimensional parabolic PDEs is presented: the PDE is reformulated as a control problem and the gradient of the unknown solution is approximated by neural networks, much in the spirit of deep reinforcement learning with the gradient acting as the policy function.
Mitigating the curse of dimensionality: sparse grid characteristics method for optimal feedback control and HJB equations
TLDR
A new computational method for finding feedback optimal control and solving HJB equations which is able to mitigate the curse of dimensionality is presented and an upper bound for the approximation error is proved.
Forward-Backward Stochastic Neural Networks: Deep Learning of High-dimensional Partial Differential Equations
M. Raissi, arXiv, 2018
TLDR
This work approximates the unknown solution by a deep neural network, which enables one to benefit from the merits of automatic differentiation in partial differential equations.
Solving high-dimensional partial differential equations using deep learning
TLDR
A deep learning-based approach that can handle general high-dimensional parabolic PDEs using backward stochastic differential equations and the gradient of the unknown solution is approximated by neural networks, very much in the spirit of deep reinforcement learning with the gradient acting as the policy function.
Least Squares Solutions of the HJB Equation With Neural Network Value-Function Approximators
TLDR
An empirical study of iterative least squares minimization of the Hamilton-Jacobi-Bellman (HJB) residual with a neural network (NN) approximation of the value function, under the assumption of stochastic dynamics.
A Causality Free Computational Method for HJB Equations with Application to Rigid Body Satellites
TLDR
A new computational method of solving HJB equations that enjoys the advantage of perfect parallelism on a sparse grid is developed, which is applied to the optimal attitude control of a satellite system using momentum wheels.
Polynomial Approximation of High-Dimensional Hamilton-Jacobi-Bellman Equations and Applications to Feedback Control of Semilinear Parabolic PDEs
A procedure for the numerical approximation of high-dimensional Hamilton--Jacobi--Bellman (HJB) equations associated to optimal feedback control problems for semilinear parabolic equations is presented.
Using Neural Networks for Fast Reachable Set Computations
TLDR
This work proposes an algorithm that leverages a neural network to approximate the minimum time-to-reach function to synthesize controls and shows that this neural network generates near optimal controls which are guaranteed to successfully drive the system to a target state.