Safe and Near-Optimal Policy Learning for Model Predictive Control using Primal-Dual Neural Networks

@article{Zhang2019SafeAN,
  title={Safe and Near-Optimal Policy Learning for Model Predictive Control using Primal-Dual Neural Networks},
  author={Xiaojing Zhang and Monimoy Bujarbaruah and Francesco Borrelli},
  journal={2019 American Control Conference (ACC)},
  year={2019},
  pages={354-359}
}
In this paper, we propose a novel framework for approximating the explicit MPC law for linear parameter-varying systems using supervised learning. In contrast to most existing approaches, we not only learn the control policy, but also a “certificate policy” that allows us to estimate the sub-optimality of the learned control policy online, at execution time. We learn both these policies from data using supervised learning techniques, and also provide a randomized method that allows us to… 
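The certification idea in the abstract can be sketched numerically. The following is a minimal illustration, not the paper's implementation: for a parametric QP of the (assumed) form min_u 0.5 u'Hu subject to Gu ≤ w + Ex, any dual-feasible multiplier gives a lower bound on the optimal cost by weak duality, so the duality gap between a learned primal input and a learned dual certificate upper-bounds the policy's sub-optimality online. All function and variable names here are illustrative.

```python
import numpy as np

def primal_cost(H, u):
    """Cost of a candidate input sequence u for min 0.5 u'Hu."""
    return 0.5 * u @ H @ u

def dual_bound(H, G, w, E, x, lam):
    """Lagrange dual value at a dual point lam >= 0 (a lower bound on J*(x)).

    Inner minimisation of the Lagrangian over u gives u*(lam) = -H^{-1} G' lam.
    """
    u_star = -np.linalg.solve(H, G.T @ lam)
    return 0.5 * u_star @ H @ u_star + lam @ (G @ u_star - w - E @ x)

def suboptimality_certificate(H, G, w, E, x, u_hat, lam_hat):
    """Duality gap: an online upper bound on primal_cost(u_hat) - J*(x).

    u_hat would come from the learned control policy, lam_hat from the
    learned certificate policy; the gap is nonnegative by weak duality.
    """
    return primal_cost(H, u_hat) - dual_bound(H, G, w, E, x, lam_hat)
```

For a scalar example with H = 2, constraint u ≤ 1, and a (deliberately) suboptimal u_hat = 0.5 with lam_hat = 0, the certificate returns exactly the true sub-optimality 0.25, since the optimum is u* = 0.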


Near-Optimal Rapid MPC Using Neural Networks: A Primal-Dual Policy Learning Framework

A novel framework for approximating the MPC policy for linear parameter-varying systems using supervised learning that enables the deployment of MPC on resource-constrained systems and demonstrates a speedup of up to 62× on a desktop computer and on an automotive-grade electronic control unit, while maintaining high control performance.

Learning Stable Adaptive Explicit Differentiable Predictive Control for Unknown Linear Systems

DPC can learn to stabilize constrained neural control policies for systems with unstable dynamics, and it is demonstrated that DPC scales linearly with problem size, compared to the exponential scaling of classical explicit MPC based on multiparametric programming.

Learning Constrained Adaptive Differentiable Predictive Control Policies With Guarantees

Differentiable predictive control is presented, a method for learning constrained neural control policies for linear systems with probabilistic performance guarantees that is scalable and computationally more efficient than implicit, explicit, and approximate MPC.

A Sensitivity-Based Data Augmentation Framework for Model Predictive Control Policy Approximation

This technical article aims to address the challenge of generating large training samples to learn the MPC policy by exploiting the parametric sensitivities to cheaply generate additional training samples in the neighborhood of the existing samples.
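The augmentation idea above can be illustrated on a case where the parametric sensitivity is available in closed form. This is a hypothetical sketch, not the article's method: for an unconstrained parametric QP min_u 0.5 u'Hu + x'Fu, the solution is u*(x) = -H^{-1}F'x, so one expensive solve at x0 plus the sensitivity S = du*/dx yields cheap additional training samples in the neighborhood of x0. All names are illustrative.

```python
import numpy as np

def solve_mpc(H, F, x):
    """'Expensive' exact solve (stand-in for a full MPC solver)."""
    return -np.linalg.solve(H, F.T @ x)

def augment(H, F, x0, deltas):
    """One exact solve at x0, then first-order extrapolation to neighbors.

    Returns (state, input) pairs usable as extra supervised-learning samples.
    """
    u0 = solve_mpc(H, F, x0)
    S = -np.linalg.solve(H, F.T)  # parametric sensitivity du*/dx
    return [(x0 + d, u0 + S @ d) for d in deltas]
```

For this unconstrained quadratic case the extrapolation is exact; in the constrained setting the article addresses, the sensitivity instead comes from differentiating the KKT conditions and is only locally valid within an active set.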

Learning-based Approximate Model Predictive Control with Guarantees: Joining Neural Networks with Recent Robust MPC

This work builds on a novel robust model predictive control scheme guaranteeing constraint satisfaction and recursive feasibility under disturbances, and extends it with practically useful additions, such as robust dynamic set-point tracking and the handling of nonlinear constraints in the output function.

Primal-Dual Estimator Learning Method with Feasibility and Near-Optimality Guarantees

This paper proposes a primal-dual framework to learn a stable estimator for linear constrained estimation problems leveraging the moving horizon approach, and is the first learning-based state estimator with feasibility and near-optimality guarantees for linear constrained systems.

Minimum time learning model predictive control

This work builds on existing LMPC methodologies, guarantees finite-time convergence properties for the closed-loop system, and demonstrates that, for a class of nonlinear systems and convex constraints, the convex LMPC formulation guarantees recursive constraint satisfaction and non-decreasing performance.

Primal-dual Estimator Learning: an Offline Constrained Moving Horizon Estimation Method with Feasibility and Near-optimality Guarantees

This paper proposes a primal-dual framework to learn a stable estimator for linear constrained estimation problems leveraging the moving horizon approach, and is likely the first learning-based state estimator with feasibility and near-optimality guarantees for linear constrained systems.

Deep Learning Explicit Differentiable Predictive Control Laws for Buildings

References


Learning deep control policies for autonomous aerial vehicles with MPC-guided policy search

This work proposes to combine MPC with reinforcement learning in the framework of guided policy search, where MPC is used to generate data at training time, under full state observations provided by an instrumented training environment, and a deep neural network policy is trained, which can successfully control the robot without knowledge of the full state.

Learning a feasible and stabilizing explicit model predictive control law by robust optimization

A new synthesis method for low-complexity suboptimal MPC controllers based on function approximation from randomly chosen point-wise sample values, which renders the approach particularly suitable for highly concurrent embedded platforms such as FPGAs.

Learning an Approximate Model Predictive Controller With Guarantees

A robust MPC design is combined with statistical learning bounds, and Hoeffding's Inequality is used to validate that the learned MPC satisfies these bounds with high confidence, and the result is a closed-loop statistical guarantee on stability and constraint satisfaction for the learned MPC.

Polytopic Approximation of Explicit Model Predictive Controllers

C. Jones and M. Morari, IEEE Transactions on Automatic Control, 2010
This paper computes approximate explicit control laws that trade off complexity against approximation error for MPC controllers that give rise to convex parametric optimization problems.

Approximating Explicit Model Predictive Control Using Constrained Neural Networks

A modified reinforcement learning policy gradient algorithm is introduced that utilizes knowledge of the system model to efficiently train the neural network and guarantees that the network generates feasible control inputs by projecting onto polytope regions derived from the maximal control invariant set of the system.
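The projection step described above can be sketched generically. This is an illustrative implementation, not the paper's: a raw network output is projected onto a safe input polytope {u : Au ≤ b} before being applied, here via Dykstra's alternating projections over the halfspaces. The paper projects onto polytopes derived from the maximal control invariant set; the matrices below are placeholders.

```python
import numpy as np

def project_halfspace(u, a, b):
    """Euclidean projection of u onto the halfspace {v : a.v <= b}."""
    viol = a @ u - b
    return u - (viol / (a @ a)) * a if viol > 0 else u

def project_polytope(u, A, b, iters=200):
    """Dykstra's algorithm: projection onto the intersection of halfspaces.

    Unlike plain cyclic projection, Dykstra's correction terms p[i] make the
    iterates converge to the true Euclidean projection onto {v : Av <= b}.
    """
    u = u.astype(float).copy()
    p = np.zeros((len(A), len(u)))  # one correction term per halfspace
    for _ in range(iters):
        for i, (a, bi) in enumerate(zip(A, b)):
            y = u + p[i]
            u_new = project_halfspace(y, a, bi)
            p[i] = y - u_new
            u = u_new
    return u
```

For a box constraint the result coincides with simple clipping; the value of the general routine is that it handles arbitrary halfspace descriptions such as those of an invariant set.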

Predictive Control for Linear and Hybrid Systems

Predictive Control for Linear and Hybrid Systems is an ideal reference for graduate, postgraduate and advanced control practitioners interested in theory and/or implementation aspects of predictive control.

Efficient Representation and Approximation of Model Predictive Control Laws via Deep Learning

We show that artificial neural networks with rectifier units as activation functions can exactly represent the piecewise affine function that results from the formulation of model predictive control.
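A toy instance of this exact-representation property (not taken from the paper): the saturated linear law u(x) = clip(-2x, -1, 1), a piecewise affine function typical of explicit MPC, is reproduced exactly by a two-unit ReLU expression.

```python
import numpy as np

def relu(z):
    """Rectifier activation, applied elementwise."""
    return np.maximum(z, 0.0)

def u_relu(x):
    """Exact ReLU representation of clip(-2x, -1, 1).

    Uses the identity clip(z, -1, 1) = z - relu(z - 1) + relu(-z - 1):
    the first ReLU cuts off the excess above 1, the second restores
    the floor at -1.
    """
    z = -2.0 * x
    return z - relu(z - 1.0) + relu(-z - 1.0)
```

The identity holds for every input, not just approximately, which is the sense in which ReLU networks can represent explicit MPC laws exactly rather than merely approximate them.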

Adaptive MPC for Autonomous Lane Keeping

This paper proposes an Adaptive Robust Model Predictive Control strategy for lateral control in lane keeping problems, where we continuously learn an unknown, but constant, steering angle offset.