Chance-Constrained Trajectory Optimization for Safe Exploration and Learning of Nonlinear Systems

Yashwanth Kumar Nakka, Anqi Liu, Guanya Shi, Anima Anandkumar, Yisong Yue, Soon-Jo Chung
Learning-based control algorithms require data collection with abundant supervision for training. Safe exploration algorithms ensure the safety of this data collection process even when only partial knowledge is available. We present a new approach for optimal motion planning with safe exploration that integrates chance-constrained stochastic optimal control with dynamics learning and feedback control. We derive an iterative convex optimization algorithm that solves an Information…
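A chance constraint of the kind used in such formulations, Pr(aᵀx ≤ b) ≥ 1 − ε, admits an exact deterministic tightening when the state is Gaussian. The sketch below illustrates this standard back-off computation; the function name and numbers are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.stats import norm

def chance_margin(a, Sigma, eps):
    """Back-off margin m such that, for x ~ N(mu, Sigma), enforcing
    a @ mu <= b - m guarantees Pr(a @ x <= b) >= 1 - eps.
    Standard Gaussian tightening of a linear chance constraint."""
    std = np.sqrt(a @ Sigma @ a)       # std. dev. of the scalar a @ x
    return norm.ppf(1.0 - eps) * std

# Example: 2-D position, isotropic covariance, 5% violation risk.
a = np.array([1.0, 0.0])
Sigma = 0.01 * np.eye(2)
m = chance_margin(a, Sigma, 0.05)      # ~0.1645: shift the boundary inward
```

The tightened constraint a @ mu <= b - m is convex in the mean, which is what makes the iterative convex formulation above tractable.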


Active Model Learning using Informative Trajectories for Improved Closed-Loop Control on Real Robots

An optimization problem formulation to find an informative trajectory that allows for efficient data collection and model learning is introduced and it is shown that the model learned from informative trajectories generalizes better than the one learned from non-informative trajectories, achieving better tracking performance on different tasks.

Control Barriers in Bayesian Learning of System Dynamics

This paper uses a matrix variate Gaussian process (MVGP) regression approach with efficient covariance factorization to learn the drift and input gain terms of a nonlinear control-affine system and shows that a safe control policy can be synthesized for systems with arbitrary relative degree and probabilistic CLF-CBF constraints by solving a second order cone program.

Imitation Learning for Robust and Safe Real-time Motion Planning: A Contraction Theory Approach

Simulation results for perturbed nonlinear systems show that the LAG-ROS achieves higher control performance and task success rate with faster execution speed for real-time computation, when compared with existing real-time robust MPC and learning-based feedforward motion planners.

Closing the Closed-Loop Distribution Shift in Safe Imitation Learning

Constrained Mixing Iterative Learning is proposed, a novel on-policy robust imitation learning algorithm that integrates ideas from stochastic mixing iterative learning, constrained policy optimization, and nonlinear robust control that allows for control errors introduced by both the learning task of imitating an expert and by the distribution shift inherent to deviating from the original expert policy.

Limits of Probabilistic Safety Guarantees when Considering Human Uncertainty

This paper shows that current uncertainty models use inaccurate distributional assumptions to describe human behavior and/or require infeasible amounts of data to accurately learn confidence bounds for δ ≤ 10⁻⁸; the resulting unreliable confidence bounds can have dangerous implications if deployed on safety-critical systems.

Robust Controller Design for Stochastic Nonlinear Systems via Convex Optimization

This article presents ConVex optimization-based Stochastic steady-state Tracking Error Minimization (CV-STEM), a new state feedback control framework for a class of Itô stochastic nonlinear systems.

Certainty Equivalent Perception-Based Control

A uniform error bound on nonparametric kernel regression under a dynamically-achievable dense sampling scheme allows for a finite-time convergence rate on the sub-optimality of using the regressor in closed-loop for waypoint tracking.
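For context, the nonparametric kernel regressor analyzed there can be as simple as the Nadaraya-Watson form; a minimal sketch, where the bandwidth and kernel choice are illustrative assumptions rather than the paper's:

```python
import numpy as np

def kernel_regress(Xq, X, y, h=0.3):
    """Nadaraya-Watson regression with a Gaussian kernel: predict at
    query points Xq as a locally weighted average of observed targets y.
    Dense sampling of X is what enables uniform error bounds."""
    d2 = (Xq[:, None] - X[None, :]) ** 2   # pairwise squared distances
    w = np.exp(-0.5 * d2 / h ** 2)         # Gaussian weights
    return (w @ y) / w.sum(axis=1)

# Fit a linear map from dense 1-D samples; interior predictions are accurate.
X = np.linspace(0.0, 1.0, 101)
y = 2.0 * X
yq = kernel_regress(np.array([0.5]), X, y)  # ~1.0 by symmetry of the weights
```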

Deep model predictive control for a class of nonlinear systems

A control-affine nonlinear discrete-time system is considered with matched and bounded state-dependent uncertainties. Since the structure of the uncertainties is not known, a deep learning-based adaptive…

Distributionally Robust Learning for Unsupervised Domain Adaptation

A distributionally robust learning method for unsupervised domain adaptation (UDA) that scales to modern computer vision benchmarks, and it is demonstrated that DRST captures shape features more effectively, and reduces the extent of distributional shift during self-training.

Incremental nonlinear stability analysis of stochastic systems perturbed by Lévy noise

The main contributions are two theorems that show that trajectories of the stochastic system arising from distinct initial conditions and noise sample paths are able to exponentially converge to within a steady-state bounded error ball of each other in the mean.

Learning-Based Model Predictive Control for Safe Exploration

This paper presents a learning-based model predictive control scheme that can provide provable high-probability safety guarantees and exploits regularity assumptions on the dynamics in terms of a Gaussian process prior to construct provably accurate confidence intervals on predicted trajectories.
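The GP confidence intervals such schemes rely on can be sketched with scikit-learn; the kernel, noise level, scaling factor β, and toy data below are all illustrative assumptions:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy 1-D dynamics residual: observe f(x) = sin(x) at a few states.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(15, 1))
y = np.sin(X).ravel() + 0.01 * rng.standard_normal(15)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-4)
gp.fit(X, y)

# High-probability confidence tube on predictions: mu ± beta * sigma.
Xq = np.linspace(-3, 3, 50).reshape(-1, 1)
mu, sigma = gp.predict(Xq, return_std=True)
beta = 2.0                         # scaling for roughly 95% coverage (illustrative)
lower, upper = mu - beta * sigma, mu + beta * sigma
```

Propagating such tubes through the dynamics is what yields the provably accurate confidence intervals on predicted trajectories.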

Trajectory Optimization for Chance-Constrained Nonlinear Stochastic Systems

It is proved that in the unconstrained case, the optimal value of the DNOC converges asymptotically to that of the SNOC, and that any feasible solution of the constrained DNOC is a feasible solution of the chance-constrained SNOC, because the gPC approximation of the random variables converges to the true distribution.

Robust Regression for Safe Exploration in Control

A deep robust regression model is presented that is trained to directly predict the uncertainty bounds for safe exploration and can outperform the conventional Gaussian process (GP) based safe exploration in settings where it is difficult to specify a good GP prior.

Robust Constrained Learning-based NMPC enabling reliable mobile robot path tracking

The goal is to use learning to generate low-uncertainty, non-parametric models in situ that provide safe, conservative control during initial trials, when model uncertainty is high, and converge to high-performance, optimal control during later trials, when model uncertainty is reduced with experience.

Iterative Risk Allocation: A new approach to robust Model Predictive Control with a joint chance constraint

  • M. Ono, B. Williams
  • Computer Science
    2008 47th IEEE Conference on Decision and Control
  • 2008
A novel two-stage optimization method for robust model predictive control with Gaussian disturbance and state estimation error is proposed, which yields much smaller suboptimality than the ellipsoidal relaxation method while achieving a substantial speedup compared to particle control.
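The risk-reallocation step at the heart of such two-stage methods can be sketched as follows; the shrink factor α and the uniform redistribution over active constraints are illustrative simplifications of the published algorithm:

```python
import numpy as np

def reallocate_risk(delta, active, alpha=0.7):
    """One risk-reallocation iteration: shrink the risk assigned to
    inactive (slack) constraints by a factor alpha and hand the freed-up
    risk uniformly to the active constraints. Total risk is preserved,
    so the joint chance-constraint budget is never exceeded."""
    delta = delta.copy()
    inactive = ~active
    surplus = (1.0 - alpha) * delta[inactive].sum()
    delta[inactive] *= alpha
    if active.any():
        delta[active] += surplus / active.sum()
    return delta

# Joint budget 0.1 split over 4 constraints; only the first is active.
d = reallocate_risk(np.full(4, 0.025), np.array([True, False, False, False]))
# d.sum() stays 0.1; the active constraint's share grows to 0.0475
```

Alternating this reallocation with re-solving the tightened deterministic problem is what reduces the conservatism of a fixed uniform risk split.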

Chance-Constrained Optimal Path Planning With Obstacles

A chance-constrained approach that plans the future probabilistic distribution of the vehicle state so that the probability of failure is below a specified threshold, and introduces a customized solution method that returns almost-optimal solutions along with a hard bound on the level of suboptimality.

Safe Exploration of State and Action Spaces in Reinforcement Learning

The PI-SRL algorithm is introduced, which safely improves suboptimal albeit robust behaviors for continuous state and action control tasks and which efficiently learns from the experience gained from the environment.

Model Predictive Control of Swarms of Spacecraft Using Sequential Convex Programming

Multiple time steps, time-varying collision constraints, and communication requirements are developed to guarantee stability, feasibility, and robustness of the model predictive control-sequential convex programming algorithm.

Stochastic model predictive control with active uncertainty learning: A survey on dual control

Learning-Based Nonlinear Model Predictive Control to Improve Vision-Based Mobile Robot Path Tracking

This paper presents a Learning-based Nonlinear Model Predictive Control (LB-NMPC) algorithm to achieve high-performance path tracking in challenging off-road terrain through learning. The LB-NMPC…