• Corpus ID: 209444422

On Simulation and Trajectory Prediction with Gaussian Process Dynamics

Lukas Hewing, Elena Arcari, Lukas P. Fröhlich, Melanie Nicole Zeilinger
Established techniques for simulation and prediction with Gaussian process (GP) dynamics often implicitly make use of an independence assumption on successive function evaluations of the dynamics model. This can result in significant error and underestimation of the prediction uncertainty, potentially leading to failures in safety-critical applications. This paper discusses methods that explicitly take the correlation of successive function evaluations into account. We first describe two… 
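The distinction the abstract draws can be made concrete with a toy one-dimensional example. The sketch below (kernel choice, hyperparameters, and function names are all illustrative, not from the paper) contrasts a naive rollout that draws each successive function evaluation independently from the GP prior with a consistent rollout that conditions every draw on the evaluations already sampled, so the whole trajectory follows a single function draw:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative squared-exponential kernel (hyperparameters invented).
def k(a, b, ell=1.0, sf2=1.0):
    a, b = np.atleast_1d(a), np.atleast_1d(b)
    return sf2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

def rollout_independent(x0, steps):
    # Naive scheme: each f(x_t) is drawn from the prior marginal,
    # ignoring correlation with earlier function evaluations.
    x, traj = x0, [x0]
    for _ in range(steps):
        x = rng.normal(0.0, np.sqrt(k(x, x)[0, 0]))
        traj.append(x)
    return traj

def rollout_correlated(x0, steps):
    # Consistent scheme: condition every new draw on the evaluations
    # sampled so far, so the rollout follows one function draw.
    X, F, x, traj = [], [], x0, [x0]
    for _ in range(steps):
        if X:
            Kxx = k(np.array(X), np.array(X)) + 1e-9 * np.eye(len(X))
            kx = k(x, np.array(X))
            mu = (kx @ np.linalg.solve(Kxx, np.array(F)))[0]
            var = k(x, x)[0, 0] - (kx @ np.linalg.solve(Kxx, kx.T))[0, 0]
        else:
            mu, var = 0.0, k(x, x)[0, 0]
        f = rng.normal(mu, np.sqrt(max(var, 0.0)))
        X.append(x)
        F.append(f)
        x = f
        traj.append(f)
    return traj
```

In the consistent scheme, revisiting a previously visited state reproduces (approximately) the same function value; this is exactly the correlation that the independent scheme discards.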
Learning Accurate Long-term Dynamics for Model-based Reinforcement Learning
A new parametrization for supervised learning on state-action data is proposed to stably predict at longer horizons: a trajectory-based model that takes an initial state, a future time index, and control parameters as inputs, and predicts the state at that future time.
Gaussian Process for Trajectories
This chapter describes Gaussian processes as an interpolation technique for geospatial trajectories: measurements of a trajectory are modeled as coming from a multidimensional Gaussian, and the model produces a Gaussian distribution as a prediction for each timestamp.
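As a rough illustration of this idea (the data, kernel, and hyperparameters below are invented for the example), GP interpolation of a 1-D trajectory conditions on measurements at irregular timestamps and returns a Gaussian (mean and variance) at any query time:

```python
import numpy as np

# Illustrative squared-exponential kernel with unit prior variance.
def rbf(t1, t2, ell=2.0):
    return np.exp(-0.5 * (t1[:, None] - t2[None, :]) ** 2 / ell**2)

# Invented noisy 1-D position measurements at irregular timestamps.
t_obs = np.array([0.0, 1.0, 2.5, 4.0])
y_obs = np.array([0.0, 0.8, 1.9, 3.1])

def predict(t_query, noise=1e-4):
    """Return GP posterior mean and variance at the query timestamps."""
    K = rbf(t_obs, t_obs) + noise * np.eye(len(t_obs))
    ks = rbf(np.atleast_1d(np.asarray(t_query, dtype=float)), t_obs)
    mean = ks @ np.linalg.solve(K, y_obs)
    var = 1.0 - np.einsum('ij,ji->i', ks, np.linalg.solve(K, ks.T))
    return mean, var
```

At an observed timestamp the posterior mean passes close to the measurement and the variance is near zero; between observations the variance grows, which is the per-timestamp Gaussian prediction the chapter describes.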
The Impact of Data on the Stability of Learning-Based Control
This paper proposes a Lyapunov-based measure for quantifying the impact of data on the certifiable control performance by modeling unknown system dynamics through Gaussian processes, and determines the interrelation between model uncertainty and satisfaction of stability conditions.
Structured learning of safety guarantees for the control of uncertain dynamical systems
This work proposes the safe uncertainty learning principle, argues that learning must be properly structured to preserve safety guarantees, and offers a way to evaluate whether machine learning preserves those guarantees during the control of uncertain dynamical systems.
The Value of Data in Learning-Based Control for Training Subset Selection
This paper presents a measure to quantify the value of data within the context of a predefined control task, applicable to a wide variety of unknown nonlinear systems that are to be controlled by a generic learning-based control law.
Shallow Representation is Deep: Learning Uncertainty-aware and Worst-case Random Feature Dynamics
The whole dynamical system is viewed as a multi-layer neural network, and it is shown that finding worst-case dynamics realizations using Pontryagin’s minimum principle is equivalent to performing the Frank-Wolfe algorithm on that network.
Chance Constrained Policy Optimization for Process Control and Optimization
A chance constrained policy optimization (CCPO) algorithm is proposed that guarantees the satisfaction of joint chance constraints with high probability, which is crucial for safety-critical tasks.
Efficient Model-Based Reinforcement Learning through Optimistic Policy Search and Planning
This paper proposes a practical optimistic-exploration algorithm, which enlarges the input space with hallucinated inputs that can exert as much control as the epistemic uncertainty in the model affords, and shows how optimistic exploration can be easily combined with state-of-the-art reinforcement learning algorithms and different probabilistic models.
Sampling-based Reachability Analysis: A Random Set Theory Approach with Adversarial Sampling
A simple yet effective sampling-based approach is presented for performing reachability analysis on arbitrary dynamical systems; random set theory gives a rigorous interpretation of the method, and the returned sets are proven to converge to the convex hull of the true reachable sets.
Cautious Model Predictive Control Using Gaussian Process Regression
This work describes a principled formulation of the chance-constrained MPC problem that takes into account residual uncertainties provided by the GP model to enable cautious control, and presents a model predictive control approach that integrates a nominal system with an additive nonlinear part of the dynamics modeled as a GP.
Overcoming Mean-Field Approximations in Recurrent Gaussian Process Models
A new variational inference scheme for dynamical systems whose transition function is modelled by a Gaussian process, which gives better predictive performance and more calibrated estimates of the transition function, yet maintains the same time and space complexities as mean-field methods.
Stability of Controllers for Gaussian Process Forward Models
This work provides a stability analysis tool for controllers acting on dynamics represented by Gaussian processes, and considers arbitrary Markovian control policies and system dynamics given as (i) the mean of a GP, and (ii) the full GP distribution.
Equilibrium distributions and stability analysis of Gaussian Process State Space Models
The computation of equilibrium distributions is based on the numerical solution of a Fredholm integral equation of the second kind and is suitable for any covariance function; it is shown that the GP-SSM with squared exponential covariance function is always mean square bounded and that there exists a positive recurrent set.
Scenario-based Optimal Control for Gaussian Process State Space Models
This paper shows how scenarios are sampled from a Gaussian process and utilized in a differential dynamic programming approach to solve an optimal control problem, and derives probabilistic performance guarantees using results from robust convex optimization.
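Sampling scenarios from a GP amounts to drawing jointly from the multivariate Gaussian that the process induces over a set of inputs, so values within one scenario stay correlated. A minimal sketch (grid, kernel, and scenario count are illustrative; the paper's dynamics setting is more involved):

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative squared-exponential kernel (hyperparameters invented).
def rbf(a, b, ell=0.5):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

def sample_scenarios(x_grid, n_scenarios=5):
    """Draw joint GP-prior samples over x_grid; each column is one scenario."""
    K = rbf(x_grid, x_grid) + 1e-8 * np.eye(len(x_grid))
    L = np.linalg.cholesky(K)  # small jitter keeps the factorization stable
    return L @ rng.standard_normal((len(x_grid), n_scenarios))
```

Each column can then stand in for one realization of the unknown dynamics when evaluating or optimizing a controller.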
Gaussian Process Priors with Uncertain Inputs - Application to Multiple-Step Ahead Time Series Forecasting
This paper shows how an analytical Gaussian approximation can formally incorporate the uncertainty about intermediate regressor values, thus propagating uncertainty through each step of multi-step-ahead time series prediction.
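The paper derives closed-form moment expressions for specific kernels; as a rough numerical counterpart (with an invented stand-in `gp_predict` posterior), the same idea of moment-matching the output under an uncertain Gaussian input can be sketched by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented stand-in for a trained GP posterior: mean sin(x) and a small
# input-dependent variance; only the propagation logic matters here.
def gp_predict(x):
    return np.sin(x), 0.01 + 0.01 * x**2

def propagate_mc(mu_in, var_in, n=20000):
    # Monte-Carlo analogue of the analytic Gaussian approximation:
    # sample the uncertain input, push it through the posterior, and
    # moment-match the output to a Gaussian (law of total variance).
    xs = rng.normal(mu_in, np.sqrt(var_in), size=n)
    means, variances = gp_predict(xs)
    return means.mean(), variances.mean() + means.var()
```

With a near-certain input the output variance reduces to the GP's own predictive variance; as the input variance grows, the extra term from the variation of the mean inflates the output uncertainty, which is what gets fed forward at each step of a multi-step prediction.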
Data-Efficient Reinforcement Learning with Probabilistic Model Predictive Control
This work proposes a model-based RL framework based on probabilistic Model Predictive Control, using Gaussian processes to incorporate model uncertainty into long-term predictions and thereby reduce the impact of model errors, and provides theoretical guarantees for first-order optimality in GP-based transition models with deterministic approximate inference for long-term planning.
Prediction under Uncertainty in Sparse Spectrum Gaussian Processes with Applications to Filtering and Control
This work proposes two analytic moment-based approaches with closed-form expressions for SSGP regression with uncertain inputs that are more general and scalable than their standard GP counterparts, and are naturally applicable to multi-step prediction or uncertainty propagation.
Gaussian process dynamic programming
This article introduces Gaussian process dynamic programming (GPDP), an approximate value function-based RL algorithm, and proposes to learn probabilistic models of the a priori unknown transition dynamics and the value functions on the fly.
Nonlinear model predictive control with explicit back-offs for Gaussian process state space models
This paper proposes to sample possible plant models according to the GP and calculate explicit back-offs for constraint tightening using closed-loop simulations offline, and shows how the method can account for updating the GP plant model using available online measurements.