Corpus ID: 234742466

Probabilistic robust linear quadratic regulators with Gaussian processes

Alexander von Rohr, Matthias Neumann-Brosig, Sebastian Trimpe
Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design. While learning-based control has the potential to yield superior performance in demanding applications, robustness to uncertainty remains an important challenge. Since Bayesian methods quantify uncertainty of the learning results, it is natural to incorporate these uncertainties into a robust design. In contrast to most state-of-the-art… 
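The abstract describes propagating model uncertainty from a Bayesian learner into a robust LQR design. A minimal scenario-based sketch of that idea, assuming a Gaussian posterior over the entries of a nominal linear model (the toy dynamics, noise scale, and min-max selection rule are illustrative assumptions, not the paper's actual algorithm):

```python
import numpy as np
from scipy.linalg import solve_discrete_are, solve_discrete_lyapunov

rng = np.random.default_rng(0)

# Nominal double-integrator-like model; the Gaussian perturbations below
# stand in for posterior uncertainty from a learned model (illustrative).
A_mean = np.array([[1.0, 0.1], [0.0, 1.0]])
B_mean = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.array([[0.1]])

def lqr_gain(A, B):
    """Infinite-horizon discrete-time LQR gain for (A, B, Q, R)."""
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

def closed_loop_cost(A, B, K):
    """Trace of the closed-loop cost matrix; infinite if A - B K is unstable."""
    Acl = A - B @ K
    if np.max(np.abs(np.linalg.eigvals(Acl))) >= 1.0:
        return np.inf
    P = solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)
    return np.trace(P)

# Draw model scenarios from the (assumed Gaussian) posterior.
scenarios = [(A_mean + 0.02 * rng.standard_normal((2, 2)),
              B_mean + 0.02 * rng.standard_normal((2, 1)))
             for _ in range(20)]

# One candidate LQR gain per sampled model.
candidates = [lqr_gain(A, B) for A, B in scenarios]

# Pick the candidate with the best worst-case cost across all scenarios.
K_robust = min(candidates,
               key=lambda K: max(closed_loop_cost(A, B, K) for A, B in scenarios))
worst = max(closed_loop_cost(A, B, K_robust) for A, B in scenarios)
```

A finite `worst` certifies that the chosen gain stabilizes every sampled model, which is the scenario-based analogue of the probabilistic robustness guarantee discussed in the abstract.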


Improving the Performance of Robust Control through Event-Triggered Learning
This work proposes an event-triggered learning algorithm that decides when to learn in the face of uncertainty in the LQR problem with rare or slow changes, and designs a statistical test for uncertain systems based on the moment-generating function of the LQR cost.
A Review of Safe Reinforcement Learning: Methods, Theory and Applications
This paper reviews the progress of safe RL from the perspectives of methods, theory, and applications, and discusses the problems, coined as "2H3W", that are crucial for deploying safe RL in real-world applications.
Safe Learning in Robotics: From Learning-Based Control to Safe Reinforcement Learning
This article provides a concise but holistic review of the recent advances made in using machine learning to achieve safe decision-making under uncertainties, with a focus on unifying the language and frameworks used in control theory and reinforcement learning research.
Learning-enhanced robust controller synthesis with rigorous statistical and control-theoretic guarantees
This work presents a general framework for learning-enhanced robust control that allows for systematic integration of prior engineering knowledge, is fully compatible with modern robust control and still comes with rigorous and practically meaningful guarantees.


On the Sample Complexity of the Linear Quadratic Regulator
This paper proposes a multi-stage procedure that estimates a model from a few experimental trials, estimates the error in that model with respect to the truth, and then designs a controller using both the model and uncertainty estimate, and provides end-to-end bounds on the relative error in control cost.
Safe and robust learning control with Gaussian processes
This work considers a stabilization task, linearizes the nonlinear GP-based model around a desired operating point, and solves a convex optimization problem to obtain a linear robust controller that provides robust stability and performance guarantees during learning.
Scenario-based Optimal Control for Gaussian Process State Space Models
This paper introduces how scenarios are sampled from a Gaussian process and utilizes them in a differential dynamic programming approach to solve an optimal control problem and derives probabilistic performance guarantees using results from robust convex optimization.
Certainty Equivalence is Efficient for Linear Quadratic Control
To the best of the authors' knowledge, this result is the first sub-optimality guarantee in the partially observed Linear Quadratic Gaussian (LQG) setting and improves upon recent work by Dean et al. (2017), who present an algorithm achieving a sub-optimality gap linear in the parameter error.
Cautious Model Predictive Control Using Gaussian Process Regression
This work describes a principled way of formulating the chance-constrained MPC problem that takes into account the residual uncertainties provided by the GP model to enable cautious control, and presents a model predictive control approach that integrates a nominal system with an additive nonlinear part of the dynamics modeled as a GP.
Numerical Quadrature for Probabilistic Policy Search
Numerical quadrature is proposed to overcome the drawback that multi-step-ahead predictions typically become intractable for larger planning horizons and can only be poorly approximated, yielding significantly more accurate multi-step-ahead predictions.
Learning convex bounds for linear quadratic control policy synthesis
This paper presents a method to optimize the expected value of the reward over the posterior distribution of the unknown system parameters, given data, and enjoys reliable local convergence and robust stability guarantees.
Learning and Control Using Gaussian Processes
This paper develops methods for the optimal experiment design (OED) of functional tests to learn models of a physical system, subject to stringent operational constraints and limited availability of the system, and proposes an online method for continuously improving the GP model in closed-loop with a real-time controller.
Feedback linearization using Gaussian processes
Gaussian processes, a Bayesian nonparametric approach, are utilized to learn a model for feedback linearization, and the resulting system is shown to be globally uniformly ultimately bounded.