Improving the Performance of Robust Control through Event-Triggered Learning

@article{Rohr2022ImprovingTP,
  title={Improving the Performance of Robust Control through Event-Triggered Learning},
  author={Alexander von Rohr and Friedrich Solowjow and Sebastian Trimpe},
  journal={ArXiv},
  year={2022},
  volume={abs/2207.14252}
}
Robust controllers ensure stability in feedback loops designed under uncertainty, but at the cost of performance. Recently proposed learning-based methods can reduce model uncertainty in time-invariant systems and thus use data to improve the performance of robust controllers. However, in practice, many systems also exhibit uncertainty in the form of changes over time, e.g., due to weight shifts or wear and tear, leading to decreased performance or instability of the learning-based…
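
The abstract's core loop can be sketched in a few lines: run a fixed robust controller, monitor the realized quadratic cost, and trigger model learning only when the cost statistics deviate from what the current model predicts. The sketch below is a minimal illustration of that idea, not the paper's algorithm; the scalar system, gain, window size, and 1.5x threshold are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Scalar plant x+ = a*x + b*u + w; `a` drifts mid-run (e.g., wear and tear).
a_true, b = 0.9, 1.0
k = 0.5                                  # fixed robust feedback gain u = -k*x
q, r, sigma_w = 1.0, 0.1, 0.1            # cost weights and noise level

x, window, costs = 0.0, 200, []
expected_cost = None                     # per-step cost level under the nominal model

for t in range(5000):
    if t == 2500:
        a_true = 1.3                     # the system changes over time
    u = -k * x
    costs.append(q * x**2 + r * u**2)
    x = a_true * x + b * u + sigma_w * rng.standard_normal()

    if len(costs) == window:             # evaluate the trigger once per window
        avg = float(np.mean(costs))
        if expected_cost is None:
            expected_cost = avg          # calibrate on the first window
        elif avg > 1.5 * expected_cost:  # crude trigger; the paper uses principled bounds
            print(f"t={t}: average cost jumped to {avg:.4f}, trigger model learning")
            expected_cost = None         # here: re-identify, re-synthesize, recalibrate
        costs.clear()
```

A principled trigger, e.g. the Chernoff-bound construction from the event-triggered learning reference below, would replace the crude fixed threshold.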


References

Showing 1-10 of 24 references

Safe and robust learning control with Gaussian processes

This work considers a stabilization task, linearizes the nonlinear GP-based model around a desired operating point, and solves a convex optimization problem to obtain a linear robust controller that provides robust stability and performance guarantees during learning.
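
A minimal sketch of the pipeline this summary describes, with stand-ins: a known nonlinear function plays the role of the GP posterior mean, linearization is by finite differences, and plain certainty-equivalent LQR replaces the paper's robust convex synthesis. All numbers are illustrative.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def mean_dynamics(x, u):
    # Stand-in for a learned GP posterior mean of x+ = f(x, u) (pendulum-like).
    return np.array([x[0] + 0.1 * x[1],
                     x[1] + 0.1 * np.sin(x[0]) + 0.1 * u[0]])

def linearize(f, x0, u0, eps=1e-5):
    # Finite-difference Jacobians A = df/dx, B = df/du at the operating point.
    n, m = len(x0), len(u0)
    A = np.column_stack([(f(x0 + eps * np.eye(n)[:, i], u0) - f(x0, u0)) / eps
                         for i in range(n)])
    B = np.column_stack([(f(x0, u0 + eps * np.eye(m)[:, j]) - f(x0, u0)) / eps
                         for j in range(m)])
    return A, B

x0, u0 = np.zeros(2), np.zeros(1)        # desired operating point
A, B = linearize(mean_dynamics, x0, u0)
Q, R = np.eye(2), 0.1 * np.eye(1)
P = solve_discrete_are(A, B, Q, R)       # certainty-equivalent LQR, not robust synthesis
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
print("closed-loop spectral radius:", max(abs(np.linalg.eigvals(A - B @ K))))
```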

Feedback Linearization Based on Gaussian Processes With Event-Triggered Online Learning

A learning feedback-linearizing control law based on online closed-loop identification ensures high data efficiency and thereby reduces computational complexity, a major barrier to using Gaussian processes under real-time constraints.
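
The control law itself is simple to state: cancel the estimated dynamics so the closed loop behaves linearly, and update the model only when the prediction error is large. Below is a minimal scalar sketch with assumed dynamics and thresholds, not the paper's GP implementation.

```python
import numpy as np

def f(x): return -np.sin(x)              # true drift, unknown to the controller
def g(x): return 1.0                     # true input gain

f_hat = lambda x: 0.0                    # crude initial drift model
g_hat = lambda x: 1.0
dt, x, k, thresh = 0.01, 1.0, 2.0, 0.05

for t in range(1000):
    v = -k * x                           # desired linear closed-loop dynamics
    u = (v - f_hat(x)) / g_hat(x)        # feedback-linearizing control law
    x_pred = x + dt * (f_hat(x) + g_hat(x) * u)   # model's one-step prediction
    x = x + dt * (f(x) + g(x) * u)       # true plant step (Euler integration)
    if abs(x - x_pred) > thresh * dt:    # event trigger on prediction error
        f_hat = f                        # placeholder for a GP posterior update

print("final state:", x)                 # ~0 once the model is corrected
```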

On Controller Tuning with Time-Varying Bayesian Optimization

A novel forgetting strategy for time-varying Bayesian optimization (TVBO) based on Uncertainty Injection (UI), which incorporates the assumption of incremental and lasting changes in the objective due to changes in the system dynamics and outperforms the state-of-the-art TVBO method.
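
One way to read the uncertainty-injection idea, sketched below under assumptions: rather than deleting stale tuning data, inflate its noise variance with age, so the surrogate stays confident about recent observations and grows uncertain about old ones. The plain numpy GP, kernel, and inflation rate are illustrative, not the paper's implementation.

```python
import numpy as np

def rbf(a, b, ell=0.3):
    # Squared-exponential kernel on 1-D inputs.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

theta = np.array([0.1, 0.4, 0.7])        # previously tried controller parameters
y = np.sin(2 * np.pi * theta)            # observed closed-loop costs (illustrative)
age = np.array([10.0, 5.0, 0.0])         # time since each observation was collected

noise = 1e-3 + 0.02 * age                # uncertainty injection: variance grows with age
K = rbf(theta, theta) + np.diag(noise)
grid = np.linspace(0.0, 1.0, 101)
k_star = rbf(grid, theta)
mu = k_star @ np.linalg.solve(K, y)                    # posterior mean over the grid
var = 1.0 - np.einsum('ij,ji->i', k_star, np.linalg.solve(K, k_star.T))
lcb = mu - np.sqrt(np.maximum(var, 0.0))               # lower confidence bound
print("next parameter to try:", grid[np.argmin(lcb)])  # minimize the tuning cost
```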

Event-Triggered Learning for Linear Quadratic Control

A structured approach that decides when model learning is beneficial by analyzing the probability distribution of the linear quadratic cost and designing a learning trigger that leverages Chernoff bounds.
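
The trigger idea admits a compact sketch under a simplifying assumption: if the per-step cost is x_t^2 with x_t i.i.d. Gaussian under the nominal closed loop, the windowed average is a scaled chi-square variable with a closed-form Chernoff tail bound, and learning is triggered when the observed average would be improbable under the nominal model. The paper treats the full LQ cost; the constants below are illustrative.

```python
import numpy as np

def chernoff_tail(c, sigma2, n):
    # Chernoff bound on P(mean of n i.i.d. sigma2*chi2_1 samples >= c), for c > sigma2.
    return np.exp(n * (0.5 * np.log(c / sigma2) - (c - sigma2) / (2.0 * sigma2)))

def trigger(costs, sigma2_nominal, delta=1e-3):
    c = np.mean(costs)
    if c <= sigma2_nominal:
        return False                     # no evidence of degradation
    return chernoff_tail(c, sigma2_nominal, len(costs)) < delta

rng = np.random.default_rng(1)
nominal = rng.normal(0.0, 1.0, 200) ** 2    # costs consistent with the model
degraded = rng.normal(0.0, 1.4, 200) ** 2   # costs after the plant has drifted
print(trigger(nominal, 1.0), trigger(degraded, 1.0))   # False, True (w.h.p.)
```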

Probabilistic robust linear quadratic regulators with Gaussian processes

A novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin, based on a recently proposed algorithm for linear quadratic control synthesis and extended with probabilistic robustness guarantees in the form of credibility bounds on the system's stability.
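
The certification step can be sketched by Monte Carlo: sample model parameters from a posterior (a Gaussian stand-in for the GP here) and accept a gain if the sampled closed loops are stable with at least the requested credibility. The posterior moments, gain, and credibility level below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
mu = np.array([1.05, 1.00])              # posterior mean of (a, b)
cov = np.diag([0.10, 0.05])              # posterior covariance (GP stand-in)
k, credibility = 0.8, 0.95               # candidate gain and required credibility

samples = rng.multivariate_normal(mu, cov, size=10_000)
stable = np.abs(samples[:, 0] - samples[:, 1] * k) < 1.0   # |a - b*k| < 1
p = stable.mean()
print(f"P(stable) ~ {p:.3f}, certified: {p >= credibility}")
```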

On the Sample Complexity of the Linear Quadratic Regulator

This paper proposes a multi-stage procedure that estimates a model from a few experimental trials, estimates the error of that model with respect to the true system, and then designs a controller using both the model and the uncertainty estimate, providing end-to-end bounds on the relative error in control cost.
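
The first two stages have a compact sketch: identify (A, B) by least squares from a few random-input trials, then quantify the parameter error. The paper derives sharp non-asymptotic error bounds; the sketch below simply compares against a known ground truth, and the system and trial counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
A = np.array([[0.9, 0.2], [0.0, 0.8]])   # ground truth, used only for simulation
B = np.array([[0.0], [1.0]])

X, Z = [], []                            # regression targets and regressors
for _ in range(20):                      # a few short experimental trials
    x = np.zeros(2)
    for _ in range(10):
        u = rng.normal(size=1)           # random excitation input
        x_next = A @ x + B @ u + 0.01 * rng.normal(size=2)
        Z.append(np.concatenate([x, u]))
        X.append(x_next)
        x = x_next

Z, X = np.array(Z), np.array(X)
theta, *_ = np.linalg.lstsq(Z, X, rcond=None)   # theta.T = [A_hat  B_hat]
A_hat, B_hat = theta.T[:, :2], theta.T[:, 2:]
print("error in A:", np.linalg.norm(A_hat - A))
print("error in B:", np.linalg.norm(B_hat - B))
```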

Safe Learning in Robotics: From Learning-Based Control to Safe Reinforcement Learning

This article provides a concise but holistic review of the recent advances made in using machine learning to achieve safe decision-making under uncertainties, with a focus on unifying the language and frameworks used in control theory and reinforcement learning research.

Learning convex bounds for linear quadratic control policy synthesis

This paper presents a method to optimize the expected value of the reward over the posterior distribution of the unknown system parameters given data; the method enjoys reliable local convergence and robust stability guarantees.
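
The posterior-averaged objective is easy to sketch: approximate the expected closed-loop cost of a gain by Monte Carlo over sampled parameters, then search over gains. The paper optimizes convex bounds rather than a grid; the scalar system and posterior below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
a_s = rng.normal(1.0, 0.05, 500)         # posterior samples of a
b_s = rng.normal(1.0, 0.05, 500)         # posterior samples of b
q, r, sigma2 = 1.0, 0.1, 1.0             # cost weights and noise variance

def expected_cost(k):
    cl = a_s - b_s * k                   # closed-loop pole for each sample
    if np.any(np.abs(cl) >= 1.0):
        return np.inf                    # demand stability on every sample
    var = sigma2 / (1.0 - cl ** 2)       # stationary state variance per sample
    return np.mean((q + r * k ** 2) * var)

ks = np.linspace(0.1, 1.5, 141)
best = min(ks, key=expected_cost)
print(f"best gain ~ {best:.2f}, expected cost ~ {expected_cost(best):.2f}")
```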

Scenario-based Optimal Control for Gaussian Process State Space Models

This paper shows how scenarios are sampled from a Gaussian process and used in a differential dynamic programming approach to solve an optimal control problem, deriving probabilistic performance guarantees via results from robust convex optimization.
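
A heavily simplified sketch of the scenario idea: sample scenario parameters (in place of full GP trajectory samples), then pick the open-loop input sequence with the best average cost across scenarios, with random search standing in for differential dynamic programming. All constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
scenarios = rng.normal(0.95, 0.05, 30)   # sampled values of the unknown dynamics gain

def rollout_cost(u_seq, a):
    # Quadratic cost of an input sequence under scenario dynamics x+ = a*x + u.
    x, cost = 1.0, 0.0
    for u in u_seq:
        cost += x ** 2 + 0.1 * u ** 2
        x = a * x + u
    return cost + x ** 2

best_u, best_cost = None, np.inf
for _ in range(2000):                    # random search stands in for DDP
    u_seq = rng.normal(0.0, 0.5, 5)
    c = np.mean([rollout_cost(u_seq, a) for a in scenarios])
    if c < best_cost:
        best_u, best_cost = u_seq, c

print("best average scenario cost:", round(best_cost, 3))
```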