# Learning Stable Deep Dynamics Models for Partially Observed or Delayed Dynamical Systems

    @article{Schlaginhaufen2021LearningSD,
      title   = {Learning Stable Deep Dynamics Models for Partially Observed or Delayed Dynamical Systems},
      author  = {Andreas Schlaginhaufen and Philippe Wenk and Andreas Krause and Florian D{\"o}rfler},
      journal = {ArXiv},
      year    = {2021},
      volume  = {abs/2110.14296}
    }

Learning how complex dynamical systems evolve over time is a key challenge in system identification. For safety-critical systems, it is often crucial that the learned model is guaranteed to converge to some equilibrium point. To this end, neural ODEs regularized with neural Lyapunov functions are a promising approach when states are fully observed. For practical applications, however, partial observations are the norm. As we will demonstrate, initialization of unobserved augmented states can…
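
The stability regularization mentioned above builds on projecting a learned vector field so that a Lyapunov function decreases along trajectories. A minimal numpy sketch of that projection idea, with a hand-picked quadratic Lyapunov candidate `V(x) = ||x||²` and a toy vector field `f_nominal` standing in for a learned network (both are illustrative assumptions, not the paper's model):

```python
import numpy as np

def f_nominal(x):
    # Stand-in for a learned, possibly unstable vector field.
    return np.array([x[1], -np.sin(x[0]) + 0.5 * x[1]])

def V(x):
    # Hand-picked quadratic Lyapunov candidate V(x) = ||x||^2.
    return float(x @ x)

def grad_V(x):
    return 2.0 * x

def f_stable(x, alpha=0.1):
    """Project f_nominal so that dV/dt = grad(V) . f <= -alpha * V."""
    f = f_nominal(x)
    g = grad_V(x)
    violation = g @ f + alpha * V(x)
    if violation > 0:                       # correct only where the decrease condition fails
        f = f - g * violation / (g @ g)     # minimal-norm correction along grad(V)
    return f

x = np.array([1.0, 2.0])
# After projection the decrease condition holds at x.
assert grad_V(x) @ f_stable(x) <= -0.1 * V(x) + 1e-9
```

The projection is closed-form: subtracting the component of `f` along `grad(V)` that violates the decrease condition yields exactly `dV/dt = -alpha * V` at violating points, so every trajectory of `f_stable` is guaranteed to decrease `V`.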


## References

Showing 1–10 of 51 references

Learning Stable Deep Dynamics Models

- Computer Science, Mathematics · NeurIPS
- 2019

It is shown that such learning systems are able to model simple dynamical systems and can be combined with additional deep generative models to learn complex dynamics, such as video textures, in a fully end-to-end fashion.

The Lyapunov Neural Network: Adaptive Stability Certification for Safe Learning of Dynamic Systems

- Computer Science, Mathematics · CoRL
- 2018

A method to learn accurate safety certificates for nonlinear, closed-loop dynamical systems by constructing a neural network Lyapunov function and a training algorithm that adapts it to the shape of the largest safe region in the state space.

Optimal Control Via Neural Networks: A Convex Approach

- Computer Science, Mathematics · ICLR
- 2019

This paper explicitly constructs networks that are convex with respect to their inputs, and shows that these input convex networks can be trained to obtain accurate models of complex physical systems.

Neural Lyapunov Control

- Computer Science, Engineering · NeurIPS
- 2019

The approach significantly simplifies the process of Lyapunov control design, provides an end-to-end correctness guarantee, and can obtain much larger regions of attraction than existing methods such as LQR and SOS/SDP.

Neural Ordinary Differential Equations

- Computer Science, Mathematics · NeurIPS
- 2018

This work shows how to scalably backpropagate through any ODE solver, without access to its internal operations, which allows end-to-end training of ODEs within larger models.

Delay Compensation for Nonlinear, Adaptive, and PDE Systems

- Mathematics
- 2009

Preface; 1. Introduction; Part I. Linear Delay-ODE Cascades: 2. Basic Predictor Feedback; 3. Predictor Observers; 4. Inverse Optimal Redesign; 5. Robustness to Delay Mismatch; 6. Time-Varying Delay; Part II.…

Discovering governing equations from data by sparse identification of nonlinear dynamical systems

- Mathematics, Medicine · Proceedings of the National Academy of Sciences
- 2016

This work develops a novel framework to discover the governing equations underlying a dynamical system simply from data measurements, leveraging advances in sparsity techniques and machine learning, and using sparse regression to determine the fewest terms in the dynamic governing equations required to accurately represent the data.
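
The sparse-regression step this entry describes can be sketched in a few lines of numpy: fit a candidate library of terms to measured derivatives, then repeatedly threshold small coefficients and refit (sequentially thresholded least squares). The dynamics, library, and threshold below are toy choices for illustration, not the paper's setup:

```python
import numpy as np

# Toy data from dx/dt = -2*x; in practice dx would be estimated from measurements.
x = np.linspace(-2.0, 2.0, 50)
dx = -2.0 * x

# Candidate library Theta = [x, x^2, x^3]; we seek a sparse xi with dx ≈ Theta @ xi.
Theta = np.column_stack([x, x**2, x**3])

xi, *_ = np.linalg.lstsq(Theta, dx, rcond=None)
for _ in range(10):                     # sequentially thresholded least squares
    small = np.abs(xi) < 0.1
    xi[small] = 0.0
    big = ~small
    if big.any():
        xi[big], *_ = np.linalg.lstsq(Theta[:, big], dx, rcond=None)

# Recovers the single active term: coefficient ≈ -2 on x, zeros elsewhere.
```

Thresholding is what distinguishes this from plain least squares: it drives the coefficient vector toward the fewest terms needed, which is the sparsity principle the summary above refers to.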

Augmented Neural ODEs

- Computer Science, Mathematics · NeurIPS
- 2019

Augmented Neural ODEs are introduced which, in addition to being more expressive models, are empirically more stable, generalize better, and have a lower computational cost than Neural ODEs.

Input Convex Neural Networks

- Computer Science, Mathematics · ICML
- 2017

This paper presents the input convex neural network architecture. These are scalar-valued (potentially deep) neural networks with constraints on the network parameters such that the output of the…
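
The constraint the snippet alludes to is that hidden-to-hidden weights are kept nonnegative, so the scalar output is a nonnegative combination of convex functions of the input and hence convex. A minimal numpy sketch with random (untrained, purely illustrative) weights, verified by a numerical midpoint-convexity check:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-layer input convex network: the weights acting on hidden activations
# (Wz) are constrained to be nonnegative; input weights are unconstrained.
W0 = rng.normal(size=(8, 2)); b0 = rng.normal(size=8)
Wz = np.abs(rng.normal(size=(1, 8)))        # nonnegativity constraint
Wx = rng.normal(size=(1, 2)); b1 = rng.normal(size=1)

def icnn(x):
    """Scalar-valued network, convex in x."""
    z1 = np.maximum(W0 @ x + b0, 0.0)       # relu of an affine map is convex
    return (Wz @ z1 + Wx @ x + b1).item()   # nonnegative sum of convex + affine

# Midpoint-convexity check: f((a+b)/2) <= (f(a)+f(b))/2 for random pairs.
for _ in range(100):
    a, b = rng.normal(size=2), rng.normal(size=2)
    assert icnn((a + b) / 2) <= 0.5 * (icnn(a) + icnn(b)) + 1e-9
```

Convexity follows from composition rules (relu is convex and nondecreasing; nonnegative combinations preserve convexity), which is why only `Wz` needs the constraint while `Wx` can stay free.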

Necessary and Sufficient Razumikhin-Type Conditions for Stability of Delay Difference Equations

- Mathematics, Computer Science · IEEE Transactions on Automatic Control
- 2013

It is shown that the developed conditions can be verified by solving a linear matrix inequality, and that the proposed relaxation of Lyapunov-Razumikhin functions has an important implication for the construction of invariant sets for linear DDEs.