Corpus ID: 239998591

Learning Stable Deep Dynamics Models for Partially Observed or Delayed Dynamical Systems

@article{Schlaginhaufen2021LearningSD,
  title={Learning Stable Deep Dynamics Models for Partially Observed or Delayed Dynamical Systems},
  author={Andreas Schlaginhaufen and Philippe Wenk and Andreas Krause and Florian D{\"o}rfler},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.14296}
}
Learning how complex dynamical systems evolve over time is a key challenge in system identification. For safety-critical systems, it is often crucial that the learned model is guaranteed to converge to some equilibrium point. To this end, neural ODEs regularized with neural Lyapunov functions are a promising approach when states are fully observed. For practical applications, however, partial observations are the norm. As we will demonstrate, initialization of unobserved augmented states can… 
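The abstract breaks off here, but the setup it describes can be made concrete. The sketch below is hypothetical (the class and variable names are ours, not the paper's): a neural ODE over an augmented state whose unobserved part is initialized by a small encoder from a window of past observations, instead of by zeros; a Lyapunov-based stability regularizer, along the lines of the works cited below, would then be added to the training loss. It assumes the torchdiffeq package.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # pip install torchdiffeq


class AugmentedODEFunc(nn.Module):
    """Vector field over the concatenated [observed; augmented] state."""

    def __init__(self, obs_dim, aug_dim, hidden=64):
        super().__init__()
        dim = obs_dim + aug_dim
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim))

    def forward(self, t, z):
        return self.net(z)


class HistoryEncoder(nn.Module):
    """Maps a window of past observations to the initial augmented state."""

    def __init__(self, obs_dim, window, aug_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim * window, 64), nn.Tanh(), nn.Linear(64, aug_dim))

    def forward(self, y_hist):            # y_hist: (batch, window, obs_dim)
        return self.net(y_hist.flatten(1))


obs_dim, aug_dim, window = 2, 2, 5
func = AugmentedODEFunc(obs_dim, aug_dim)
enc = HistoryEncoder(obs_dim, window, aug_dim)

y_hist = torch.randn(8, window, obs_dim)           # a batch of observation windows
z0 = torch.cat([y_hist[:, -1], enc(y_hist)], -1)   # observed part + learned init
t = torch.linspace(0.0, 1.0, 20)
z_traj = odeint(func, z0, t)                       # (len(t), batch, obs_dim + aug_dim)
y_pred = z_traj[..., :obs_dim]                     # compare these to future observations
```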

References

Showing 1-10 of 51 references
Learning Stable Deep Dynamics Models
It is shown that such learning systems are able to model simple dynamical systems and can be combined with additional deep generative models to learn complex dynamics, such as video textures, in a fully end-to-end fashion.
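For context: the stability construction in this paper corrects a nominal vector field f̂ so that a learned Lyapunov function V strictly decreases along trajectories, via f(x) = f̂(x) − ∇V(x) · ReLU((∇V(x)ᵀ f̂(x) + αV(x)) / ‖∇V(x)‖²). A minimal PyTorch sketch of that projection (V is assumed positive definite, e.g. built from an input convex network as in the paper; names and shapes are ours):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ProjectedDynamics(nn.Module):
    """Wraps f_hat so that V decreases at rate alpha along all trajectories."""

    def __init__(self, f_hat, V, alpha=0.1):
        super().__init__()
        self.f_hat, self.V, self.alpha = f_hat, V, alpha

    def forward(self, x):                  # x: (batch, dim)
        if not x.requires_grad:            # need gradients of V w.r.t. x
            x = x.requires_grad_(True)
        v = self.V(x)                      # (batch, 1), positive definite
        (grad_v,) = torch.autograd.grad(v.sum(), x, create_graph=True)
        fx = self.f_hat(x)
        # Amount by which the decrease condition grad_v . f + alpha * V <= 0 is violated.
        viol = F.relu((grad_v * fx).sum(-1) + self.alpha * v.squeeze(-1))
        denom = (grad_v * grad_v).sum(-1).clamp_min(1e-8)
        return fx - (viol / denom).unsqueeze(-1) * grad_v
```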
The Lyapunov Neural Network: Adaptive Stability Certification for Safe Learning of Dynamic Systems
A method to learn accurate safety certificates for nonlinear, closed-loop dynamical systems by constructing a neural network Lyapunov function and a training algorithm that adapts it to the shape of the largest safe region in the state space.
Optimal Control Via Neural Networks: A Convex Approach
This paper explicitly constructs networks that are convex with respect to their inputs, and shows that these input convex networks can be trained to obtain accurate models of complex physical systems.
Neural Lyapunov Control
The approach significantly simplifies the process of Lyapunov control design, provides an end-to-end correctness guarantee, and can obtain much larger regions of attraction than existing methods such as LQR and SOS/SDP.
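That paper learns V jointly with the controller and verifies it with an SMT solver; the classical baseline it improves on can be sketched cheaply. Below, V(x) = xᵀPx comes from the closed-loop linearization, and a region of attraction is estimated by sampling V̇ on the nonlinear dynamics. The system (a damped pendulum with made-up gains) is purely illustrative, not from the paper:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Closed-loop pendulum-like dynamics (hypothetical gains): x = (angle, rate).
def f(x):
    return np.array([x[1], -4.0 * np.sin(x[0]) - 1.5 * x[1]])

A = np.array([[0.0, 1.0], [-4.0, -1.5]])        # linearization at the origin
P = solve_continuous_lyapunov(A.T, -np.eye(2))  # solves A'P + PA = -I

rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(20000, 2))
Fx = np.apply_along_axis(f, 1, X)
V = np.einsum("ni,ij,nj->n", X, P, X)
Vdot = 2.0 * np.einsum("ni,ij,nj->n", X, P, Fx)  # Vdot = 2 x'P f(x)

# Largest sublevel set {V <= c} on which every sampled point has Vdot < 0.
bad = V[Vdot >= 0.0]
c = bad.min() if bad.size else V.max()
print(f"sampled region-of-attraction estimate: V(x) <= {c:.3f}")
```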
Neural Ordinary Differential Equations
This work shows how to scalably backpropagate through any ODE solver, without access to its internal operations, which allows end-to-end training of ODEs within larger models.
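The key mechanism is the adjoint method: gradients are obtained by solving a second ODE backwards in time, so intermediate solver states need not be stored. The authors' torchdiffeq library exposes this as odeint_adjoint; a minimal usage sketch (the model and data are placeholders):

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint_adjoint  # must be given an nn.Module


class ODEFunc(nn.Module):
    def __init__(self, dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 32), nn.Tanh(), nn.Linear(32, dim))

    def forward(self, t, y):
        return self.net(y)


func, y0 = ODEFunc(), torch.randn(16, 2)
t = torch.linspace(0.0, 1.0, 10)
yT = odeint_adjoint(func, y0, t)[-1]  # gradients flow through the adjoint ODE
loss = yT.pow(2).mean()
loss.backward()                       # backprop without storing solver steps
```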
Delay Compensation for Nonlinear, Adaptive, and PDE Systems
Table of contents (excerpt): Preface. 1. Introduction. Part I. Linear Delay-ODE Cascades: 2. Basic Predictor Feedback. 3. Predictor Observers. 4. Inverse Optimal Redesign. 5. Robustness to Delay Mismatch. 6. Time-Varying Delay. Part II. …
Discovering governing equations from data by sparse identification of nonlinear dynamical systems
This work develops a novel framework to discover governing equations underlying a dynamical system simply from data measurements, leveraging advances in sparsity techniques and machine learning, and using sparse regression to determine the fewest terms in the dynamic governing equations required to accurately represent the data.
Augmented Neural ODEs
Augmented Neural ODEs are introduced which, in addition to being more expressive models, are empirically more stable, generalize better and have a lower computational cost than Neural ODEs.
Input Convex Neural Networks
This paper presents the input convex neural network architecture. These are scalar-valued (potentially deep) neural networks with constraints on the network parameters such that the output of the network is a convex function of (some of) the inputs.
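A minimal version of that construction: non-negative weights on the hidden-to-hidden path plus convex, non-decreasing activations make the scalar output convex in the input. Layer sizes and names below are ours:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ICNN(nn.Module):
    """Scalar-valued network convex in x."""

    def __init__(self, dim, hidden=(64, 64)):
        super().__init__()
        # Unconstrained skip connections from the input to every layer.
        self.Wx = nn.ModuleList([nn.Linear(dim, h) for h in hidden] + [nn.Linear(dim, 1)])
        # Hidden-to-hidden weights, constrained non-negative in forward().
        self.Wz = nn.ModuleList(
            [nn.Linear(hidden[i], hidden[i + 1], bias=False) for i in range(len(hidden) - 1)]
            + [nn.Linear(hidden[-1], 1, bias=False)]
        )

    def forward(self, x):
        z = F.softplus(self.Wx[0](x))
        for Wx, Wz in zip(self.Wx[1:], self.Wz):
            # Clamping enforces non-negativity, preserving convexity in x.
            z = F.softplus(Wx(x) + F.linear(z, Wz.weight.clamp(min=0)))
        return z  # (batch, 1), convex in x
```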
Necessary and Sufficient Razumikhin-Type Conditions for Stability of Delay Difference Equations
It is shown that the developed conditions can be verified by solving a linear matrix inequality, and it is indicated that the proposed relaxation of Lyapunov-Razumikhin functions has important implications for the construction of invariant sets for linear DDEs.