Corpus ID: 235377228

Learning normal form autoencoders for data-driven discovery of universal, parameter-dependent governing equations

@article{Kalia2021LearningNF,
  title={Learning normal form autoencoders for data-driven discovery of universal, parameter-dependent governing equations},
  author={Manu Kalia and Steven L. Brunton and Hil G. E. Meijer and Christoph Brune and J. Nathan Kutz},
  journal={arXiv preprint arXiv:2106.05102},
  year={2021}
}
Complex systems manifest a small number of instabilities and bifurcations that are canonical in nature, resulting in universal pattern forming characteristics as a function of some parametric dependence. Such parametric instabilities are mathematically characterized by their universal unfoldings, or normal form dynamics, whereby a parsimonious model can be used to represent the dynamics. Although center-manifold theory guarantees the existence of such low-dimensional normal forms, finding them… 
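For concreteness, the normal forms the abstract alludes to are low-dimensional polynomial models in which a single parameter unfolds the instability. Two standard textbook examples (illustrative, not reproduced from the paper itself) are the pitchfork and Hopf bifurcations:

```latex
% Pitchfork normal form: the parameter \mu unfolds the instability at \mu = 0
\dot{x} = \mu x - x^{3}

% Hopf normal form in polar coordinates (r, \theta)
\dot{r} = \mu r - r^{3}, \qquad \dot{\theta} = \omega
```

In both cases one parameter $\mu$ controls the bifurcation, which is why a parsimonious, parameter-dependent model can represent the dynamics near the instability.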


Discovering Governing Equations from Partial Measurements with Deep Delay Autoencoders
TLDR
It is shown that it is possible to simultaneously learn a closed-form model and the associated coordinate system for partially observed dynamics; the framework combines deep learning to uncover effective coordinates with the sparse identification of nonlinear dynamics (SINDy) for interpretable modeling.
A toolkit for data-driven discovery of governing equations in high-noise regimes
TLDR
An extensive toolkit of methods for circumventing the deleterious effects of noise in the context of the sparse identification of nonlinear dynamics (SINDy) framework is described, along with a technique that uses linear dependencies among functionals to transform a discovered model into an equivalent form closest to the true model, enabling more accurate assessment of a discovery method's accuracy.
Ensemble-SINDy: Robust sparse model discovery in the low-data, high-noise limit, with active learning and control
TLDR
This work leverages the statistical approach of bootstrap aggregating (bagging) to robustify the sparse identification of nonlinear dynamics (SINDy) algorithm, and shows that ensemble statistics from E-SINDy can be exploited for active learning and improved model predictive control.
An artificial neural network approach to bifurcating phenomena in computational fluid dynamics
TLDR
This work studies the Navier–Stokes equations describing the Coandă effect in a channel and the lid-driven triangular cavity flow, in a physical/geometrical multi-parametrized setting, and proposes a reduced manifold-based bifurcation diagram for a non-intrusive recovery of the critical points' evolution.

References

Showing 1–10 of 38 references
Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders
TLDR
The ability of the method to significantly outperform even the optimal linear-subspace ROM on benchmark advection-dominated problems is demonstrated, thereby overcoming the intrinsic Kolmogorov $n$-width limitations of linear subspaces.
Manifold learning for parameter reduction
TLDR
This work explores systematic, data-driven parameter reduction by means of effective parameter identification, starting from current nonlinear manifold-learning techniques that enable state-space reduction.
Reconstruction of normal forms by learning informed observation geometries from data
TLDR
A geometric/analytic learning algorithm capable of creating minimal descriptions of parametrically dependent unknown nonlinear dynamical systems and an informed observation geometry that enables us to formulate models without first principles as well as without closed form equations are discussed.
Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations
Abstract: We introduce physics-informed neural networks – neural networks that are trained to solve supervised learning tasks while respecting any given laws of physics described by general nonlinear…
Manifold learning for organizing unstructured sets of process observations.
TLDR
This paper uses manifold learning to organize unstructured ensembles of observations ("trials") of a system's response surface, and demonstrates how this observation-based reconstruction naturally leads to informative transport maps between the input parameter space and output/state variable spaces.
Discovering governing equations from data by sparse identification of nonlinear dynamical systems
TLDR
This work develops a novel framework to discover governing equations underlying a dynamical system simply from data measurements, leveraging advances in sparsity techniques and machine learning and using sparse regression to determine the fewest terms in the dynamic governing equations required to accurately represent the data.
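The sparse-regression step at the heart of SINDy can be sketched in a few lines. This is a minimal illustration of sequentially thresholded least squares, not the authors' implementation; the function name, library choice, and toy data are all assumptions for demonstration.

```python
# Minimal sketch of the SINDy idea: regress measured derivatives dX/dt
# onto a library of candidate functions Theta(X), then iteratively zero
# small coefficients and refit, yielding a sparse (parsimonious) model.
import numpy as np

def stlsq(theta, dxdt, threshold=0.1, n_iter=10):
    """Sequentially thresholded least squares (hypothetical helper)."""
    xi = np.linalg.lstsq(theta, dxdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        for k in range(dxdt.shape[1]):      # refit each state dimension
            big = ~small[:, k]
            if big.any():
                xi[big, k] = np.linalg.lstsq(theta[:, big], dxdt[:, k],
                                             rcond=None)[0]
    return xi

# Toy demo: recover dx/dt = -2*x from noisy samples.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(200, 1))
dxdt = -2.0 * x + 0.01 * rng.standard_normal((200, 1))
theta = np.hstack([np.ones_like(x), x, x**2, x**3])  # library [1, x, x^2, x^3]
xi = stlsq(theta, dxdt)
print(xi.ravel())  # only the x coefficient (about -2) should survive
```

The threshold is the key hyperparameter: it trades off sparsity against fit, and is what makes the recovered model interpretable rather than a dense regression.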
Dynamic mode decomposition - data-driven modeling of complex systems
TLDR
The first book to address the DMD algorithm, it presents a pedagogical and comprehensive approach to all aspects of DMD currently developed or under development, blending theoretical development, example codes, and applications to showcase the theory and its many innovations and uses.
Data-driven discovery of coordinates and governing equations
TLDR
A custom deep autoencoder network is designed to discover a coordinate transformation into a reduced space where the dynamics may be sparsely represented, and the governing equations and the associated coordinate system are simultaneously learned.
Boltzmann generators: Sampling equilibrium states of many-body systems with deep learning
TLDR
Boltzmann generators are trained on the energy function of a many-body system and learn to provide unbiased, one-shot samples from its equilibrium state and can be trained to directly generate independent samples of low-energy structures of condensed-matter systems and protein molecules.
Adam: A Method for Stochastic Optimization
TLDR
This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
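The adaptive moment estimates described above can be written out directly. This is a bare sketch of the Adam update rule from the paper applied to a toy quadratic; the function name and step sizes are illustrative choices, not a reference implementation.

```python
# Sketch of one Adam step: exponential moving averages of the gradient (m)
# and its elementwise square (v), with bias correction for the zero init.
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad        # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad**2     # second-moment estimate
    m_hat = m / (1 - beta1**t)                # bias-corrected moments
    v_hat = v / (1 - beta2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy demo: minimize f(theta) = theta^2, whose gradient is 2*theta.
theta = np.array([1.0])
m = np.zeros(1)
v = np.zeros(1)
for t in range(1, 3001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t, lr=0.01)
print(theta)  # settles near the minimizer at 0
```

Note the per-parameter scaling by the second moment: near a minimum the effective step behaves like a bounded, sign-like update of size roughly `lr`, which is why the learning rate also bounds the residual oscillation.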