Learning Stochastic Dynamics with Statistics-Informed Neural Network

Yuanran Zhu, Yunhao Tang, and Changho Kim. J. Comput. Phys.



Machine Learning for Prediction with Missing Dynamics

Learning and meta-learning of stochastic advection–diffusion–reaction systems from sparse measurements

This work first employs the standard PINN and a stochastic version, sPINN, to solve forward and inverse problems governed by a non-linear advection–diffusion–reaction (ADR) equation, and then optimises the hyper-parameters of sPINN via Bayesian optimisation (meta-learning).

Neural Ordinary Differential Equations

This work shows how to scalably backpropagate through any ODE solver, without access to its internal operations, which allows end-to-end training of ODEs within larger models.
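The core idea above, a solver-defined layer whose output is the state of an ODE integrated through time, can be sketched with a black-box solver. This is an illustrative forward pass only (the adjoint-based backpropagation from the paper is omitted), and the tiny tanh network and its weights are hypothetical:

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)

# Hypothetical one-hidden-layer network defining the vector field dz/dt.
W1 = rng.standard_normal((8, 2)) * 0.5
W2 = rng.standard_normal((2, 8)) * 0.5

def f(t, z):
    # The network IS the dynamics: dz/dt = W2 tanh(W1 z).
    return W2 @ np.tanh(W1 @ z)

z0 = np.array([1.0, 0.0])           # "layer input"
sol = solve_ivp(f, (0.0, 1.0), z0, rtol=1e-6, atol=1e-8)
z1 = sol.y[:, -1]                   # "layer output" z(1)
```

Because the solver is treated as a black box, any adaptive integrator can be substituted without changing the model definition, which is what enables the end-to-end training described above.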

Infinitely Deep Bayesian Neural Networks with Stochastic Differential Equations

This approach makes continuous-depth Bayesian neural nets competitive with discrete-depth alternatives, while inheriting the memory-efficient training and tunable precision of Neural ODEs.

Neural Stochastic Partial Differential Equations

A novel neural architecture is introduced for learning solution operators of PDEs with (possibly stochastic) forcing from partially observed data; it learns complex spatiotemporal dynamics with better accuracy than all alternative models while using only a modest amount of training data.

Learning in Modal Space: Solving Time-Dependent Stochastic PDEs Using Physics-Informed Neural Networks

Two new Physics-Informed Neural Networks (PINNs) are proposed for solving time-dependent SPDEs, namely the NN-DO/BO methods, which incorporate the DO/BO constraints into the loss function in an implicit form instead of generating explicit expressions for the temporal derivatives of the DO/BO modes.

Generalized Langevin Equations for Systems with Local Interactions

We present a new method to approximate the Mori–Zwanzig (MZ) memory integral in generalized Langevin equations describing the evolution of smooth observables in high-dimensional nonlinear systems.
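The memory integral in question is the non-Markovian term of a GLE, dv/dt = -∫₀ᵗ K(t-s) v(s) ds (fluctuating force omitted). A minimal sketch of its discretization, assuming a scalar observable and an exponential kernel chosen purely for illustration:

```python
import numpy as np

# Assumed exponential memory kernel K(t) = lam * exp(-t / tau).
lam, tau, dt, n = 1.0, 0.5, 0.01, 500
K = lam * np.exp(-np.arange(n) * dt / tau)

v = np.zeros(n)
v[0] = 1.0
for i in range(1, n):
    # Riemann-sum discretization of the MZ memory integral
    # over the full history v(0), ..., v(t_{i-1}).
    mem = dt * np.sum(K[:i][::-1] * v[:i])
    v[i] = v[i - 1] - dt * mem
```

The O(n) history sum per step is exactly what makes the memory integral expensive and motivates the kernel approximation methods discussed in these papers.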

Learning interaction kernels in heterogeneous systems of agents from multiple trajectories

This paper establishes a condition for learnability of interaction kernels, and constructs estimators that are guaranteed to converge in a suitable $L^2$ space at the optimal minimax rate for one-dimensional nonparametric regression.
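The estimation problem above can be sketched with a simple least-squares stand-in for the paper's nonparametric estimator: agents on a line follow dxᵢ/dt = (1/N) Σⱼ φ(|xⱼ-xᵢ|)(xⱼ-xᵢ), and φ is regressed from observed velocities on a polynomial basis. The kernel, basis, and single-trajectory setup here are illustrative assumptions, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(1)

def phi_true(r):
    return np.exp(-r)  # assumed ground-truth interaction kernel

# Simulate one trajectory of N agents with forward Euler.
N, T, dt = 8, 200, 0.01
x = rng.standard_normal(N)
X = [x.copy()]
for _ in range(T):
    d = x[None, :] - x[:, None]      # pairwise displacements x_j - x_i
    r = np.abs(d)
    x = x + dt * (phi_true(r) * d).mean(axis=1)
    X.append(x.copy())
X = np.array(X)

# Regress finite-difference velocities on a polynomial basis for phi.
V = (X[1:] - X[:-1]) / dt            # observed velocities
A_rows, b_rows = [], []
for t in range(T):
    d = X[t][None, :] - X[t][:, None]
    r = np.abs(d)
    feats = np.stack([(r**k * d).mean(axis=1) for k in range(3)], axis=1)
    A_rows.append(feats)
    b_rows.append(V[t])
A = np.vstack(A_rows)
b = np.concatenate(b_rows)
coef, *_ = np.linalg.lstsq(A, b, rcond=None)

def phi_hat(r):
    # Estimated kernel: quadratic polynomial in the pairwise distance.
    return sum(c * r**k for k, c in enumerate(coef))
```

Replacing the polynomial basis with a piecewise-polynomial one on an adaptive partition, and pooling multiple trajectories, is what brings this toward the convergent estimators the paper analyzes.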

Effective Mori-Zwanzig equation for the reduced-order modeling of stochastic systems

It is shown that the semigroup estimates obtained for the EMZ equation can be used to derive prior estimates of the observable statistics for systems in equilibrium and nonequilibrium states, and the effectiveness of the proposed memory-kernel approximation methods is demonstrated.

Equation-free Model Reduction in Agent-based Computations: Coarse-grained Bifurcation and Variable-free Rare Event Analysis

The first part of this work describes the large-agent-number, deterministic limit of the system dynamics by performing numerical bifurcation calculations on a continuum approximation of the model, and demonstrates the "variable-free" approach by constructing a reaction coordinate directly from the simulation data itself.