Corpus ID: 229156058

Solving for high dimensional committor functions using neural network with online approximation to derivatives

@article{Li2020SolvingFH,
  title={Solving for high dimensional committor functions using neural network with online approximation to derivatives},
  author={Haoya Li and Yuehaw Khoo and Yinuo Ren and Lexing Ying},
  journal={ArXiv},
  year={2020},
  volume={abs/2012.06727}
}
This paper proposes a new method based on neural networks for computing high-dimensional committor functions that satisfy Fokker-Planck equations. Instead of working with partial differential equations, the new method works with an integral formulation involving the semigroup of the differential operator. The variational form of the new formulation is then solved by parameterizing the committor function as a neural network. As the main benefit of this new approach, stochastic gradient…
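
For context, the committor and its variational characterization from transition path theory can be stated as follows. This is the standard overdamped-Langevin setting (potential V, inverse temperature β, reactant and product sets A and B), given here for orientation; the paper's semigroup-based objective is a different, integral reformulation of the same problem.

```latex
% Committor q(x): probability that the diffusion started at x reaches B before A,
% for overdamped Langevin dynamics dX_t = -\nabla V(X_t)\,dt + \sqrt{2\beta^{-1}}\,dW_t.
\[
\begin{aligned}
  \beta^{-1}\,\Delta q(x) - \nabla V(x)\cdot\nabla q(x) &= 0, && x \in \Omega \setminus (A \cup B),\\
  q(x) &= 0, && x \in \partial A,\\
  q(x) &= 1, && x \in \partial B.
\end{aligned}
\]
% Equivalent Dirichlet-form variational characterization:
\[
  q = \operatorname*{arg\,min}_{u|_{\partial A}=0,\; u|_{\partial B}=1}
      \int_{\Omega \setminus (A \cup B)} |\nabla u(x)|^2 \, e^{-\beta V(x)} \, dx .
\]
```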

Committor functions via tensor networks

A semigroup method for high dimensional elliptic PDEs and eigenvalue problems based on neural networks

Learning forecasts of rare stratospheric transitions from short simulations

Rare events arising in nonlinear atmospheric dynamics remain hard to predict and attribute. We address the problem of forecasting rare events in a prototypical example, Sudden Stratospheric Warmings.

Statistical analysis of tipping pathways in agent-based models

Agent-based models are a natural choice for modeling complex social systems. In such models, simple stochastic interaction rules for a large population of individuals on the microscopic scale can lead…

References

Showing 1–10 of 23 references

Solving for high-dimensional committor functions using artificial neural networks

A method based on artificial neural networks is proposed to study transitions between states governed by stochastic processes, aiming at numerical schemes for the committor function, the central object of transition path theory, which satisfies a high-dimensional Fokker–Planck equation.
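
To make the strategy concrete, here is a minimal sketch of the generic variational approach this line of work takes: parameterize q with a network, minimize a Monte Carlo estimate of the Dirichlet energy, and enforce the boundary conditions with soft penalties. The potential, the region indicators, the sampler, and all weights below are illustrative placeholders, not the paper's actual problem or hyperparameters.

```python
import torch

# Minimal sketch (illustrative, not the paper's implementation): parameterize
# the committor q(x) with an MLP and minimize a Monte Carlo estimate of the
# Dirichlet energy E[|grad q|^2 exp(-beta V)], with soft penalties pushing
# q -> 0 on region A and q -> 1 on region B.

beta, dim, penalty = 1.0, 10, 100.0

def potential(x):                        # placeholder potential V(x)
    return 0.5 * (x ** 2).sum(dim=1)

def in_A(x):                             # illustrative reactant/product regions
    return (x[:, 0] < -1.0).float()

def in_B(x):
    return (x[:, 0] > 1.0).float()

q_net = torch.nn.Sequential(
    torch.nn.Linear(dim, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1), torch.nn.Sigmoid(),  # keeps q in [0, 1]
)
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.randn(512, dim, requires_grad=True)  # crude sampler; the papers
    q = q_net(x).squeeze(-1)                       # use far better sampling
    (grad_q,) = torch.autograd.grad(q.sum(), x, create_graph=True)
    dirichlet = ((grad_q ** 2).sum(dim=1) * torch.exp(-beta * potential(x))).mean()
    bc = (in_A(x) * q ** 2 + in_B(x) * (q - 1.0) ** 2).mean()
    loss = dirichlet + penalty * bc
    opt.zero_grad(); loss.backward(); opt.step()
```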

On Lazy Training in Differentiable Programming

This work shows that the "lazy training" phenomenon is not specific to over-parameterized neural networks but is due to a choice of scaling that makes the model behave as its linearization around the initialization, yielding a model equivalent to learning with positive-definite kernels.

Adam: A Method for Stochastic Optimization

This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions based on adaptive estimates of lower-order moments, and provides a regret bound comparable to the best known results under the online convex optimization framework.
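
For reference, the core of the update fits in a few lines: exponentially weighted estimates of the gradient's first and second moments, bias-corrected for their initialization at zero. The sketch below uses the default constants from the paper; the toy quadratic in the usage loop is purely illustrative.

```python
import numpy as np

# One Adam update (defaults as in Kingma & Ba): exponential moving averages of
# the gradient (first moment) and its square (second moment), bias-corrected.

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad           # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2      # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                 # bias correction (t >= 1)
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Usage: carry (m, v) across steps, starting from zeros and t = 1.
theta, m, v = np.zeros(3), np.zeros(3), np.zeros(3)
for t in range(1, 101):
    grad = 2.0 * theta - 1.0                     # gradient of a toy quadratic
    theta, m, v = adam_step(theta, grad, m, v, t)
```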

Learning with rare data: Using active importance sampling to optimize objectives dominated by rare events

This work introduces an approach that combines rare-event sampling techniques with neural network training to optimize objective functions dominated by rare events, and shows that importance sampling reduces the asymptotic variance of the solution to a learning problem, suggesting benefits for generalization.
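
The variance-reduction mechanism can be illustrated in isolation: reweighting samples from a proposal concentrated on the rare event recovers an expectation that a naive Monte Carlo average of the same size almost never sees. This is a generic textbook illustration, not the paper's active sampling scheme.

```python
import numpy as np

# Generic importance-sampling illustration: estimate E_p[f(X)] for a rare-event
# indicator f under p = N(0, 1), using a proposal q = N(5, 1) shifted into the
# tail. The weights are the likelihood ratios w(x) = p(x) / q(x).

rng = np.random.default_rng(0)
f = lambda x: (x > 5.0).astype(float)            # rare event: P(X > 5) ~ 2.9e-7

x_p = rng.standard_normal(100_000)               # naive Monte Carlo under p
naive = f(x_p).mean()                            # almost surely 0 at this size

x_q = rng.standard_normal(100_000) + 5.0         # samples from the proposal q
log_w = -0.5 * x_q ** 2 + 0.5 * (x_q - 5.0) ** 2 # log p(x) - log q(x)
is_est = (np.exp(log_w) * f(x_q)).mean()         # low-variance estimate

print(naive, is_est)                             # is_est ~ 2.9e-7
```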

Computing Committor Functions for the Study of Rare Events Using Deep Learning

A computational approach is introduced that overcomes the curse of dimensionality and the scarcity of transition data, achieving good performance on complex benchmark problems with rough energy landscapes.

Approximate Temporal Difference Learning is a Gradient Descent for Reversible Policies

It is proved that approximate TD is a gradient descent provided the current policy is reversible, even with nonlinear approximations; this establishes stability of TD with no decay factor and without relying on contractivity of the Bellman operator.
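
For orientation, the object under analysis is the plain TD(0) update. The sketch below runs it on a symmetric random walk, whose Markov chain is reversible, which is the kind of condition the paper requires; the reward and step size are illustrative, and the snippet does not verify reversibility.

```python
import numpy as np

# Tabular TD(0) on a toy reversible chain (symmetric random walk on a ring):
# the value estimate is nudged toward the one-step bootstrap target
# r + gamma * V[s']. Reward and step size are illustrative.

n, gamma, alpha = 8, 0.9, 0.1
V = np.zeros(n)
rng = np.random.default_rng(0)
s = 0
for _ in range(10_000):
    s_next = (s + rng.choice([-1, 1])) % n          # symmetric => reversible
    r = 1.0 if s_next == 0 else 0.0                 # illustrative reward
    V[s] += alpha * (r + gamma * V[s_next] - V[s])  # TD(0) update
    s = s_next
```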

Diffusion Maps, Reduction Coordinates, and Low Dimensional Representation of Stochastic Systems

This paper uses the first few eigenfunctions of the backward Fokker–Planck diffusion operator as a coarse-grained, low-dimensional representation for the long-term evolution of a stochastic system and shows that they are optimal under a certain mean squared error criterion.
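
A generic version of the construction fits in a few lines: form a Gaussian kernel matrix on the samples, row-normalize it into a Markov matrix, and take the leading nontrivial eigenvectors as coarse coordinates. Bandwidth selection and the density-correcting normalizations of the diffusion-maps literature are omitted here for brevity.

```python
import numpy as np

# Generic diffusion-maps recipe (bandwidth and normalization choices vary):
# leading nontrivial eigenvectors of a Markov matrix built from a Gaussian
# kernel on the samples serve as low-dimensional reduction coordinates
# approximating eigenfunctions of the backward operator.

def diffusion_map(X, eps=1.0, k=2):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    K = np.exp(-d2 / eps)                                # Gaussian kernel
    P = K / K.sum(axis=1, keepdims=True)                 # row-stochastic matrix
    evals, evecs = np.linalg.eig(P)
    order = np.argsort(-evals.real)                      # sort by eigenvalue
    return evecs.real[:, order[1:k + 1]]                 # skip constant mode

coords = diffusion_map(np.random.default_rng(0).normal(size=(200, 5)))
```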

Revisiting the finite temperature string method for the calculation of reaction tubes and free energies.

An improved and simplified version of the finite temperature string (FTS) method is presented that calculates the principal curves associated with the Boltzmann–Gibbs probability distribution of the system via sampling in the Voronoi tessellation whose generating points are the discretization points along the curve.

Stochastic Processes and Applications: Diffusion Processes, the Fokker-Planck and Langevin Equations

Contents: Stochastic Processes; Diffusion Processes; Introduction to Stochastic Differential Equations; The Fokker–Planck Equation; Modelling with Stochastic Differential Equations; The Langevin Equation.

Point Cloud Discretization of Fokker-Planck Operators for Committor Functions

The committor functions provide useful information for understanding transitions of a stochastic system between disjoint regions in phase space. In this work, we develop a point cloud discretization of Fokker–Planck operators for computing committor functions.