Corpus ID: 229363798

An overview on deep learning-based approximation methods for partial differential equations

@article{Beck2020AnOO,
  title={An overview on deep learning-based approximation methods for partial differential equations},
  author={Christian Beck and Martin Hutzenthaler and Arnulf Jentzen and Benno Kuckuck},
  journal={ArXiv},
  year={2020},
  volume={abs/2012.12348}
}
It is one of the most challenging problems in applied mathematics to approximately solve high-dimensional partial differential equations (PDEs). Recently, several deep learning-based approximation algorithms for attacking this problem have been proposed and tested numerically on a number of examples of high-dimensional PDEs. This has given rise to a lively field of research in which deep learning-based methods and related Monte Carlo methods are applied to the approximation of high-dimensional…
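Many of the surveyed methods share a common pattern: the PDE is recast as a loss-minimization problem over the parameters of a neural network, which is then trained by stochastic gradient descent. The following is a purely illustrative sketch (not code from the paper) of one such approach, a physics-informed-style residual minimization for the one-dimensional heat equation u_t = u_xx with u(0, x) = sin(pi x) and zero boundary values; the architecture, sampling scheme, and hyperparameters are arbitrary assumptions.

```python
# Illustrative sketch only: PINN-style residual minimization for
# u_t = u_xx on (0,1) x (0,1); all choices below are assumptions.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def pde_residual(tx):
    tx = tx.requires_grad_(True)
    u = net(tx)
    du = torch.autograd.grad(u.sum(), tx, create_graph=True)[0]
    u_t, u_x = du[:, :1], du[:, 1:]
    u_xx = torch.autograd.grad(u_x.sum(), tx, create_graph=True)[0][:, 1:]
    return u_t - u_xx                                    # residual of u_t = u_xx

for step in range(2000):
    tx_int = torch.rand(256, 2)                                        # interior points (t, x)
    tx_ic = torch.cat([torch.zeros(64, 1), torch.rand(64, 1)], dim=1)  # initial slice t = 0
    tx_bc = torch.cat([torch.rand(64, 1),
                       torch.randint(0, 2, (64, 1)).float()], dim=1)   # boundary x in {0, 1}
    loss = (pde_residual(tx_int).pow(2).mean()
            + (net(tx_ic) - torch.sin(torch.pi * tx_ic[:, 1:])).pow(2).mean()
            + net(tx_bc).pow(2).mean())
    opt.zero_grad(); loss.backward(); opt.step()
```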

Citations

Understanding Loss Landscapes of Neural Network Models in Solving Partial Differential Equations
TLDR: A roughness index is proposed to scientifically and quantitatively describe the heuristic concept of "roughness" of the landscape around minimizers; the index is based on random projections and the variance of the (normalized) total variation of one-dimensional projected functions, and it is efficient to compute.
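The precise definition of the index is not reproduced in this summary, so the sketch below only illustrates one plausible reading of the description above: restrict the loss to random one-dimensional lines through a minimizer, compute a normalized total variation along each line, and report the variance across directions. The function name `roughness_index`, the normalization by the value range, and all parameters are assumptions.

```python
# Hedged sketch of a roughness-style index in the spirit described above;
# the exact definition in the paper may differ.
import numpy as np

def roughness_index(loss, theta_star, n_dirs=64, radius=1.0, n_grid=101, rng=None):
    rng = np.random.default_rng(rng)
    alphas = np.linspace(-radius, radius, n_grid)
    tvs = []
    for _ in range(n_dirs):
        v = rng.standard_normal(theta_star.shape)
        v /= np.linalg.norm(v)                        # random unit direction (projection)
        values = np.array([loss(theta_star + a * v) for a in alphas])
        tv = np.abs(np.diff(values)).sum()            # total variation along the line
        span = values.max() - values.min()
        tvs.append(tv / span if span > 0 else 0.0)    # normalized total variation
    return np.var(tvs)                                # spread across random projections

# toy usage: a smooth quadratic loss should give a small index
index = roughness_index(lambda th: float(np.sum(th**2)), np.zeros(10), rng=0)
```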
Three ways to solve partial differential equations with neural networks — A review
TLDR: This expository review introduces and contrasts three important recent approaches attractive in their simplicity and their suitability for high-dimensional problems: physics-informed neural networks, methods based on the Feynman–Kac formula, and methods based on the solution of backward stochastic differential equations.
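Of the three approaches named, the Feynman–Kac route is the simplest to illustrate: for the d-dimensional heat equation u_t = (1/2) Δu with initial condition u(0, x) = g(x), the formula u(t, x) = E[g(x + W_t)] turns pointwise evaluation into plain Monte Carlo, and such samples can then serve as regression targets for a neural network. The sketch below is a minimal stand-in (function names and the test function g are assumptions), not code from the review.

```python
# Monte Carlo evaluation of the Feynman-Kac representation
# u(t, x) = E[g(x + W_t)] for the d-dimensional heat equation.
import numpy as np

def heat_solution_mc(g, x, t, n_samples=100_000, rng=0):
    rng = np.random.default_rng(rng)
    d = x.shape[0]
    increments = np.sqrt(t) * rng.standard_normal((n_samples, d))   # samples of W_t
    return g(x + increments).mean()

# example: g(x) = ||x||^2, for which u(t, x) = ||x||^2 + d * t exactly
d = 100
x = np.ones(d)
approx = heat_solution_mc(lambda y: np.sum(y**2, axis=1), x, t=1.0)
exact = float(np.sum(x**2) + d * 1.0)
```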
Deep ReLU Neural Network Approximation for Stochastic Differential Equations with Jumps
TLDR: It is established that ReLU DNNs can break the curse of dimensionality (CoD for short) for viscosity solutions of linear, possibly degenerate partial integro-differential equations (PIDEs) corresponding to Markovian jump-diffusion processes.
Full history recursive multilevel Picard approximations for ordinary differential equations with expectations
TLDR: This work shows, for every δ > 0, that the proposed MLP approximation algorithm requires a computational effort of order at most ε^{-(2+δ)} to achieve a root-mean-square error of size ε.
Distributional Offline Continuous-Time Reinforcement Learning with Neural Physics-Informed PDEs (SciPhy RL for DOCTR-L)
TLDR: The proposed algorithm, dubbed ‘SciPhy RL’, reduces DOCTR-L to solving neural PDEs from data and enables a computable approach to the quality control of obtained policies in terms of both expected returns and uncertainties about their values.
Random feature neural networks learn Black-Scholes type PDEs without curse of dimensionality
TLDR: This article investigates the use of random feature neural networks for learning Kolmogorov partial (integro-)differential equations associated to Black-Scholes and more general exponential Lévy models, and derives bounds on the prediction error of random feature neural networks for learning sufficiently non-degenerate Black-Scholes type models.
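A hedged illustration of the random-feature idea: hidden-layer weights are drawn once at random and frozen, and only the linear readout is fit, here by ridge regression. Fitting a one-dimensional Black-Scholes call price is a toy stand-in for the Kolmogorov-PDE learning problems studied in the paper; the cosine feature map, the regularization parameter, and all other choices are assumptions.

```python
# Random-feature regression sketch: frozen random hidden layer,
# ridge-regression readout, Black-Scholes call price as toy target.
import numpy as np
from scipy.stats import norm

def bs_call(s, k=1.0, t=1.0, r=0.0, sigma=0.2):
    d1 = (np.log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * np.sqrt(t))
    d2 = d1 - sigma * np.sqrt(t)
    return s * norm.cdf(d1) - k * np.exp(-r * t) * norm.cdf(d2)

rng = np.random.default_rng(0)
n_features, lam = 512, 1e-6
W = rng.standard_normal((1, n_features))              # frozen random weights
b = rng.uniform(0, 2 * np.pi, n_features)             # frozen random biases

def features(s):
    return np.cos(s[:, None] * W + b)                 # random Fourier-type features (assumed)

s_train = rng.uniform(0.5, 1.5, 2000)
y_train = bs_call(s_train)
Phi = features(s_train)
# ridge regression for the readout weights
theta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n_features), Phi.T @ y_train)

s_test = np.linspace(0.6, 1.4, 5)
pred = features(s_test) @ theta
```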
Asymptotic-Preserving Neural Networks for Multiscale Time-Dependent Linear Transport Equations
TLDR: This paper develops a neural network for the numerical simulation of time-dependent linear transport equations with diffusive scaling and uncertainties, and builds a mass-conservation mechanism into the loss function in order to capture the dynamic and multiscale nature of the solutions.
A Deep Gradient Correction Method for Iteratively Solving Linear Systems
We present a novel deep learning approach to approximate the solution of large, sparse, symmetric, positive-definite linear systems of equations. These systems arise from many problems in applied…
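The general pattern suggested by the title can be sketched as follows, with the caveat that this is an assumption-laden stand-in rather than the paper's actual architecture: an iterative solver for a symmetric positive-definite system A x = b maps each residual to an update direction, and the mapping (here a plain Jacobi preconditioner playing the role of a trained network) is combined with an exact line search.

```python
# Sketch of an iterative SPD solve in which each residual is mapped to an
# update direction; the `correction` below is a Jacobi (diagonal)
# preconditioner standing in for a trained network, purely for illustration.
import numpy as np

def iterative_solve(A, b, correction, tol=1e-8, max_iter=500):
    x = np.zeros_like(b)
    for _ in range(max_iter):
        r = b - A @ x                        # current residual
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        d = correction(r)                    # predicted direction (stand-in here)
        alpha = (r @ d) / (d @ (A @ d))      # exact line search along d
        x = x + alpha * d
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)                # SPD test matrix
b = rng.standard_normal(50)
x = iterative_solve(A, b, correction=lambda r: r / np.diag(A))
```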
Credit Valuation Adjustment with Replacement Closeout: Theory and Algorithms
The replacement closeout convention has drawn more and more attention since the 2008 financial crisis. Compared with the conventional risk-free closeout, the replacement closeout convention…
Deep learning approximations for non-local nonlinear PDEs with Neumann boundary conditions
TLDR: Two numerical methods, based on machine learning and on Picard iterations, respectively, are proposed for approximately solving high-dimensional non-local nonlinear PDEs with Neumann boundary conditions.

References

Showing 1-10 of 182 references
Space-time deep neural network approximations for high-dimensional partial differential equations
TLDR: The main result of this work proves, for all real numbers $a < b$, that solutions of certain Kolmogorov PDEs can be approximated by DNNs on the space-time region $[0,T]\times [a,b]^d$ without the curse of dimensionality.
Deep backward schemes for high-dimensional nonlinear PDEs
TLDR: The proposed new machine learning schemes for solving high-dimensional nonlinear partial differential equations (PDEs) rely on the classical backward stochastic differential equation (BSDE) representation of PDEs and provide error estimates in terms of the universal approximation capabilities of neural networks.
Solving high-dimensional partial differential equations using deep learning
TLDR: A deep learning-based approach is proposed that can handle general high-dimensional parabolic PDEs using backward stochastic differential equations; the gradient of the unknown solution is approximated by neural networks, very much in the spirit of deep reinforcement learning with the gradient acting as the policy function.
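A minimal sketch of a deep-BSDE-style solver in the spirit of the description above (not the authors' code): for a semilinear PDE u_t + (1/2) Δu + f(u) = 0 with terminal condition u(T, x) = g(x), the value u(0, x0) is a trainable scalar, the gradient at each time step is a small network, and the loss enforces the terminal condition along simulated Brownian paths. The dimension, step count, and the choices f = 0 and g(x) = ||x||^2 are assumptions made for brevity.

```python
# Deep-BSDE-style sketch: trainable initial value Y0, per-step gradient
# networks Z_n, loss = terminal-condition mismatch along Brownian paths.
import torch

torch.manual_seed(0)
d, n_steps, T, batch = 10, 20, 1.0, 256
dt = T / n_steps

g = lambda x: (x ** 2).sum(dim=1, keepdim=True)      # terminal condition g(x) = ||x||^2
f = lambda y: torch.zeros_like(y)                    # nonlinearity (none here)

y0 = torch.nn.Parameter(torch.zeros(1))              # trainable value u(0, x0)
z_nets = torch.nn.ModuleList([
    torch.nn.Sequential(torch.nn.Linear(d, 32), torch.nn.ReLU(), torch.nn.Linear(32, d))
    for _ in range(n_steps)
])
opt = torch.optim.Adam([y0, *z_nets.parameters()], lr=1e-2)

x0 = torch.zeros(d)
for step in range(2000):
    x = x0.expand(batch, d)
    y = y0.expand(batch, 1)
    for n in range(n_steps):
        dw = torch.randn(batch, d) * dt ** 0.5                   # Brownian increments
        z = z_nets[n](x)                                         # approximation of grad u(t_n, x)
        y = y - f(y) * dt + (z * dw).sum(dim=1, keepdim=True)    # BSDE step for Y
        x = x + dw                                               # Euler step of the forward process
    loss = ((y - g(x)) ** 2).mean()                              # enforce Y_T = g(X_T)
    opt.zero_grad(); loss.backward(); opt.step()
# after training, y0 approximates u(0, x0); the exact value here is d * T = 10
```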
Algorithms for solving high dimensional PDEs: from nonlinear Monte Carlo to machine learning
TLDR: It is demonstrated to the reader that studying PDEs as well as control and variational problems in very high dimensions might very well be among the most promising new directions in mathematics and scientific computing in the near future.
Unbiased deep solvers for parametric PDEs
TLDR: Several deep learning algorithms for approximating families of parametric PDE solutions are developed that are robust with respect to the quality of the neural network approximation and can consequently be used as a black box when only limited a priori information about the underlying problem is available.
DGM: A deep learning algorithm for solving partial differential equations
A proof that rectified deep neural networks overcome the curse of dimensionality in the numerical approximation of semilinear heat equations
TLDR: It is proved for the first time that, in the case of semilinear heat equations with gradient-independent nonlinearities, the number of parameters of the employed deep neural networks grows at most polynomially in both the PDE dimension and the reciprocal of the prescribed approximation accuracy.
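Bounds of this kind typically take the following schematic form; the constants, the error norm, and the precise hypotheses vary with the exact theorem and are not reproduced here.

```latex
% Schematic form of a curse-of-dimensionality-breaking approximation bound;
% c, C, the norm, and the hypotheses depend on the precise statement.
\[
  \exists\, c, C > 0 \;\; \forall\, d \in \mathbb{N},\ \varepsilon \in (0,1]
  \;\; \exists\ \text{ReLU DNN } \phi_{d,\varepsilon}:\qquad
  \#\mathrm{params}(\phi_{d,\varepsilon}) \;\le\; C\, d^{c}\, \varepsilon^{-c}
  \quad\text{and}\quad
  \bigl\| u_d - \phi_{d,\varepsilon} \bigr\| \;\le\; \varepsilon,
\]
% where $u_d$ denotes the solution of the $d$-dimensional semilinear heat equation.
```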
Two-Layer Neural Networks for Partial Differential Equations: Optimization and Generalization Theory
TLDR: This paper shows that gradient descent can identify a global minimizer of the optimization problem with a well-controlled generalization error in the case of two-layer neural networks in the over-parameterization regime.
Numerical Solution of the Parametric Diffusion Equation by Deep Neural Networks
TLDR: This work finds strong support for the hypothesis that approximation-theoretical effects heavily influence the practical behavior of learning problems in numerical analysis.