Corpus ID: 214802847

On the Convergence and Generalization of Physics Informed Neural Networks

Yeonjong Shin, Jérôme Darbon, and George Em Karniadakis
Physics-informed neural networks (PINNs) are deep-learning-based techniques for solving partial differential equations (PDEs). Guided by data and physical laws, PINNs find a neural network that approximates the solution to a system of PDEs. Such a neural network is obtained by minimizing a loss function in which prior knowledge of the PDEs and the data is encoded. Despite their remarkable empirical success, there is little theoretical justification for PINNs. In this paper, we establish a…
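The loss function described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' code: the mean squared PDE residual plus a boundary-data misfit for a 1-D Poisson problem, with finite differences standing in for the automatic differentiation a real PINN would use (all function and variable names here are hypothetical).

```python
import numpy as np

def pinn_loss(u, x_interior, x_boundary, u_boundary, f, h=1e-4):
    """Composite PINN-style loss for the model problem u''(x) = f(x):
    mean squared PDE residual at interior collocation points plus the
    mean squared boundary-condition misfit.  Derivatives are taken by
    central finite differences here; actual PINNs differentiate the
    network with automatic differentiation instead."""
    u_xx = (u(x_interior + h) - 2.0 * u(x_interior) + u(x_interior - h)) / h**2
    residual = u_xx - f(x_interior)         # PDE residual r(x) = u'' - f
    misfit = u(x_boundary) - u_boundary     # boundary-data error
    return np.mean(residual**2) + np.mean(misfit**2)

# Poisson problem u'' = -sin(x) on [0, pi] with u(0) = u(pi) = 0;
# the exact solution u(x) = sin(x) drives the loss to (near) zero,
# while a wrong candidate is penalized.
f = lambda x: -np.sin(x)
x_int = np.linspace(0.1, np.pi - 0.1, 50)
x_bc, u_bc = np.array([0.0, np.pi]), np.array([0.0, 0.0])
loss_exact = pinn_loss(np.sin, x_int, x_bc, u_bc, f)
loss_wrong = pinn_loss(lambda x: 0.5 * x, x_int, x_bc, u_bc, f)
```

Minimizing this quantity over a family of candidate functions is exactly the "encoding prior knowledge of PDEs and data in a loss" that the abstract refers to.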


Sobolev Training for Physics Informed Neural Networks

Inspired by recent studies that incorporate derivative information into the training of neural networks, a loss function is developed that guides a neural network to reduce the error in the corresponding Sobolev space, making the training substantially more efficient.
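The idea of a Sobolev-space loss can be illustrated concretely. This is a hedged sketch, not the paper's formulation: an H¹-style objective penalizes the misfit of both values and first derivatives, so a candidate that matches values but oscillates is still penalized (names and the finite-difference derivative are illustrative assumptions).

```python
import numpy as np

def sobolev_loss(u, x, u_target, du_target, h=1e-5, lam=1.0):
    """H^1-style training loss (a sketch): penalize the misfit of both
    function values and first derivatives, so the objective controls
    error in a Sobolev norm rather than plain L2.  The candidate's
    derivative is approximated by central differences here; a neural
    network would supply it via automatic differentiation."""
    value_err = u(x) - u_target
    deriv_err = (u(x + h) - u(x - h)) / (2.0 * h) - du_target
    return np.mean(value_err**2) + lam * np.mean(deriv_err**2)

# A candidate with tiny L2 error but a badly oscillating derivative:
# plain L2 barely notices it, while the Sobolev term penalizes it.
x = np.linspace(0.0, 1.0, 200)
target, d_target = x.copy(), np.ones_like(x)     # u(x) = x, u'(x) = 1
wiggly = lambda t: t + 0.01 * np.sin(50.0 * t)
l2_only = np.mean((wiggly(x) - target)**2)
h1_loss = sobolev_loss(wiggly, x, target, d_target)
```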

Error analysis for physics-informed neural networks (PINNs) approximating Kolmogorov PDEs

It is proved that the size of the PINNs and the number of training samples only grow polynomially with the underlying dimension, enabling PINNs to overcome the curse of dimensionality in this context.

Sobolev Training for the Neural Network Solutions of PDEs

This paper develops a loss function that guides a neural network to reduce the error in the corresponding Sobolev space, and provides empirical evidence that the proposed loss function, together with iterative sampling techniques, performs better in solving high-dimensional PDEs.

Learning Physics-Informed Neural Networks without Stacked Back-propagation

A novel approach is developed that significantly accelerates the training of Physics-Informed Neural Networks: the PDE solution is parameterized by a Gaussian-smoothed model and, via Stein's identity, the second-order derivatives can be calculated efficiently without back-propagation.
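The Stein's-identity trick mentioned here can be sketched in one dimension. This is an illustrative Monte Carlo estimator under assumed notation, not the paper's implementation (which works with full networks and variance-reduced estimators): the second derivative of a Gaussian-smoothed function is an expectation of forward evaluations only.

```python
import numpy as np

def stein_second_derivative(f, x, sigma=0.1, n=200_000, seed=0):
    """Monte Carlo estimate of the second derivative of the Gaussian-
    smoothed function f_sigma(x) = E[f(x + sigma*eps)], eps ~ N(0, 1),
    via Stein's identity:
        f_sigma''(x) = E[ f(x + sigma*eps) * (eps**2 - 1) ] / sigma**2.
    Only forward evaluations of f are needed -- no back-propagation."""
    eps = np.random.default_rng(seed).standard_normal(n)
    return np.mean(f(x + sigma * eps) * (eps**2 - 1.0)) / sigma**2

# For f(x) = x**2 the smoothed function is x**2 + sigma**2, whose
# second derivative is exactly 2 at every x, so the estimate should
# land near 2 (up to Monte Carlo noise).
est = stein_second_derivative(lambda t: t**2, x=0.5)
```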

On the Role of Fixed Points of Dynamical Systems in Training Physics-Informed Neural Networks

This paper empirically studies commonly observed training difficulties of Physics-Informed Neural Networks (PINNs) on dynamical systems. Our results indicate that fixed points which are inherent to…

Physics Informed Convex Artificial Neural Networks (PICANNs) for Optimal Transport based Density Estimation

This framework is based on Brenier’s theorem, which reduces the continuous optimal mass transport (OMT) problem to that of solving a nonlinear PDE of Monge–Ampère type whose solution is a convex function.

Self-Adaptive Physics-Informed Neural Networks using a Soft Attention Mechanism

A fundamentally new method is proposed to train PINNs adaptively: the adaptation weights are fully trainable, so the neural network learns by itself which regions of the solution are difficult and is forced to focus on them, reminiscent of the soft multiplicative attention masks used in computer vision.
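The trainable-weight mechanism can be made concrete with a toy update. This is a minimal sketch under assumed notation, not the paper's algorithm: the weighted loss is minimized over network parameters but maximized over per-point weights, so hard collocation points (large residuals) accumulate weight.

```python
import numpy as np

def self_adaptive_step(residuals, weights, lr_w=0.1):
    """One weight update of a self-adaptive PINN (minimal sketch): the
    objective sum_i m(w_i) * r_i**2 is minimized over the network
    parameters but *maximized* over the per-point weights w_i, so
    collocation points with large residuals r_i gain weight, acting
    as a soft attention mask.  Here the mask m is the softplus
    function, and the gradient-ascent step on w is written explicitly:
    d/dw [softplus(w) * r^2] = sigmoid(w) * r^2."""
    grad_w = residuals**2 / (1.0 + np.exp(-weights))
    return weights + lr_w * grad_w

# Two collocation points, one easy (small residual) and one hard:
# after a single ascent step, the hard point's weight has grown more.
w = self_adaptive_step(np.array([0.1, 2.0]), np.zeros(2))
```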

Estimates on the generalization error of Physics Informed Neural Networks (PINNs) for approximating PDEs

An abstract formalism is introduced and stability properties of the underlying PDE are leveraged to derive an estimate for the generalization error of PINNs approximating solutions of the forward problem for PDEs.

When and why PINNs fail to train: A neural tangent kernel perspective

Learning generative neural networks with physics knowledge

The proposed PhysGNN is a fully differentiable model that allows back-propagation of gradients through both numerical PDE solvers and generative neural networks, and is trained by minimizing the discrete Wasserstein distance between generated and observed probability distributions of the PDE outputs using stochastic gradient descent.
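The discrete Wasserstein objective mentioned above has a particularly simple closed form in one dimension, which is worth seeing even though PhysGNN itself handles richer output distributions; this sketch and its names are illustrative, not the paper's code.

```python
import numpy as np

def wasserstein_1d(samples_a, samples_b):
    """Discrete 1-Wasserstein distance between two equal-size 1-D
    empirical distributions.  In one dimension the optimal transport
    plan simply pairs sorted samples, so the distance is the mean
    absolute difference of the order statistics."""
    return np.mean(np.abs(np.sort(samples_a) - np.sort(samples_b)))

# Shifting an empirical sample by a constant c moves it exactly c
# in 1-Wasserstein distance, since every sorted pair shifts by c.
rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
d = wasserstein_1d(x, x + 0.5)
```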

Understanding and mitigating gradient pathologies in physics-informed neural networks

This work reviews recent advances in scientific machine learning with a specific focus on the effectiveness of physics-informed neural networks in predicting outcomes of physical systems and discovering hidden physics from noisy data and proposes a novel neural network architecture that is more resilient to gradient pathologies.

fPINNs: Fractional Physics-Informed Neural Networks

This work extends PINNs to fractional PINNs (fPINNs) to solve space-time fractional advection-diffusion equations (fractional ADEs), and demonstrates their accuracy and effectiveness in solving multi-dimensional forward and inverse problems with forcing terms whose values are only known at randomly scattered spatio-temporal coordinates (black-box forcing terms).

DeepXDE: A Deep Learning Library for Solving Differential Equations

An overview of physics-informed neural networks (PINNs), which embed a PDE into the loss of the neural network using automatic differentiation, and a new residual-based adaptive refinement (RAR) method to improve the training efficiency of PINNs.
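The residual-based adaptive refinement (RAR) idea summarized above can be sketched in a few lines; this is a hypothetical 1-D illustration of the strategy, not DeepXDE's API: score a random pool of candidate points by residual magnitude and add the worst offenders to the training set.

```python
import numpy as np

def rar_points(residual_fn, lo, hi, n_pool=1000, k=10, seed=0):
    """Residual-based adaptive refinement (RAR), sketched in 1-D:
    draw a pool of random candidate points in [lo, hi], evaluate the
    PDE residual there, and return the k points with the largest
    residual magnitude, to be added to the training set."""
    pool = np.random.default_rng(seed).uniform(lo, hi, size=n_pool)
    worst = np.argsort(np.abs(residual_fn(pool)))[-k:]
    return pool[worst]

# With a residual that grows toward x = 1, RAR concentrates the new
# training points near the right end of the domain.
new_pts = rar_points(lambda x: x**2, 0.0, 1.0)
```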

Learning in Modal Space: Solving Time-Dependent Stochastic PDEs Using Physics-Informed Neural Networks

Two new Physics-Informed Neural Networks (PINNs) are proposed for solving time-dependent SPDEs, namely the NN-DO/BO methods, which incorporate the DO/BO constraints into the loss function in an implicit form instead of generating explicit expressions for the temporal derivatives of the DO/BO modes.

Solving high-dimensional partial differential equations using deep learning

A deep-learning-based approach that can handle general high-dimensional parabolic PDEs using backward stochastic differential equations; the gradient of the unknown solution is approximated by neural networks, very much in the spirit of deep reinforcement learning, with the gradient acting as the policy function.

A unified deep artificial neural network approach to partial differential equations in complex geometries

DGM: A deep learning algorithm for solving partial differential equations