# NeuralPDE: Automating Physics-Informed Neural Networks (PINNs) with Error Approximations

@article{Zubov2021NeuralPDEAP, title={NeuralPDE: Automating Physics-Informed Neural Networks (PINNs) with Error Approximations}, author={Kirill Zubov and Zoe McCarthy and Yingbo Ma and Francesco Calisto and Valerio Pagliarino and Simone Azeglio and Luca Bottero and Emmanuel Luján and Valentin Sulzer and Ashutosh Bharambe and Nand Vinchhi and Kaushik Balakrishnan and Devesh Upadhyay and Chris Rackauckas}, journal={ArXiv}, year={2021}, volume={abs/2107.09443} }

Physics-informed neural networks (PINNs) are an increasingly powerful way to solve partial differential equations, generate digital twins, and create neural surrogates of physical models. In this manuscript we detail the methodologies of PINNs and showcase the types of problems a PINN software can solve. We then detail the inner workings of NeuralPDE.jl and show how a formulation structured around numerical quadrature gives rise to new loss functions which allow for adaptivity…
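
For orientation, here is a minimal sketch of how the library is driven, modeled on the 1-D Poisson example from the NeuralPDE.jl documentation; exact constructor names and keyword arguments vary across package versions, so treat this as illustrative rather than definitive:

```julia
using NeuralPDE, Lux, Optimization, OptimizationOptimJL
import ModelingToolkit: Interval

@parameters x
@variables u(..)
Dxx = Differential(x)^2

# 1-D Poisson problem: u''(x) = -sin(pi*x) on [0, 1] with zero Dirichlet BCs
eq  = Dxx(u(x)) ~ -sin(pi * x)
bcs = [u(0.0) ~ 0.0, u(1.0) ~ 0.0]
domains = [x ∈ Interval(0.0, 1.0)]

# A small fully connected network serves as the trial solution
chain = Chain(Dense(1, 16, tanh), Dense(16, 16, tanh), Dense(16, 1))

# QuadratureTraining builds the loss as a numerical quadrature of the
# PDE residual, the formulation whose error behavior the paper analyzes
discretization = PhysicsInformedNN(chain, QuadratureTraining())

@named pdesys = PDESystem(eq, bcs, domains, [x], [u(x)])
prob = discretize(pdesys, discretization)
res  = Optimization.solve(prob, BFGS(); maxiters = 500)
```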

#### References

Showing 1-10 of 59 references.

DeepXDE: A Deep Learning Library for Solving Differential Equations

- Computer Science, Physics
- AAAI Spring Symposium: MLPS
- 2020

An overview of physics-informed neural networks (PINNs), which embed a PDE into the loss of the neural network using automatic differentiation, together with a new residual-based adaptive refinement (RAR) method that improves the training efficiency of PINNs.
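
The RAR loop itself is only a few lines. A hedged sketch (DeepXDE's actual implementation is in Python and differs in detail), assuming a pointwise residual function `residual` for the trained network and a 1-D domain `[lo, hi]`:

```julia
# Residual-based adaptive refinement: propose random candidate points,
# rank them by PDE residual magnitude, and add the worst-resolved ones
# to the training set before continuing training.
function rar_step(residual, train_pts; ncandidates = 1000, nadd = 10, lo = 0.0, hi = 1.0)
    candidates = lo .+ (hi - lo) .* rand(ncandidates)
    errs  = abs.(residual.(candidates))
    worst = sortperm(errs; rev = true)[1:nadd]
    return vcat(train_pts, candidates[worst])
end
```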

Understanding and mitigating gradient pathologies in physics-informed neural networks

- Computer Science, Mathematics
- SIAM J. Sci. Comput.
- 2021

This work reviews recent advances in scientific machine learning, with a specific focus on the effectiveness of physics-informed neural networks in predicting outcomes of physical systems and discovering hidden physics from noisy data, and proposes a novel neural network architecture that is more resilient to gradient pathologies.
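
The core of the proposed mitigation is an adaptive loss weighting driven by gradient statistics. A minimal sketch of the weight-update rule, assuming the gradient vectors `g_pde` and `g_bc` of the residual and boundary loss terms have already been computed (the names and the exact statistic are illustrative):

```julia
using Statistics: mean

# When the residual-loss gradients dwarf the boundary-loss gradients,
# the boundary term is being drowned out; raise its weight λ accordingly.
function update_bc_weight(λ, g_pde, g_bc; α = 0.9)
    λ_hat = maximum(abs.(g_pde)) / mean(abs.(g_bc))
    return α * λ + (1 - α) * λ_hat   # moving average keeps updates stable
end

# total_loss = loss_pde + λ * loss_bc
```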

Quantifying total uncertainty in physics-informed neural networks for solving forward and inverse stochastic problems

- Mathematics, Physics
- J. Comput. Phys.
- 2019

A new method is proposed that endows the DNN with uncertainty quantification for both sources of uncertainty, i.e., the parametric uncertainty and the approximation uncertainty, and that can be readily applied to other types of stochastic PDEs in multiple dimensions.

Generalized Physics-Informed Learning through Language-Wide Differentiable Programming

- Computer Science
- AAAI Spring Symposium: MLPS
- 2020

This manuscript develops an infrastructure for incorporating deep learning into existing scientific computing code through differentiable programming (∂P), and describes a ∂P system that is able to take gradients of full Julia programs, making automatic differentiation a first-class language feature and compatibility with deep learning pervasive.
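
In Julia this style of ∂P is embodied by Zygote.jl (an assumption here, though it is the system these authors build on): gradients of plain functions, loops and all, with no separate graph language. A minimal sketch:

```julia
using Zygote

# An ordinary Julia function with control flow; no special DSL required.
function sumsq(xs)
    s = 0.0
    for x in xs
        s += x^2
    end
    return s
end

Zygote.gradient(sumsq, [1.0, 2.0, 3.0])[1]   # [2.0, 4.0, 6.0]
```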

Solving high-dimensional partial differential equations using deep learning

- Mathematics, Computer Science
- Proceedings of the National Academy of Sciences
- 2018

A deep learning-based approach that handles general high-dimensional parabolic PDEs by reformulating them as backward stochastic differential equations and approximating the gradient of the unknown solution with neural networks, very much in the spirit of deep reinforcement learning, with the gradient acting as the policy function.
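
A compressed sketch of that construction for the toy terminal-value problem ∂ₜu + ½σ²∂ₓₓu = 0, u(T, x) = g(x): here `gradnet(t, x)` is a placeholder for the network approximating ∂ₓu, and real implementations minimize this loss jointly over the initial value `u0` and the network weights:

```julia
# Forward simulation of the backward SDE: y tracks u(t, X_t) along each
# Brownian path, stepped with the network's gradient estimate, and is
# penalized for missing the terminal condition g(X_T).
function bsde_terminal_loss(u0, gradnet, g; x0 = 0.0, σ = 1.0, T = 1.0, N = 20, M = 256)
    dt, loss = T / N, 0.0
    for _ in 1:M                               # Monte Carlo over paths
        x, y = x0, u0
        for n in 0:N-1
            dW = sqrt(dt) * randn()
            y += gradnet(n * dt, x) * σ * dW   # Euler step of the BSDE
            x += σ * dW                        # Euler-Maruyama step for X_t
        end
        loss += (y - g(x))^2
    end
    return loss / M
end
```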

A proof that rectified deep neural networks overcome the curse of dimensionality in the numerical approximation of semilinear heat equations

- Mathematics, Computer Science
- 2019

It is proved for the first time that, in the case of semilinear heat equations with gradient-independent nonlinearities, the number of parameters of the employed deep neural networks grows at most polynomially in both the PDE dimension and the reciprocal of the prescribed approximation accuracy.

High-performance symbolic-numerics via multiple dispatch

- Computer Science
- ArXiv
- 2021

This work details an underlying abstract term interface which allows for speed without sacrificing generality, and shows that by formalizing a generic API on actions independent of implementation, optimized data structures can be retroactively added to the system without changing the pre-existing term rewriters.
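
The flavor of that interface is easy to sketch: generic code is written purely against actions like `istree`, `operation`, and `arguments` (names borrowed from SymbolicUtils.jl), so any concrete term representation that implements them works. The types below are hypothetical stand-ins:

```julia
abstract type Term end
struct Sym  <: Term; name::Symbol; end
struct Call <: Term; op::Function; args::Vector{Term}; end

istree(t::Term)    = t isa Call
operation(t::Call) = t.op
arguments(t::Call) = t.args

# Generic traversal written only against the interface: it keeps working
# if the concrete term types are later swapped for optimized ones.
nnodes(t::Term) = istree(t) ? 1 + sum(nnodes, arguments(t)) : 1

expr = Call(+, Term[Sym(:x), Call(*, Term[Sym(:x), Sym(:y)])])
nnodes(expr)   # 5
```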

DifferentialEquations.jl – A Performant and Feature-Rich Ecosystem for Solving Differential Equations in Julia

- Computer Science
- 2017

DifferentialEquations.jl offers a unified user interface to solve and analyze various forms of differential equations without sacrificing features or performance, and also serves as a feature-rich, highly performant suite for algorithm testing and benchmarking.
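
The unified interface is a one-liner to demonstrate; this mirrors the package's introductory example, where swapping the algorithm argument switches integrators without touching the problem definition:

```julia
using DifferentialEquations

f(u, p, t) = 1.01 * u                    # du/dt = 1.01u, exponential growth
prob = ODEProblem(f, 0.5, (0.0, 1.0))    # initial value 0.5 on t ∈ [0, 1]
sol  = solve(prob, Tsit5())              # Tsit5 is one of many algorithms
sol(0.5)                                 # dense-output interpolation at t = 0.5
```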

Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning

- Mathematics, Computer Science
- ICML
- 2016

A new theoretical framework is developed that casts dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes, which mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy.
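
In practice this yields Monte Carlo dropout: keep dropout active at test time and read uncertainty off the spread of repeated stochastic forward passes. A sketch, where `predict_with_dropout` is a hypothetical stand-in for a forward pass that resamples its dropout masks on every call:

```julia
# Predictive mean and variance from repeated stochastic forward passes.
function mc_dropout(predict_with_dropout, x; samples = 100)
    ys = [predict_with_dropout(x) for _ in 1:samples]
    μ  = sum(ys) / samples
    σ² = sum(abs2, ys .- μ) / (samples - 1)
    return μ, σ²
end
```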

Cuba - a library for multidimensional numerical integration

- Computer Science, Physics
- Comput. Phys. Commun.
- 2005

The Cuba library provides new implementations of four general-purpose multidimensional integration algorithms: Vegas, Suave, Divonne, and Cuhre, all of which can integrate vector integrands and have very similar Fortran, C/C++, and Mathematica interfaces.
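
This is relevant here because the quadrature-based PINN loss is exactly such an integral. A sketch of a Cuba call through the Cuba.jl Julia wrapper (the paper itself documents the Fortran, C/C++, and Mathematica interfaces): integrands map a point `x` of the unit hypercube to output slots `f`:

```julia
using Cuba

# ∫∫ sin(x)cos(y) dx dy over [0,1]² with Vegas; Suave, Divonne, and
# Cuhre share the same calling convention.
result = vegas((x, f) -> f[1] = sin(x[1]) * cos(x[2]), 2, 1)
result.integral[1]   # ≈ (1 - cos(1)) * sin(1) ≈ 0.3868
```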