Corpus ID: 236134289

NeuralPDE: Automating Physics-Informed Neural Networks (PINNs) with Error Approximations

  Kirill Zubov, Zoe McCarthy, Yingbo Ma, Francesco Calisto, Valerio Pagliarino, Simone Azeglio, Luca Bottero, Emmanuel Luján, Valentin Sulzer, Ashutosh Bharambe, Nand Vinchhi, Kaushik Balakrishnan, Devesh Upadhyay, Chris Rackauckas
Physics-informed neural networks (PINNs) are an increasingly powerful way to solve partial differential equations, generate digital twins, and create neural surrogates of physical models. In this manuscript we detail the methodologies of PINNs and showcase the types of problems a PINN software can solve. We then detail the inner workings of NeuralPDE.jl and show how a formulation structured around numerical quadrature gives rise to new loss functions which allow for adaptivity…
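The abstract above contrasts point-collocation PINN losses with a quadrature-based formulation. The following is a minimal sketch of that idea for the toy ODE u'(x) = -u(x), u(0) = 1; the tiny tanh "network", the finite-difference derivative (standing in for automatic differentiation), and all names are illustrative assumptions, not NeuralPDE.jl's actual implementation.

```python
import numpy as np

def net(x, w):
    # one hidden tanh layer with 3 units: w = (W1, b1, W2)
    W1, b1, W2 = w
    return np.tanh(np.outer(x, W1) + b1) @ W2

def trial(x, w):
    # trial solution that satisfies the condition u(0) = 1 by construction
    return 1.0 + x * net(x, w)

def residual(x, w, h=1e-5):
    # central finite difference stands in for automatic differentiation
    du = (trial(x + h, w) - trial(x - h, w)) / (2 * h)
    return du + trial(x, w)  # r(x) = u'(x) + u(x), zero for the exact solution

def collocation_loss(w, xs):
    # point-collocation loss: mean squared residual at sample points
    return np.mean(residual(xs, w) ** 2)

def quadrature_loss(w, xs):
    # quadrature-style loss: trapezoid rule approximating the integral of r(x)^2
    r2 = residual(xs, w) ** 2
    dx = xs[1] - xs[0]
    return dx * (r2[0] / 2 + r2[1:-1].sum() + r2[-1] / 2)

rng = np.random.default_rng(0)
w = (rng.normal(size=3), rng.normal(size=3), rng.normal(size=3))
xs = np.linspace(0.0, 1.0, 50)
print(collocation_loss(w, xs), quadrature_loss(w, xs))
```

Minimizing either loss over the network parameters drives the residual toward zero; the quadrature view is what lets error estimates from the integration rule feed back into adaptive training.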


DeepXDE: A Deep Learning Library for Solving Differential Equations
An overview of physics-informed neural networks (PINNs), which embed a PDE into the loss of the neural network using automatic differentiation, and a new residual-based adaptive refinement (RAR) method to improve the training efficiency of PINNs.
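The residual-based adaptive refinement idea summarized above can be sketched as follows; `residual_fn`, the 1-D domain, and all parameter names are illustrative stand-ins, not DeepXDE's API.

```python
import numpy as np

def rar_step(train_pts, residual_fn, n_candidates=1000, k=10, rng=None):
    # Evaluate the PDE residual on a dense random candidate set and add the
    # k points with the largest residual magnitude to the training set.
    rng = rng or np.random.default_rng(0)
    candidates = rng.uniform(0.0, 1.0, size=n_candidates)
    r = np.abs(residual_fn(candidates))
    worst = candidates[np.argsort(r)[-k:]]
    return np.concatenate([train_pts, worst])

pts = np.linspace(0.0, 1.0, 20)
pts = rar_step(pts, lambda x: np.sin(8 * np.pi * x))  # toy residual function
print(len(pts))  # → 30
```

Repeating this step during training concentrates collocation points where the network currently violates the PDE most.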
Understanding and mitigating gradient pathologies in physics-informed neural networks
This work reviews recent advances in scientific machine learning, with a specific focus on the effectiveness of physics-informed neural networks in predicting outcomes of physical systems and discovering hidden physics from noisy data, and proposes a novel neural network architecture that is more resilient to gradient pathologies.
Quantifying total uncertainty in physics-informed neural networks for solving forward and inverse stochastic problems
A new method is proposed that endows the DNN with quantification of both sources of uncertainty, i.e., parametric uncertainty and approximation uncertainty, and that can be readily applied to other types of stochastic PDEs in multiple dimensions.
Generalized Physics-Informed Learning through Language-Wide Differentiable Programming
This manuscript develops an infrastructure for incorporating deep learning into existing scientific computing code through differentiable programming (∂P), and describes a ∂P system able to take gradients of full Julia programs, making automatic differentiation a first-class language feature and compatibility with deep learning pervasive.
Solving high-dimensional partial differential equations using deep learning
A deep learning-based approach that can handle general high-dimensional parabolic PDEs using backward stochastic differential equations; the gradient of the unknown solution is approximated by neural networks, very much in the spirit of deep reinforcement learning, with the gradient acting as the policy function.
A proof that rectified deep neural networks overcome the curse of dimensionality in the numerical approximation of semilinear heat equations
It is proved for the first time that, in the case of semilinear heat equations with gradient-independent nonlinearities, the number of parameters of the employed deep neural networks grows at most polynomially in both the PDE dimension and the reciprocal of the prescribed approximation accuracy.
High-performance symbolic-numerics via multiple dispatch
This work details an underlying abstract term interface which allows for speed without sacrificing generality, and shows that, by formalizing a generic API on actions independent of implementation, optimized data structures can be retroactively added to the system without changing the pre-existing term rewriters.
DifferentialEquations.jl – A Performant and Feature-Rich Ecosystem for Solving Differential Equations in Julia
DifferentialEquations.jl offers a unified user interface to solve and analyze various forms of differential equations without sacrificing features or performance, and provides a feature-rich, highly performant algorithm testing and benchmarking suite.
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
A new theoretical framework is developed casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes, which mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy.
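The practical upshot of the framework summarized above is Monte Carlo dropout: keep dropout active at prediction time and average several stochastic forward passes, using their spread as an uncertainty estimate. This is a hedged sketch of that recipe; the tiny network, its random weights, and the dropout rate are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(42)
W1, W2 = rng.normal(size=(1, 16)), rng.normal(size=(16, 1))

def forward(x, p=0.5):
    # dropout stays on at prediction time; inverted-dropout scaling by 1/(1-p)
    h = np.tanh(x @ W1)
    mask = rng.random(h.shape) > p
    return (h * mask / (1 - p)) @ W2

x = np.array([[0.3]])
samples = np.array([forward(x)[0, 0] for _ in range(200)])
mean, std = samples.mean(), samples.std()
print(f"prediction {mean:.3f} with spread {std:.3f}")
```

The sample mean approximates the predictive mean of the corresponding deep Gaussian process, and the sample standard deviation serves as the model-uncertainty estimate.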
Cuba - a library for multidimensional numerical integration
  • T. Hahn
  • Comput. Phys. Commun.
  • 2005
The Cuba library provides new implementations of four general-purpose multidimensional integration algorithms: Vegas, Suave, Divonne, and Cuhre. These can integrate vector integrands and have very similar Fortran, C/C++, and Mathematica interfaces.
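For intuition, the task Cuba's algorithms solve can be illustrated with plain Monte Carlo integration of a vector integrand over the unit hypercube; this is a far simpler scheme than Vegas, Suave, Divonne, or Cuhre, and the function names here are illustrative, not Cuba's interface.

```python
import numpy as np

def mc_integrate(f, dim, n=200_000, rng=None):
    # crude Monte Carlo estimate of the integral of f over [0, 1]^dim
    rng = rng or np.random.default_rng(1)
    x = rng.random((n, dim))   # uniform samples on the unit hypercube
    return f(x).mean(axis=0)   # one estimate per integrand component

# two-component vector integrand over [0, 1]^3:
# f1 = x1 + x2 + x3 (integral 1.5), f2 = exp(-x1)exp(-x2)exp(-x3)
# (integral (1 - 1/e)^3, roughly 0.2525)
f = lambda x: np.stack([x.sum(axis=1), np.exp(-x).prod(axis=1)], axis=1)
est = mc_integrate(f, dim=3)
print(est)
```

Cuba's adaptive algorithms improve on this by concentrating samples where the integrand varies most (Vegas, Suave, Divonne) or by deterministic cubature rules (Cuhre).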