Physics Informed Deep Learning (Part II): Data-driven Discovery of Nonlinear Partial Differential Equations
@article{Raissi2017PhysicsID,
  title   = {Physics Informed Deep Learning (Part II): Data-driven Discovery of Nonlinear Partial Differential Equations},
  author  = {Maziar Raissi and Paris Perdikaris and George Em Karniadakis},
  journal = {ArXiv},
  year    = {2017},
  volume  = {abs/1711.10566}
}
We introduce physics informed neural networks -- neural networks that are trained to solve supervised learning tasks while respecting any given law of physics described by general nonlinear partial differential equations. Depending on whether the available data is scattered in space-time or arranged in fixed temporal snapshots, we introduce two main classes of algorithms, namely continuous time and discrete time models. The effectiveness of our approach is demonstrated using a wide range of…
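The central idea is a composite loss: a data-misfit term plus a PDE-residual term evaluated at the network's output. The following is a minimal NumPy sketch of that loss, not the paper's implementation: the tiny fixed-weight network, the collocation points, and the use of central finite differences in place of automatic differentiation are all simplifications, and the residual shown is for a Burgers-type equation u_t + λ₁ u u_x − λ₂ u_xx with trainable parameters λ₁, λ₂.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny MLP surrogate for u(t, x); weights are fixed at random purely
# for illustration (in practice they are trained by minimizing the loss).
W1 = rng.normal(size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(size=(16, 1)); b2 = np.zeros(1)

def u(t, x):
    h = np.tanh(np.stack([t, x], axis=-1) @ W1 + b1)
    return (h @ W2 + b2)[..., 0]

def pde_residual(t, x, lam1, lam2, eps=1e-4):
    # Derivatives of the surrogate via central finite differences
    # (the paper uses automatic differentiation instead).
    u_t  = (u(t + eps, x) - u(t - eps, x)) / (2 * eps)
    u_x  = (u(t, x + eps) - u(t, x - eps)) / (2 * eps)
    u_xx = (u(t, x + eps) - 2 * u(t, x) + u(t, x - eps)) / eps**2
    # Burgers-type residual: f = u_t + lam1 * u * u_x - lam2 * u_xx
    return u_t + lam1 * u(t, x) * u_x - lam2 * u_xx

def pinn_loss(t, x, u_obs, lam1, lam2):
    mse_u = np.mean((u(t, x) - u_obs) ** 2)               # data misfit
    mse_f = np.mean(pde_residual(t, x, lam1, lam2) ** 2)  # physics misfit
    return mse_u + mse_f

# Synthetic scattered space-time observations with noise.
t = rng.uniform(0, 1, 50)
x = rng.uniform(-1, 1, 50)
u_obs = u(t, x) + 0.01 * rng.normal(size=50)
loss = pinn_loss(t, x, u_obs, 1.0, 0.01 / np.pi)
```

In the discovery setting, λ₁ and λ₂ are optimized jointly with the network weights, so the recovered coefficients identify the governing PDE.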
453 Citations
Deep Hidden Physics Models: Deep Learning of Nonlinear Partial Differential Equations
- Computer Science, J. Mach. Learn. Res.
- 2018
This work puts forth a deep learning approach for discovering nonlinear partial differential equations from scattered and potentially noisy observations in space and time, approximating both the unknown solution and the nonlinear dynamics with two deep neural networks.
Data Driven Solutions and Discoveries in Mechanics Using Physics Informed Neural Network
- Computer Science, Physics
- 2020
This study shows that PINN provides an attractive alternative to solve traditional engineering problems and suggests the bright prospect of the physics-informed surrogate models that are fully differentiable with respect to all input coordinates and free parameters.
Data-driven Discovery of Partial Differential Equations for Multiple-Physics Electromagnetic Problem
- Computer Science
- 2019
A deep learning neural network is combined with sparse regression to solve for the hidden governing equations in a multiple-physics EM problem, and Pareto analysis is adopted to keep the inversion as precise and simple as possible.
PhICNet: Physics-Incorporated Convolutional Recurrent Neural Networks for Modeling Dynamical Systems
- Computer Science, ArXiv
- 2020
This paper formulates the model, PhICNet, as a convolutional recurrent neural network that is end-to-end trainable for spatiotemporal evolution prediction of dynamical systems, and demonstrates the long-term prediction capability of the model.
Data-Driven Deep Learning of Partial Differential Equations in Modal Space
- Mathematics, Computer Science, J. Comput. Phys.
- 2020
A Mesh-Free, Physics-Constrained Approach to solve Partial Differential Equations with a Deep Neural Network
- Computer Science
- 2021
A deep, feed-forward, fully-connected neural network is used to approximate the solution of the partial differential equation, where the initial and boundary conditions are either hard or soft assigned, and the resulting physics-informed surrogate model learns to satisfy the differential operator and the initial and boundary conditions.
The Old and the New: Can Physics-Informed Deep-Learning Replace Traditional Linear Solvers?
- Computer Science, Frontiers in Big Data
- 2021
This work evaluates the potential of Physics-Informed Neural Networks as linear solvers in the case of the Poisson equation, an omnipresent equation in scientific computing, and proposes hybrid strategies combining traditional linear solver approaches with emerging deep-learning techniques.
Physics-Informed Deep-Learning for Scientific Computing
- Computer Science, ArXiv
- 2021
This work evaluates the potential of PINNs to replace or accelerate traditional approaches for solving linear systems, and shows how to integrate PINNs with traditional scientific computing approaches such as multigrid and Gauss-Seidel methods.
Learning in Modal Space: Solving Time-Dependent Stochastic PDEs Using Physics-Informed Neural Networks
- Computer Science, SIAM J. Sci. Comput.
- 2020
Two new Physics-Informed Neural Networks (PINNs) are proposed for solving time-dependent SPDEs, namely the NN-DO/BO methods, which incorporate the DO/BO constraints into the loss function in an implicit form instead of generating explicit expressions for the temporal derivatives of the DO/BO modes.
Finite Difference Neural Networks: Fast Prediction of Partial Differential Equations
- Computer Science, Mathematics, 2020 19th IEEE International Conference on Machine Learning and Applications (ICMLA)
- 2020
This paper proposes a novel neural network framework, finite difference neural networks (FD-Net), to learn partial differential equations from data, and to iteratively estimate the future dynamical behavior using only a few trainable parameters.
References
Showing 1-10 of 23 references
Hidden physics models: Machine learning of nonlinear partial differential equations
- Computer Science, J. Comput. Phys.
- 2018
Inferring solutions of differential equations using noisy multi-fidelity data
- Computer Science, J. Comput. Phys.
- 2017
Numerical Gaussian Processes for Time-Dependent and Nonlinear Partial Differential Equations
- Computer Science, Mathematics, SIAM J. Sci. Comput.
- 2018
The method circumvents the need for spatial discretization of the differential operators by proper placement of Gaussian process priors and is an attempt to construct structured and data-efficient learning machines, which are explicitly informed by the underlying physics that possibly generated the observed data.
Machine learning of linear differential equations using Gaussian processes
- Mathematics, Computer Science, J. Comput. Phys.
- 2017
Data-driven discovery of partial differential equations
- Mathematics, Science Advances
- 2017
The sparse regression method is computationally efficient, robust, and demonstrated to work on a variety of canonical problems spanning a number of scientific domains including Navier-Stokes, the quantum harmonic oscillator, and the diffusion equation.
Discovering governing equations from data by sparse identification of nonlinear dynamical systems
- Computer Science, Proceedings of the National Academy of Sciences
- 2016
This work develops a novel framework to discover governing equations underlying a dynamical system simply from data measurements, leveraging advances in sparsity techniques and machine learning, and uses sparse regression to determine the fewest terms in the dynamic governing equations required to accurately represent the data.
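The sparse-regression step described above is often implemented as sequentially thresholded least squares: fit all candidate terms, zero out coefficients below a threshold, and refit on the surviving terms. A minimal sketch (the library `[1, x, x²]`, the threshold, and the toy system `dx/dt = -2x` are illustrative choices, not the paper's setup):

```python
import numpy as np

def stlsq(Theta, dXdt, lam=0.1, n_iter=10):
    """Sequentially thresholded least squares (a sketch of the SINDy idea).

    Theta: (m, p) library of candidate terms evaluated at m samples.
    dXdt:  (m, d) measured time derivatives.
    Returns a sparse coefficient matrix Xi with dXdt ~ Theta @ Xi.
    """
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(Xi) < lam          # coefficients to prune
        Xi[small] = 0.0
        for k in range(dXdt.shape[1]):    # refit each state on surviving terms
            big = ~small[:, k]
            if big.any():
                Xi[big, k] = np.linalg.lstsq(Theta[:, big], dXdt[:, k],
                                             rcond=None)[0]
    return Xi

# Toy example: recover dx/dt = -2x from a library [1, x, x^2].
x = np.linspace(-1.0, 1.0, 100)
Theta = np.column_stack([np.ones_like(x), x, x**2])
dXdt = (-2.0 * x).reshape(-1, 1)
Xi = stlsq(Theta, dXdt)
```

With clean data the recovered coefficients are (approximately) `[0, -2, 0]`, i.e. only the `x` term survives the thresholding.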
Automatic differentiation in machine learning: a survey
- Computer Science, J. Mach. Learn. Res.
- 2017
By precisely defining the main differentiation techniques and their interrelationships, this work aims to bring clarity to the usage of the terms "autodiff", "automatic differentiation", and "symbolic differentiation" as these are encountered more and more in machine learning settings.
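To make the distinction concrete, forward-mode automatic differentiation can be sketched with dual numbers: each value carries its derivative, and every operation propagates both by the chain rule. This is a toy illustration, not any particular library's API:

```python
import math
from dataclasses import dataclass

@dataclass
class Dual:
    val: float  # primal value
    dot: float  # derivative, propagated alongside by the chain rule

    def __add__(self, other):
        other = as_dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)

    __radd__ = __add__

    def __mul__(self, other):
        other = as_dual(other)
        # Product rule: (uv)' = u v' + u' v
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)

    __rmul__ = __mul__

def as_dual(x):
    return x if isinstance(x, Dual) else Dual(float(x), 0.0)

def sin(d):
    return Dual(math.sin(d.val), math.cos(d.val) * d.dot)

# Differentiate f(x) = x * sin(x) + 3x at x = 2 in one forward pass.
x = Dual(2.0, 1.0)       # seed with dx/dx = 1
y = x * sin(x) + 3 * x   # y.dot holds f'(2) = sin(2) + 2 cos(2) + 3
```

Unlike symbolic differentiation, no expression for f' is ever built; unlike numerical differentiation, the result is exact to machine precision.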
Opening the Black Box of Deep Neural Networks via Information
- Computer Science, ArXiv
- 2017
This work demonstrates the effectiveness of the Information-Plane visualization of DNNs and shows that the training time is dramatically reduced when adding more hidden layers, and that the main advantage of the hidden layers is computational.
Adam: A Method for Stochastic Optimization
- Computer Science, ICLR
- 2015
This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
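The update rule behind those adaptive moment estimates is compact enough to state directly. A minimal sketch of one Adam step (the quadratic toy objective and the hyperparameter values are illustrative; the defaults follow the paper's recommendations):

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update; m and v are running first/second moment estimates."""
    m = b1 * m + (1 - b1) * grad        # EMA of the gradient
    v = b2 * v + (1 - b2) * grad**2     # EMA of the squared gradient
    m_hat = m / (1 - b1**t)             # bias correction (t starts at 1)
    v_hat = v / (1 - b2**t)
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage: minimize f(theta) = theta**2, whose gradient is 2 * theta.
theta, m, v = 1.0, 0.0, 0.0
for t in range(1, 2001):
    theta, m, v = adam_step(theta, 2.0 * theta, m, v, t)
```

Note the per-parameter scaling by `sqrt(v_hat)`: step sizes adapt to the historical gradient magnitude, which is what makes the method robust across stochastic objectives.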
The Loss Surfaces of Multilayer Networks
- Computer Science, AISTATS
- 2015
It is proved that recovering the global minimum becomes harder as the network size increases, and that this is in practice irrelevant, as the global minimum often leads to overfitting.