Corpus ID: 4469098

Physics Informed Deep Learning (Part II): Data-driven Discovery of Nonlinear Partial Differential Equations

@article{Raissi2017PhysicsID,
  title={Physics Informed Deep Learning (Part II): Data-driven Discovery of Nonlinear Partial Differential Equations},
  author={Maziar Raissi and Paris Perdikaris and George Em Karniadakis},
  journal={ArXiv},
  year={2017},
  volume={abs/1711.10566}
}
We introduce physics informed neural networks -- neural networks that are trained to solve supervised learning tasks while respecting any given law of physics described by general nonlinear partial differential equations. Depending on whether the available data is scattered in space-time or arranged in fixed temporal snapshots, we introduce two main classes of algorithms, namely continuous time and discrete time models. The effectiveness of our approach is demonstrated using a wide range of…
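To make the continuous-time discovery setting concrete, here is a minimal sketch for the Burgers' equation example treated in the paper, u_t + lambda1 u u_x - lambda2 u_xx = 0, where the unknown coefficients are learned jointly with the network. This uses PyTorch with illustrative names (PINN, pde_residual, loss_fn); it is not the authors' reference implementation.

```python
import torch
import torch.nn as nn

# Minimal continuous-time PINN for discovering Burgers' equation
# u_t + lam1 * u * u_x - lam2 * u_xx = 0 from scattered (t, x, u) data.
class PINN(nn.Module):
    def __init__(self, width=20, depth=8):
        super().__init__()
        layers = [nn.Linear(2, width), nn.Tanh()]
        for _ in range(depth - 1):
            layers += [nn.Linear(width, width), nn.Tanh()]
        layers += [nn.Linear(width, 1)]
        self.net = nn.Sequential(*layers)
        # Unknown PDE coefficients, trained alongside the network weights.
        self.lam1 = nn.Parameter(torch.tensor(0.0))
        self.lam2 = nn.Parameter(torch.tensor(0.0))

    def forward(self, t, x):
        return self.net(torch.cat([t, x], dim=1))

def pde_residual(model, t, x):
    # Automatic differentiation supplies the PDE derivatives.
    t = t.requires_grad_(True)
    x = x.requires_grad_(True)
    u = model(t, x)
    ones = torch.ones_like(u)
    u_t = torch.autograd.grad(u, t, grad_outputs=ones, create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, grad_outputs=ones, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, grad_outputs=torch.ones_like(u_x),
                               create_graph=True)[0]
    return u_t + model.lam1 * u * u_x - model.lam2 * u_xx

# Loss = data misfit + PDE residual, the two terms of the continuous-time model.
def loss_fn(model, t, x, u_obs):
    u_pred = model(t, x)
    f = pde_residual(model, t, x)
    return ((u_pred - u_obs) ** 2).mean() + (f ** 2).mean()
```

Minimizing this loss drives the network toward the observed data while the residual term forces lam1 and lam2 toward the coefficients of the hidden PDE.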

Citations

Deep Hidden Physics Models: Deep Learning of Nonlinear Partial Differential Equations

  • M. Raissi
  • J. Mach. Learn. Res.
  • 2018
This work puts forth a deep learning approach for discovering nonlinear partial differential equations from scattered and potentially noisy observations in space and time, approximating both the unknown solution and the unknown nonlinear dynamics with two deep neural networks.
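A rough sketch of the two-network idea, assuming PyTorch and illustrative names (u_net for the solution, N_net for the dynamics):

```python
import torch
import torch.nn as nn

def mlp(n_in, n_out, width=32, depth=3):
    layers = [nn.Linear(n_in, width), nn.Tanh()]
    for _ in range(depth - 1):
        layers += [nn.Linear(width, width), nn.Tanh()]
    return nn.Sequential(*layers, nn.Linear(width, n_out))

u_net = mlp(2, 1)   # approximates the unknown solution u(t, x)
N_net = mlp(3, 1)   # approximates the unknown dynamics N(u, u_x, u_xx)

def grad(y, x):
    return torch.autograd.grad(y, x, grad_outputs=torch.ones_like(y),
                               create_graph=True)[0]

def residual(t, x):
    t, x = t.requires_grad_(True), x.requires_grad_(True)
    u = u_net(torch.cat([t, x], dim=1))
    u_t, u_x = grad(u, t), grad(u, x)
    u_xx = grad(u_x, x)
    # A learned right-hand side replaces a hand-written PDE.
    return u_t - N_net(torch.cat([u, u_x, u_xx], dim=1))
```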

Data Driven Solutions and Discoveries in Mechanics Using Physics Informed Neural Network

This study shows that PINNs provide an attractive alternative for solving traditional engineering problems and points to the promise of physics-informed surrogate models that are fully differentiable with respect to all input coordinates and free parameters.

Data-driven Discovery of Partial Differential Equations for Multiple-Physics Electromagnetic Problem

A deep learning neural network is combined with sparse regression to recover the hidden governing equations of a multiple-physics EM problem, and Pareto analysis is adopted to keep the inversion as precise and simple as possible.

PhICNet: Physics-Incorporated Convolutional Recurrent Neural Networks for Modeling Dynamical Systems

This paper formulates PhICNet as a convolutional recurrent neural network that is end-to-end trainable for spatiotemporal evolution prediction of dynamical systems, and demonstrates the model's long-term prediction capability.

Data-Driven Deep Learning of Partial Differential Equations in Modal Space

A Mesh-Free, Physics-Constrained Approach to solve Partial Differential Equations with a Deep Neural Network

A deep, feed-forward, fully-connected neural network is used to approximate the solution of the partial differential equation, with the initial and boundary conditions assigned in either a hard or a soft manner; the resulting physics-informed surrogate model learns to satisfy the differential operator and the initial and boundary conditions.
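A minimal sketch of the hard versus soft assignment on a 1D interval, assuming PyTorch; the ansatz and penalty below are illustrative, not the paper's exact construction:

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
a, b = 0.0, 1.0  # illustrative Dirichlet values u(0) = a, u(1) = b

# Hard assignment: the ansatz satisfies the boundary conditions by
# construction, so no boundary term is needed in the loss.
def u_hard(x):
    return a * (1 - x) + b * x + x * (1 - x) * net(x)

# Soft assignment: the boundary conditions enter the loss as a penalty
# added to the PDE residual term.
def bc_penalty(u):
    x0 = torch.zeros(1, 1)
    x1 = torch.ones(1, 1)
    return (u(x0) - a) ** 2 + (u(x1) - b) ** 2
```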

The Old and the New: Can Physics-Informed Deep-Learning Replace Traditional Linear Solvers?

This work evaluates the potential of Physics-Informed Neural Networks as linear solvers for the Poisson equation, an omnipresent equation in scientific computing, and proposes hybrid strategies that combine traditional linear solvers with emerging deep-learning techniques.

Physics-Informed Deep-Learning for Scientific Computing

This work evaluates the potential of PINNs to replace or accelerate traditional approaches for solving linear systems, and shows how to integrate PINNs with traditional scientific computing methods such as multigrid and Gauss-Seidel.
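One hypothetical hybrid along these lines: use a PINN prediction as the initial guess for a classical stationary iteration. A NumPy sketch for the 1D Poisson problem -u'' = f, under that assumption:

```python
import numpy as np

# Refine a (hypothetical) PINN prediction with Gauss-Seidel sweeps on a
# uniform grid with spacing h; the standard 3-point Laplacian stencil gives
# the interior update u[i] = (u[i-1] + u[i+1] + h^2 f[i]) / 2.
def gauss_seidel_refine(u0, f, h, sweeps=100):
    u = u0.copy()                      # PINN output as the initial guess
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u
```

The better the PINN guess, the fewer sweeps the classical solver needs, which is the intuition behind such hybrids.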

Learning in Modal Space: Solving Time-Dependent Stochastic PDEs Using Physics-Informed Neural Networks

Two new Physics-Informed Neural Networks (PINNs) are proposed for solving time-dependent SPDEs, namely the NN-DO/BO methods, which incorporate the DO/BO constraints into the loss function in an implicit form instead of generating explicit expressions for the temporal derivatives of the DO/BO modes.

Finite Difference Neural Networks: Fast Prediction of Partial Differential Equations

This paper proposes a novel neural network framework, finite difference neural networks (FD-Net), to learn partial differential equations from data and to iteratively estimate future dynamical behavior using only a few trainable parameters.
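A sketch of the general idea, assuming PyTorch: a small convolution plays the role of a learned finite-difference stencil, and repeated explicit steps roll the prediction forward. Names and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

class FDStep(nn.Module):
    def __init__(self, kernel_size=5, dt=1e-3):
        super().__init__()
        # The convolution weights are the learned stencil coefficients.
        self.stencil = nn.Conv1d(1, 1, kernel_size,
                                 padding=kernel_size // 2, bias=False)
        self.dt = dt

    def forward(self, u):              # u: (batch, 1, n_grid)
        # Explicit Euler step with the learned spatial operator.
        return u + self.dt * self.stencil(u)

def rollout(model, u0, n_steps):
    # Iterating the step predicts future snapshots from the current state.
    u, states = u0, []
    for _ in range(n_steps):
        u = model(u)
        states.append(u)
    return torch.stack(states)
```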
...

References

Showing 1-10 of 23 references

Hidden physics models: Machine learning of nonlinear partial differential equations

Inferring solutions of differential equations using noisy multi-fidelity data

Numerical Gaussian Processes for Time-Dependent and Nonlinear Partial Differential Equations

The method circumvents the need for spatial discretization of the differential operators by proper placement of Gaussian process priors, in an attempt to construct structured and data-efficient learning machines that are explicitly informed by the underlying physics that may have generated the observed data.

Machine learning of linear differential equations using Gaussian processes

Data-driven discovery of partial differential equations

The sparse regression method is computationally efficient, robust, and demonstrated to work on a variety of canonical problems spanning a number of scientific domains including Navier-Stokes, the quantum harmonic oscillator, and the diffusion equation.

Discovering governing equations from data by sparse identification of nonlinear dynamical systems

This work develops a novel framework to discover the governing equations underlying a dynamical system simply from data measurements, leveraging advances in sparsity techniques and machine learning, and uses sparse regression to determine the fewest terms in the governing equations required to accurately represent the data.
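The computational core shared by this work and the PDE-FIND entry above is sequential thresholded least squares. A minimal NumPy sketch, with an illustrative candidate library Theta:

```python
import numpy as np

# Theta holds candidate terms evaluated on the data (e.g. columns for
# u, u_x, u*u_x, u_xx) and dudt holds the measured time derivative;
# the returned coefficient vector xi selects the few active terms.
def stls(Theta, dudt, threshold=0.1, iters=10):
    xi = np.linalg.lstsq(Theta, dudt, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < threshold          # prune negligible terms
        xi[small] = 0.0
        big = ~small
        if big.any():
            # Refit the surviving terms by ordinary least squares.
            xi[big] = np.linalg.lstsq(Theta[:, big], dudt, rcond=None)[0]
    return xi
```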

Automatic differentiation in machine learning: a survey

By precisely defining the main differentiation techniques and their interrelationships, this work aims to bring clarity to the usage of the terms "autodiff", "automatic differentiation", and "symbolic differentiation" as these are encountered more and more in machine learning settings.
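A dual-number toy example makes the distinction concrete: forward-mode autodiff propagates exact derivatives through ordinary arithmetic, without building symbolic expressions or taking finite differences. This miniature Dual class is illustrative only:

```python
# Dual numbers carry a value and a derivative through each operation.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule, applied one operation at a time.
        return Dual(self.val * o.val,
                    self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

x = Dual(3.0, 1.0)            # seed dx/dx = 1
y = x * x + 2 * x + 1         # y = x^2 + 2x + 1
print(y.val, y.dot)           # 16.0 8.0  (derivative 2x + 2 at x = 3)
```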

Opening the Black Box of Deep Neural Networks via Information

This work demonstrates the effectiveness of the Information-Plane visualization of DNNs, showing that training time is dramatically reduced when more hidden layers are added and that the main advantage of the hidden layers is computational.

Adam: A Method for Stochastic Optimization

This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
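For reference, a single Adam update implemented directly from the description above (NumPy; the function name is illustrative):

```python
import numpy as np

# One Adam step: bias-corrected first and second moment estimates of the
# gradient scale the parameter update.
def adam_step(theta, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * g          # first moment (mean of gradients)
    v = b2 * v + (1 - b2) * g * g      # second moment (uncentered variance)
    m_hat = m / (1 - b1 ** t)          # bias correction, t = step count >= 1
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```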

The Loss Surfaces of Multilayer Networks

It is proved that recovering the global minimum becomes harder as the network size increases, and that this is irrelevant in practice, since the global minimum often leads to overfitting.