Corpus ID: 237386144

Characterizing possible failure modes in physics-informed neural networks

@inproceedings{Krishnapriyan2021CharacterizingPF,
  title={Characterizing possible failure modes in physics-informed neural networks},
  author={Aditi S. Krishnapriyan and Amir Gholami and Shandian Zhe and Robert M. Kirby and Michael W. Mahoney},
  booktitle={Neural Information Processing Systems},
  year={2021}
}
Recent work in scientific machine learning has developed so-called physics-informed neural network (PINN) models. The typical approach is to incorporate physical domain knowledge as soft constraints on an empirical loss function and use existing machine learning methodologies to train the model. We demonstrate that, while existing PINN methodologies can learn good models for relatively trivial problems, they can easily fail to learn relevant physical phenomena for even slightly more complex…
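The "soft constraint" formulation the abstract describes can be illustrated with a minimal, self-contained sketch. This is a toy stand-in, not the paper's implementation: the two-parameter ansatz, the ODE u'(t) = -u(t), and all names here are illustrative; real PINNs use a neural network and automatic differentiation.

```python
import math

# Toy PINN-style loss for the ODE u'(t) = -u(t) with u(0) = 1.
# Model: a two-parameter ansatz u(t) = a * exp(b * t) standing in for a
# neural network; its derivative is written analytically instead of autodiff.

def model(params, t):
    a, b = params
    return a * math.exp(b * t)

def model_dt(params, t):
    a, b = params
    return a * b * math.exp(b * t)  # analytic derivative of the ansatz

def pinn_loss(params, collocation_ts):
    # data term: enforce the initial condition u(0) = 1
    ic_loss = (model(params, 0.0) - 1.0) ** 2
    # physics term: mean squared PDE residual u' + u at collocation points
    pde_loss = sum(
        (model_dt(params, t) + model(params, t)) ** 2 for t in collocation_ts
    ) / len(collocation_ts)
    # soft constraint: the physics enters only as a penalty term (weight 1 here)
    return ic_loss + pde_loss

ts = [i / 19 for i in range(20)]
print(pinn_loss((1.0, -1.0), ts))  # exact solution a=1, b=-1 gives loss 0.0
```

The failure modes the paper characterizes arise precisely because this composite objective can be very hard to optimize: the physics penalty reshapes the loss landscape, so gradient descent can stall far from the true solution even though the loss form looks innocuous.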

Citations

Investigating and Mitigating Failure Modes in Physics-informed Neural Networks (PINNs)

This paper proposes a novel method that bypasses the calculation of high-order PDE operators and mitigates the contamination of backpropagated gradients, and applies this method to solve several challenging benchmark problems governed by linear and non-linear PDEs.

Scientific Machine Learning through Physics-Informed Neural Networks: Where we are and What's next

This article provides a comprehensive review of the literature on PINNs and indicates that most research has focused on customizing the PINN through different activation functions, gradient optimization techniques, neural network structures, and loss function structures.

Lagrangian PINNs: A causality-conforming solution to failure modes of physics-informed neural networks

It is demonstrated that the challenge of training persists even when the boundary conditions are strictly enforced, and that the loss landscapes of LPINNs are less sensitive to the so-called "complexity" of the problems, compared to those of traditional PINNs in the Eulerian framework.

Transfer learning based physics-informed neural networks for solving inverse problems in tunneling

A multi-task learning method is presented to improve the training stability of PINNs for linear elastic problems, and the homoscedastic uncertainty is introduced as a basis for weighting losses.

Rethinking the Importance of Sampling in Physics-informed Neural Networks

It is hypothesized that PINN training relies on successful "propagation" of the solution from initial and/or boundary condition points to interior points, and that PINNs with poor sampling strategies can get stuck at trivial solutions when propagation fails; an extension of Evo that respects the principle of causality while solving time-dependent PDEs is also provided.

Respecting causality is all you need for training physics-informed neural networks

This work proposes a simple re-formulation of PINN loss functions that can explicitly account for physical causality during model training, and demonstrates that this simple modification alone is enough to introduce significant accuracy improvements, as well as a practical quantitative mechanism for assessing the convergence of a PINN model.
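A hedged sketch of what such a causality-respecting re-weighting might look like. The exponential-weight form follows the idea summarized above; the parameter `eps` and the helper names are assumptions for illustration.

```python
import math

# Sketch of causal weighting for time-dependent PINN training: the residual
# loss at each later time slice is exponentially down-weighted until all
# earlier slices are resolved, so the solution is learned in temporal order.

def causal_weights(slice_losses, eps=1.0):
    weights, cumulative = [], 0.0
    for loss in slice_losses:
        weights.append(math.exp(-eps * cumulative))  # w_i = exp(-eps * sum_{k<i} L_k)
        cumulative += loss
    return weights

def causal_loss(slice_losses, eps=1.0):
    w = causal_weights(slice_losses, eps)
    return sum(wi * li for wi, li in zip(w, slice_losses)) / len(slice_losses)

# while early slices still have large residuals, later slices barely contribute
print(causal_weights([5.0, 5.0, 5.0]))  # [1.0, exp(-5), exp(-10)]
```

A convenient side effect, matching the "quantitative convergence mechanism" in the summary: when all slice losses are small, every weight approaches 1, so the minimum weight can serve as a stopping criterion.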

Transfer learning based physics-informed neural networks for solving inverse problems in engineering structures under different loading scenarios

This paper presents a multi-task learning method to improve the training stability of PINNs for linear elastic inverse problems, and the homoscedastic uncertainty is introduced as a basis for weighting losses.

Adaptive Self-supervision Algorithms for Physics-informed Neural Networks

A novel adaptive collocation scheme is proposed that progressively allocates more collocation points to areas where the model makes larger errors (based on the gradient of the loss function over the domain); it consistently performs on par with or slightly better than the vanilla PINN method, even in large collocation-point regimes.

Enhanced physics-informed neural networks for hyperelasticity

This paper proposes and develops a physics-informed neural network model that combines the residuals of the strong form and the potential energy, yielding many loss terms that contribute to the loss function to be minimized, and uses the coefficient of variation weighting scheme to dynamically and adaptively assign the weight of each loss term in the loss function.

Improved Training of Physics-Informed Neural Networks with Model Ensembles

This paper proposes to expand the solution interval gradually to make the PINN converge to the correct solution and uses the ensemble agreement as the criterion for including new points for computing the loss derived from PDEs.
...

References

Showing 1-10 of 45 references

Understanding and mitigating gradient pathologies in physics-informed neural networks

This work reviews recent advances in scientific machine learning with a specific focus on the effectiveness of physics-informed neural networks in predicting outcomes of physical systems and discovering hidden physics from noisy data and proposes a novel neural network architecture that is more resilient to gradient pathologies.

PDE-Net: Learning PDEs from Data

Numerical experiments show that the PDE-Net has the potential to uncover the hidden PDE of the observed dynamics, and predict the dynamical behavior for a relatively long time, even in a noisy environment.

DeepXDE: A Deep Learning Library for Solving Differential Equations

An overview of physics-informed neural networks (PINNs), which embed a PDE into the loss of the neural network using automatic differentiation, and a new residual-based adaptive refinement (RAR) method to improve the training efficiency of PINNs.
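The RAR idea can be sketched as a simple greedy selection over candidate collocation points. This is illustrative code, not the DeepXDE API; the residual function below is a toy stand-in for a real PDE residual.

```python
# Sketch of residual-based adaptive refinement (RAR): from a pool of candidate
# collocation points, add the ones with the largest PDE residual magnitude to
# the training set, concentrating effort where the model is currently worst.

def rar_select(candidates, residual_fn, k):
    # rank candidates by |residual|, descending, and keep the top k
    return sorted(candidates, key=lambda x: -abs(residual_fn(x)))[:k]

# toy residual that peaks near x = 0.5 (e.g. a sharp solution feature there)
toy_residual = lambda x: 1.0 / (0.01 + abs(x - 0.5))

candidates = [i / 10 for i in range(11)]
print(rar_select(candidates, toy_residual, 3))  # points cluster around x = 0.5
```

In a training loop this selection would be repeated every few epochs, with the residual re-evaluated under the current network parameters.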

Physics-informed neural networks with hard constraints for inverse design

This work proposes a new deep learning method—physics-informed neural networks with hard constraints (hPINNs)—for solving topology optimization and demonstrates the effectiveness of hPINN for a holography problem in optics and a fluid problem of Stokes flow.

Lagrangian Neural Networks

LNNs are proposed, which can parameterize arbitrary Lagrangians using neural networks, and do not require canonical coordinates, and thus perform well in situations where canonical momenta are unknown or difficult to compute.