Nonlinear Reconstruction for Operator Learning of PDEs with Discontinuities

@article{Lanthaler2022NonlinearRF,
  title={Nonlinear Reconstruction for Operator Learning of PDEs with Discontinuities},
  author={Samuel Lanthaler and Roberto Molinaro and Patrik Hadorn and Siddhartha Mishra},
  journal={ArXiv},
  year={2022},
  volume={abs/2210.01074}
}
A large class of hyperbolic and advection-dominated PDEs can have solutions with discontinuities. This paper investigates, both theoretically and empirically, the operator learning of PDEs with discontinuous solutions. We rigorously prove, in terms of lower approximation bounds, that methods that entail a linear reconstruction step (e.g. DeepONet or PCA-Net) fail to efficiently approximate the solution operator of such PDEs. In contrast, we show that certain methods employing a nonlinear reconstruction mechanism can overcome these fundamental lower bounds and approximate the underlying operator efficiently.
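To make the dichotomy concrete, here is a minimal sketch (standard notation, assumed rather than quoted from the paper) of what a linear reconstruction step means. DeepONet- and PCA-Net-type methods approximate the solution operator $\mathcal{G}$ as

\[
  \mathcal{G}(u)(y) \;\approx\; \sum_{k=1}^{p} \beta_k(u)\,\tau_k(y),
\]

where the coefficient functionals $\beta_k$ may depend nonlinearly on the input $u$, but the output is always reconstructed in a fixed, input-independent basis $\tau_1,\dots,\tau_p$. For solution sets containing moving discontinuities, the Kolmogorov $n$-width in $L^2$ decays only algebraically (on the order of $n^{-1/2}$ for transported jumps), so any such $p$-term linear reconstruction inherits a corresponding lower bound on its achievable error.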

Physics-Informed Neural Operator for Learning Partial Differential Equations

This hybrid approach allows PINO to overcome the limitations of purely data-driven and purely physics-based methods. It incorporates the Fourier neural operator (FNO) architecture, which achieves orders-of-magnitude speedups over numerical solvers and also makes it possible to compute explicit gradients on function spaces efficiently.
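A minimal sketch of the hybrid objective this describes, with hypothetical names (the actual PINO loss also includes initial- and boundary-condition terms, and the residual is evaluated with function-space derivatives, e.g. spectral ones):

import torch

def pino_loss(model, a, u_data, pde_residual, lam=1.0):
    # Supervised operator-learning term: match available solution data.
    u_pred = model(a)
    data_loss = torch.mean((u_pred - u_data) ** 2)
    # Physics-informed term: pde_residual(a, u_pred) is assumed to return
    # the pointwise residual of the governing PDE for the predicted solution.
    phys_loss = torch.mean(pde_residual(a, u_pred) ** 2)
    return data_loss + lam * phys_loss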

BelNet: Basis enhanced learning, a mesh-free neural operator

This work proposes a mesh-free neural operator for solving parametric partial differential equations. Part of the network is constructed to learn the "basis" functions during training, generalizing the networks proposed in [3, 2] to account for differences between input and output meshes.

Algorithmically Designed Artificial Neural Networks (ADANNs): Higher order deep operator learning for parametric partial differential equations

A new strategy is introduced for designing specific artificial neural network (ANN) architectures, in conjunction with specific ANN initialization schemes, that are tailor-made for the particular scientific computing approximation problem under consideration.

Convolutional Neural Operators

The resulting architecture, termed the convolutional neural operator (CNO), is shown to significantly outperform competing models on benchmark experiments, paving the way for an alternative robust and accurate framework for learning operators.

References

Showing 1-10 of 46 references.

On universal approximation and error bounds for Fourier Neural Operators

It is shown that the size of an FNO approximating the operators associated with a Darcy-type elliptic PDE and with the incompressible Navier-Stokes equations of fluid dynamics grows only sub-(log)-linearly in the reciprocal of the error.

Fourier Neural Operator for Parametric Partial Differential Equations

This work forms a new neural operator by parameterizing the integral kernel directly in Fourier space, allowing for an expressive and efficient architecture, and shows state-of-the-art performance compared with existing neural-network methodologies.
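A minimal NumPy sketch of the core idea (one spectral convolution, single channel, 1D, forward pass only; names and shapes are assumptions, not the reference implementation):

import numpy as np

def fourier_layer(v, weights, n_modes):
    # FFT the grid function, keep only the lowest n_modes frequencies,
    # multiply them by learned complex weights, and transform back.
    v_hat = np.fft.rfft(v)
    out_hat = np.zeros_like(v_hat)
    out_hat[:n_modes] = weights * v_hat[:n_modes]
    return np.fft.irfft(out_hat, n=v.shape[0])

v = np.random.randn(128)                             # function sampled on 128 grid points
w = np.random.randn(16) + 1j * np.random.randn(16)   # learned mode-wise multipliers
u = fourier_layer(v, w, n_modes=16)

A full FNO layer additionally adds a pointwise linear (residual) term and a nonlinearity, and the model stacks several such layers.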

Error estimates for DeepOnets: A deep learning framework in infinite dimensions

It is rigorously proved that DeepONets can break the curse of dimensionality; almost-optimal error bounds are derived for very general affine reconstructors and random sensor locations, together with bounds on the generalization error obtained via covering-number arguments.
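For concreteness, a minimal NumPy sketch of the affine (branch-trunk) reconstruction these bounds cover; the two-layer networks, widths, and sensor count are hypothetical placeholders:

import numpy as np

rng = np.random.default_rng(0)

def mlp(x, W1, b1, W2, b2):
    # Tiny two-layer ReLU network, purely illustrative.
    return np.maximum(x @ W1 + b1, 0.0) @ W2 + b2

m, p, h = 32, 8, 64   # sensors, basis size, hidden width
branch = [rng.normal(size=(m, h)), np.zeros(h), rng.normal(size=(h, p)), np.zeros(p)]
trunk = [rng.normal(size=(1, h)), np.zeros(h), rng.normal(size=(h, p)), np.zeros(p)]

def deeponet(u_sensors, y):
    beta = mlp(u_sensors, *branch)        # coefficients beta_k(u) from sensor values
    tau = mlp(np.atleast_2d(y), *trunk)   # trunk basis tau_k(y) at the query point
    return (tau @ beta)[0]                # affine reconstruction: sum_k beta_k(u) tau_k(y)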

Model Reduction and Neural Networks for Parametric PDEs

This work develops a neural network approximation that, in principle, is defined on infinite-dimensional spaces and, in practice, is robust to the dimension of the finite-dimensional approximations of these spaces required for computation.

Error analysis for deep neural network approximations of parametric hyperbolic conservation laws

It is shown that the approximation error can be made as small as desired with ReLU neural networks that overcome the curse of dimensionality.

Generic bounds on the approximation error for physics-informed (and) operator learning

This work illustrates the general framework by deriving the first rigorous bounds on the approximation error of physics-informed operator learning and by showing that PINNs mitigate the curse of dimensionality in approximating nonlinear parabolic PDEs.

Non-intrusive reduced order modeling of nonlinear problems using neural networks

The method extracts a reduced basis from a collection of high-fidelity solutions via a proper orthogonal decomposition (POD) and employs artificial neural networks (ANNs), particularly multi-layer perceptrons (MLPs), to accurately approximate the coefficients of the reduced model.
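A minimal sketch of that pipeline under these assumptions, with synthetic data and sklearn's MLPRegressor standing in for the paper's MLP:

import numpy as np
from sklearn.neural_network import MLPRegressor

S = np.random.randn(1000, 200)   # snapshot matrix: 200 high-fidelity solutions, 1000 DOFs each
P = np.random.randn(200, 3)      # the parameters that generated each snapshot

# POD: reduced basis from the truncated SVD of the snapshot matrix.
U, _, _ = np.linalg.svd(S, full_matrices=False)
V = U[:, :20]                    # first 20 POD modes
coeffs = S.T @ V                 # reduced coefficients of each snapshot

# Non-intrusive ROM: an MLP maps parameters to reduced coefficients,
# so no solver calls are needed online.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(P, coeffs)

def rom_predict(p_new):
    return V @ net.predict(p_new.reshape(1, -1)).ravel()   # lift back to full space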

Neural Operator: Learning Maps Between Function Spaces

This work proposes a generalization of neural networks tailored to learning operators that map between infinite-dimensional function spaces, formulated as the composition of a class of linear integral operators with nonlinear activation functions, so that the composed operator can approximate complex nonlinear operators.
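In symbols (standard neural-operator notation, assumed rather than quoted from the paper), each hidden layer updates a function $v_t$ on the domain $D$ by

\[
  v_{t+1}(x) \;=\; \sigma\!\Big( W\,v_t(x) \;+\; \int_D \kappa_\theta(x,y)\,v_t(y)\,\mathrm{d}y \Big),
\]

i.e. a pointwise linear map plus a learned integral kernel, followed by a pointwise nonlinearity. The Fourier neural operator above is the special case of a translation-invariant kernel $\kappa_\theta(x,y)=\kappa_\theta(x-y)$, whose integral becomes a multiplication in Fourier space.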