# Nonlinear Reconstruction for Operator Learning of PDEs with Discontinuities

@article{Lanthaler2022NonlinearRF, title={Nonlinear Reconstruction for Operator Learning of PDEs with Discontinuities}, author={Samuel Lanthaler and Roberto Molinaro and Patrik Hadorn and Siddhartha Mishra}, journal={ArXiv}, year={2022}, volume={abs/2210.01074} }

A large class of hyperbolic and advection-dominated PDEs can have solutions with discontinuities. This paper investigates, both theoretically and empirically, the operator learning of PDEs with discontinuous solutions. We rigorously prove, in terms of lower approximation bounds, that methods which entail a linear reconstruction step (e.g. DeepONet or PCA-Net) fail to efficiently approximate the solution operator of such PDEs. In contrast, we show that certain methods employing a nonlinear…
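The obstruction to linear reconstruction can be illustrated numerically (this sketch is not from the paper's code; the function and variable names are illustrative). A linear reconstruction step approximates every output in a fixed n-dimensional subspace, so its best-case error is governed by the singular-value decay of a snapshot matrix. For a family of step functions with a moving discontinuity, that decay is only algebraic, so many linear modes are needed:

```python
import numpy as np

# Snapshots u(x; s) = 1{x > s} for shock locations s in [0.1, 0.9].
x = np.linspace(0.0, 1.0, 512)
shocks = np.linspace(0.1, 0.9, 256)
snapshots = np.array([(x > s).astype(float) for s in shocks])  # (256, 512)

# Normalized singular values of the snapshot matrix: for a translating
# discontinuity they decay only algebraically (roughly like 1/k), so a
# linear reconstruction needs many modes for even modest accuracy.
sv = np.linalg.svd(snapshots, compute_uv=False)
sv /= sv[0]
print(sv[[0, 9, 49, 99]])
```

The same experiment run on a smooth family (e.g. sinusoids of varying amplitude) shows near-exponential decay, which is the contrast the lower bounds in the paper make rigorous.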

## 4 Citations

### Physics-Informed Neural Operator for Learning Partial Differential Equations

- Computer Science, ArXiv
- 2021

This hybrid approach allows PINO to overcome the limitations of purely data-driven and physics-based methods, and to incorporate the Fourier neural operator (FNO) architecture, which achieves orders-of-magnitude speedups over numerical solvers and allows explicit gradients on function spaces to be computed efficiently.

### BelNet: Basis enhanced learning, a mesh-free neural operator

- Computer Science, ArXiv
- 2022

This work proposes a mesh-free neural operator for solving parametric partial differential equations and constructs part of the network to learn the "basis" functions during training, generalizing the networks proposed in [3, 2] to account for differences in input and output meshes.

### Algorithmically Designed Artificial Neural Networks (ADANNs): Higher order deep operator learning for parametric partial differential equations

- Computer Science
- 2023

A new strategy to design specific artificial neural network (ANN) architectures in conjunction with specific ANN initialization schemes which are tailor-made for the particular scientific computing approximation problem under consideration is introduced.

### Convolutional Neural Operators

- Computer Science
- 2023

The resulting architecture, termed convolutional neural operators (CNOs), is shown to significantly outperform competing models on benchmark experiments, paving the way for an alternative robust and accurate framework for learning operators.

## References

Showing 1–10 of 46 references

### On universal approximation and error bounds for Fourier Neural Operators

- Mathematics, Computer Science, J. Mach. Learn. Res.
- 2021

It is shown that the size of the FNO, approximating operators associated with a Darcy-type elliptic PDE and with the incompressible Navier-Stokes equations of fluid dynamics, increases only sub-(log-)linearly in the reciprocal of the error.

### Fourier Neural Operator for Parametric Partial Differential Equations

- Computer Science, ICLR
- 2021

This work formulates a new neural operator by parameterizing the integral kernel directly in Fourier space, allowing for an expressive and efficient architecture, and shows state-of-the-art performance compared to existing neural network methodologies.
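The core idea of parameterizing the kernel in Fourier space can be sketched in a few lines (a minimal illustration, not the authors' implementation; `spectral_conv_1d` and the weight shapes are assumptions for this sketch): transform to frequency space, keep only the lowest `modes` frequencies, multiply them by learnable complex weights, and transform back.

```python
import numpy as np

def spectral_conv_1d(u, weights, modes):
    """u: (n,) real signal; weights: (modes,) complex; returns (n,) real."""
    u_hat = np.fft.rfft(u)                      # to frequency space
    out_hat = np.zeros_like(u_hat)
    out_hat[:modes] = u_hat[:modes] * weights   # learned multiplier on low modes
    return np.fft.irfft(out_hat, n=u.shape[0])  # back to physical space

n, modes = 128, 16
rng = np.random.default_rng(0)
weights = rng.normal(size=modes) + 1j * rng.normal(size=modes)
u = np.sin(2 * np.pi * np.arange(n) / n)
v = spectral_conv_1d(u, weights, modes)
print(v.shape)
```

In the full FNO, several such spectral convolutions are interleaved with pointwise linear maps and nonlinear activations; because the layer is defined per frequency mode, the same weights apply at any grid resolution.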

### Approximation rates of DeepONets for learning operators arising from advection-diffusion equations

- Computer Science, Mathematics, Neural Networks
- 2022

### Error estimates for DeepONets: A deep learning framework in infinite dimensions

- Computer Science, Mathematics, Transactions of Mathematics and Its Applications
- 2022

It is rigorously proved that DeepONets can break the curse of dimensionality, deriving almost-optimal error bounds for very general affine reconstructors and random sensor locations, as well as bounds on the generalization error via covering-number arguments.

### Model Reduction and Neural Networks for Parametric PDEs

- Computer Science, Mathematics, The SMAI Journal of Computational Mathematics
- 2021

A neural network approximation which, in principle, is defined on infinite-dimensional spaces and, in practice, is robust to the dimension of finite-dimensional approximations of these spaces required for computation is developed.

### Nonlinear reduced basis approximation of parameterized evolution equations via the method of freezing

- Mathematics
- 2013

### Error analysis for deep neural network approximations of parametric hyperbolic conservation laws

- Computer Science, ArXiv
- 2022

It is shown that the approximation error can be made as small as desired with ReLU neural networks that overcome the curse of dimensionality.

### Generic bounds on the approximation error for physics-informed (and) operator learning

- Computer Science, Physics, ArXiv
- 2022

This work illustrates the general framework by deriving the first rigorous bounds on the approximation error of physics-informed operator learning and by showing that PINNs mitigate the curse of dimensionality in approximating nonlinear parabolic PDEs.

### Non-intrusive reduced order modeling of nonlinear problems using neural networks

- Computer Science
- 2017

The method extracts a reduced basis from a collection of high-fidelity solutions via a proper orthogonal decomposition (POD) and employs artificial neural networks (ANNs), particularly multi-layer perceptrons (MLPs), to accurately approximate the coefficients of the reduced model.
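The offline/online split described above can be sketched as follows (an illustration under assumed toy data, not the paper's code): offline, an SVD/POD of high-fidelity snapshots yields a reduced basis; online, a small MLP (here one hidden tanh layer trained with plain full-batch gradient descent) maps the parameter mu to the reduced coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
mus = np.linspace(1.0, 3.0, 60)
snapshots = np.array([np.sin(m * np.pi * x) for m in mus])  # (60, 200)

# Offline stage: rank-r POD basis and projected coefficients.
r = 6
_, _, Vt = np.linalg.svd(snapshots, full_matrices=False)
basis = Vt[:r]                          # (r, 200), rows orthonormal
coeffs = snapshots @ basis.T            # (60, r), regression targets

# Online surrogate: normalized mu -> coefficients, one hidden tanh layer.
X = ((mus - mus.mean()) / mus.std())[None, :]   # (1, 60) inputs
Y = coeffs.T                                    # (r, 60) targets
W1 = rng.normal(scale=0.5, size=(32, 1)); b1 = np.zeros((32, 1))
W2 = rng.normal(scale=0.5, size=(r, 32)); b2 = np.zeros((r, 1))
lr, losses = 1e-3, []
for _ in range(5000):
    H = np.tanh(W1 @ X + b1)                    # hidden activations
    R = W2 @ H + b2 - Y                         # residual
    losses.append(float((R**2).mean()))
    G = R / X.shape[1]                          # output-layer gradient
    GH = (W2.T @ G) * (1.0 - H**2)              # backprop through tanh
    W2 -= lr * G @ H.T;  b2 -= lr * G.sum(1, keepdims=True)
    W1 -= lr * GH @ X.T; b1 -= lr * GH.sum(1, keepdims=True)

print(losses[0], losses[-1])  # training loss should drop
```

At prediction time, the MLP output for a new parameter is expanded back through the POD basis, so the expensive high-fidelity solver is needed only for generating the offline snapshots.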

### Neural Operator: Learning Maps Between Function Spaces

- Mathematics, Computer Science, ArXiv
- 2021

A generalization of neural networks tailored to learn operators mapping between infinite dimensional function spaces, formulated by composition of a class of linear integral operators and nonlinear activation functions, so that the composed operator can approximate complex nonlinear operators.