Derivative-Informed Neural Operator: An Efficient Framework for High-Dimensional Parametric Derivative Learning

@article{OLearyRoseberry2022DerivativeInformedNO,
  title={Derivative-Informed Neural Operator: An Efficient Framework for High-Dimensional Parametric Derivative Learning},
  author={Thomas O'Leary-Roseberry and Peng Chen and Umberto Villa and Omar Ghattas},
  journal={ArXiv},
  year={2022},
  volume={abs/2206.10745}
}
Neural operators have gained significant attention recently due to their ability to approximate high-dimensional parametric maps between function spaces. At present, only parametric function approximation has been addressed in the neural operator literature. In this work we investigate incorporating parametric derivative information in neural operator training; this information can improve function approximations, and it can also be used to improve the approximation of the derivative with…
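The training idea described in the abstract, fitting a surrogate to both output data and parametric derivative (Jacobian) data, can be sketched as a simple loss in JAX. This is a minimal illustration under assumptions, not the authors' implementation: the tiny MLP `surrogate`, the weight `lam`, and the availability of reference Jacobians `J_true` are all hypothetical.

```python
# Minimal sketch (assumption-laden) of derivative-informed surrogate training:
# penalize both the output misfit and the misfit of the parametric Jacobian.
import jax
import jax.numpy as jnp

def surrogate(theta, m):
    """Tiny MLP mapping a parameter vector m to an output vector u (illustrative)."""
    W1, b1, W2, b2 = theta
    h = jnp.tanh(W1 @ m + b1)
    return W2 @ h + b2

def loss(theta, m, u_true, J_true, lam=1.0):
    """L2 misfit on the output plus Frobenius misfit on du/dm."""
    u_pred = surrogate(theta, m)
    J_pred = jax.jacrev(surrogate, argnums=1)(theta, m)  # Jacobian of u_pred w.r.t. m
    return (jnp.sum((u_pred - u_true) ** 2)
            + lam * jnp.sum((J_pred - J_true) ** 2))

grad_loss = jax.grad(loss)  # gradient w.r.t. theta, usable with any standard optimizer
```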
1 Citation

Large-scale Bayesian optimal experimental design with derivative-informed projected neural network

TLDR
To make the evaluation of the EIG tractable, this work approximates the (PDE-based) parameter-to-observable map with a derivative-informed projected neural network (DIPNet) surrogate, which exploits the geometry, smoothness, and intrinsic low-dimensionality of the map using a small and dimension-independent number of PDE solves.
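As a rough sketch of the derivative-informed projection step that DIPNet-style surrogates build on, one can form a reduced input basis from the dominant eigenvectors of the expected Jacobian Gram matrix and learn a small network on the reduced coordinates. The function names and shapes below are illustrative assumptions, not the DIPNet code.

```python
# Sketch: derivative-informed reduced basis for a high-dimensional input parameter.
import jax.numpy as jnp

def derivative_informed_basis(jacobians, r):
    """jacobians: array of shape (n_samples, n_out, n_param).
    Returns the r dominant eigenvectors of E[J^T J] as columns (n_param, r)."""
    gram = jnp.mean(jnp.einsum("soi,soj->sij", jacobians, jacobians), axis=0)
    eigvals, eigvecs = jnp.linalg.eigh(gram)   # eigenvalues in ascending order
    return eigvecs[:, -r:][:, ::-1]            # dominant directions first

def reduced_input(V, m):
    """Project a high-dimensional parameter m onto the r-dimensional subspace."""
    return V.T @ m
```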

References

Showing 1–10 of 43 references

Derivative-Informed Projected Neural Networks for High-Dimensional Parametric Maps Governed by PDEs

Adaptive Projected Residual Networks for Learning Parametric Maps from Sparse Data

TLDR
A universal approximation property of the proposed adaptive projected ResNet framework is proved, which motivates a related iterative algorithm for the ResNet construction.

Derivative-informed projected neural network for large-scale Bayesian optimal experimental design

TLDR
DIPNet is deployed within a greedy algorithm-based solution of the OED problem such that no further PDE solves are required, and the EIG approximation error is analyzed in terms of the generalization error of the DIPNet.

Nonlinear dimension reduction for surrogate modeling using gradient information

TLDR
It is shown that building a nonlinear feature map g can permit more accurate approximation of u than a linear g, for the same input data set.

Neural Operator: Learning Maps Between Function Spaces

TLDR
A generalization of neural networks tailored to learn operators mapping between infinite-dimensional function spaces, formulated as the composition of a class of linear integral operators and nonlinear activation functions, so that the composed operator can approximate complex nonlinear operators.
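A minimal sketch of one such layer, assuming a quadrature-discretized kernel on a 1D grid; the kernel parameterization, activation, and shapes below are illustrative rather than any specific published architecture.

```python
# Sketch: one neural-operator layer = pointwise linear term + discretized
# kernel integral operator, followed by a nonlinearity.
import jax
import jax.numpy as jnp

def integral_operator_layer(v, x, kernel_params, W, dx):
    """v: (n_points, channels) function values on a 1D grid x with spacing dx."""
    A, B = kernel_params
    # kernel k(x_i, y_j) parameterized by a simple bilinear form of the coordinates
    K = jnp.tanh(A * x[:, None] + B * x[None, :])   # (n_points, n_points)
    integral = dx * (K @ v)                          # approximates \int k(x, y) v(y) dy
    return jax.nn.gelu(v @ W + integral)             # pointwise linear term + nonlinearity
```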

Hessian-Based Sampling for High-Dimensional Model Reduction

TLDR
This work develops a Hessian-based sampling method for constructing goal-oriented reduced order models with high-dimensional parameter inputs; for a diffusion equation with random input obeying either uniform or Gaussian distributions, it yields much smaller reduced basis approximation errors for the QoI than random sampling.
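As a hedged illustration of the sampling idea (not the paper's algorithm), one can draw parameter samples concentrated in the dominant eigenspace of a Hessian approximation; all names and the scaling choice here are assumptions.

```python
# Sketch: sample parameters preferentially along dominant Hessian eigenvectors.
import jax.numpy as jnp
from jax import random

def hessian_based_samples(key, H, mean, n_samples, r):
    """H: (d, d) Hessian approximation of the QoI w.r.t. the parameter.
    Returns n_samples points concentrated in the r-dimensional dominant eigenspace."""
    eigvals, eigvecs = jnp.linalg.eigh(H)
    V = eigvecs[:, -r:]                        # dominant directions
    scales = jnp.sqrt(jnp.abs(eigvals[-r:]))   # weight directions by curvature
    z = random.normal(key, (n_samples, r))
    return mean[None, :] + (z * scales[None, :]) @ V.T
```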

Greedy inference with structure-exploiting lazy maps

TLDR
This paper proves weak convergence of the generated sequence of distributions to the posterior, and demonstrates the benefits of the framework on challenging inference problems in machine learning and differential equations, using inverse autoregressive flows and polynomial maps as examples of the underlying density estimators.

The Random Feature Model for Input-Output Maps between Banach Spaces

TLDR
The random feature model is viewed as a non-intrusive data-driven emulator, a mathematical framework for its interpretation is provided, and its ability to efficiently and accurately approximate the nonlinear parameter-to-solution maps of two prototypical PDEs arising in physical science and engineering applications is demonstrated.

Model Reduction and Neural Networks for Parametric PDEs

TLDR
A neural network approximation which, in principle, is defined on infinite-dimensional spaces and, in practice, is robust to the dimension of finite-dimensional approximations of these spaces required for computation is developed.

A fast and scalable computational framework for large-scale and high-dimensional Bayesian optimal experimental design

TLDR
This work develops a fast and scalable computational framework for large-scale and high-dimensional Bayesian optimal experimental design problems, formulated as maximizing an expected information gain, and proposes an efficient offline-online decomposition of the resulting optimization problem.