Variable-Input Deep Operator Networks

Michael Prasthofer, Tim De Ryck, Siddhartha Mishra
Existing architectures for operator learning require that the number and locations of sensors (where the input functions are evaluated) remain the same across all training and test samples, significantly restricting the range of their applicability. We address this issue by proposing a novel operator learning framework, termed Variable-Input Deep Operator Network (VIDON), which allows for random sensors whose number and locations can vary across samples. VIDON is invariant to permutations of… 
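The key property described above, an encoding that is well-defined for a variable number of sensors and invariant to their ordering, can be illustrated with a toy sketch. This is not the actual VIDON architecture; it is a minimal assumed example in which each (sensor location, sensor value) pair is lifted by a shared layer and the results are mean-pooled:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy permutation-invariant encoder (illustrative only, not the actual
# VIDON architecture): each (sensor location, sensor value) pair is lifted
# by a shared layer and the results are mean-pooled, so the encoding is
# independent of sensor ordering and defined for any number of sensors.
W = rng.standard_normal((2, 8))  # shared weights, untrained

def encode(xs, us):
    """Encode the sensor set {(x_i, u(x_i))} into a fixed-size vector."""
    pairs = np.stack([xs, us], axis=1)  # (n_sensors, 2)
    features = np.tanh(pairs @ W)       # lift each sensor independently
    return features.mean(axis=0)        # pool -> (8,), order-independent

# 5 random sensors sampling u(x) = sin(2*pi*x)
xs = rng.uniform(0.0, 1.0, size=5)
us = np.sin(2 * np.pi * xs)
z = encode(xs, us)

perm = rng.permutation(5)
z_perm = encode(xs[perm], us[perm])     # identical encoding after shuffling

xs7 = rng.uniform(0.0, 1.0, size=7)     # a sample with a different sensor count
z7 = encode(xs7, np.sin(2 * np.pi * xs7))
```

Because pooling is a symmetric mean, shuffling the sensors leaves the encoding unchanged, and samples with 5 or 7 sensors map to the same fixed-size latent space.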


DeepONet: Learning nonlinear operators for identifying differential equations based on the universal approximation theorem of operators
This work proposes deep operator networks (DeepONets) to learn operators accurately and efficiently from a relatively small dataset, and demonstrates that DeepONet significantly reduces the generalization error compared to fully connected networks.
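The branch-trunk structure underlying DeepONet can be sketched as follows. The weights here are random and untrained, purely an assumed illustration: the branch net maps the input function's values at a fixed sensor grid to coefficients, the trunk net maps a query location to basis values, and the prediction is their dot product, G(u)(y) ≈ Σ_k b_k(u) t_k(y):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy DeepONet sketch with random, untrained weights (illustrative only).
n_sensors, p = 20, 10
Wb = rng.standard_normal((n_sensors, p))  # branch weights: sensor values -> p coeffs
Wt = rng.standard_normal((1, p))          # trunk weights: query location -> p basis values

def deeponet(u_sensors, ys):
    b = np.tanh(u_sensors @ Wb)           # branch coefficients, shape (p,)
    t = np.tanh(ys.reshape(-1, 1) @ Wt)   # trunk basis at queries, shape (m, p)
    return t @ b                          # predictions at the m query locations

xs = np.linspace(0.0, 1.0, n_sensors)     # sensor grid shared by ALL samples
u = np.exp(-xs)                           # one input function, sampled on the grid
ys = np.array([0.25, 0.5, 0.75])          # arbitrary query locations
out = deeponet(u, ys)
```

Note that the branch net's input dimension is hard-wired to `n_sensors`, which is exactly the fixed-sensor restriction that VIDON is designed to remove.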
Learning Operators with Coupled Attention
This work proposes a novel operator learning method, LOCA (Learning Operators with Coupled Attention), motivated from the recent success of the attention mechanism, and evaluates the performance of LOCA on several operator learning scenarios involving systems governed by ordinary and partial differential equations, as well as a black-box climate prediction problem.
Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks
This work presents an attention-based neural network module, the Set Transformer, specifically designed to model interactions among elements in the input set, and reduces the computation time of self-attention from quadratic to linear in the number of elements in the set.
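A minimal sketch of attention-based set pooling, in the spirit of the Set Transformer's pooling-by-attention but not the paper's exact implementation: a learned seed query attends over the set elements, yielding a permutation-invariant summary in time linear in the set size:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4

# Hypothetical attention pooling sketch (illustrative, untrained weights):
# one learned seed query attends over all set elements at once.
seed = rng.standard_normal(d)                # learned query vector

def softmax(a):
    e = np.exp(a - a.max())                  # shift for numerical stability
    return e / e.sum()

def attention_pool(X):
    """Pool a set X of shape (n, d) into one d-vector via seed attention."""
    scores = softmax(X @ seed / np.sqrt(d))  # (n,) attention weights
    return scores @ X                        # weighted sum over set elements

X = rng.standard_normal((6, d))
pooled = attention_pool(X)
pooled_shuffled = attention_pool(X[rng.permutation(6)])  # same result
```

Because the weighted sum runs over an unordered set, reordering the rows of `X` leaves the pooled vector unchanged, which is the property that makes such modules usable as set encoders.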
Neural Operator: Learning Maps Between Function Spaces
A generalization of neural networks tailored to learn operators mapping between infinite dimensional function spaces, formulated by composition of a class of linear integral operators and nonlinear activation functions, so that the composed operator can approximate complex nonlinear operators.
Neural Processes
This work introduces a class of neural latent variable models called Neural Processes (NPs), which combine the best of both worlds: like Gaussian processes they are probabilistic, data-efficient and flexible, while avoiding the computational cost that limits the applicability of Gaussian processes.
On the Limitations of Representing Functions on Sets
It is proved that implementing such sum-decomposition-based set representations via continuous mappings (as provided by, e.g., neural networks or Gaussian processes) imposes a constraint on the dimensionality of the latent space.
Physics-Informed Neural Operator for Learning Partial Differential Equations
Experiments show PINO outperforms previous ML methods on many popular PDE families while retaining the extraordinary speed-up of FNO compared to solvers.
Error estimates for DeepOnets: A deep learning framework in infinite dimensions
It is rigorously proved that DeepONets can break the curse of dimensionality; almost optimal error bounds are derived for very general affine reconstructors and random sensor locations, together with bounds on the generalization error obtained via covering number arguments.
Choose a Transformer: Fourier or Galerkin
It is demonstrated for the first time that the softmax normalization in scaled dot-product attention is sufficient but not necessary, and the newly proposed simple attention-based operator learner, the Galerkin Transformer, shows significant improvements in both training cost and evaluation accuracy over its softmax-normalized counterparts.
Attentive Neural Processes
Attention is incorporated into NPs, allowing each input location to attend to the relevant context points for the prediction, which greatly improves the accuracy of predictions, results in noticeably faster training, and expands the range of functions that can be modelled.