Semi-supervised Learning of Partial Differential Operators and Dynamical Flows

@article{Rotman2022SemisupervisedLO,
  title={Semi-supervised Learning of Partial Differential Operators and Dynamical Flows},
  author={Michael Rotman and Amit Dekel and Ran Ilan Ber and Lior Wolf and Yaron Oz},
  journal={ArXiv},
  year={2022},
  volume={abs/2207.14366}
}
The evolution of dynamical systems is generically governed by nonlinear partial differential equations (PDEs), whose solution, in a simulation framework, requires vast amounts of computational resources. In this work, we present a novel method that combines a hyper-network solver with a Fourier Neural Operator architecture. Our method treats time and space separately. As a result, it successfully propagates initial conditions in continuous time steps by employing the general composition… 
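The abstract describes the architecture only at a high level. Below is a minimal PyTorch-style sketch of the two ingredients it names: a hyper-network that maps the query time t to the weights of a small target network, and a composition-consistency penalty that could exploit unlabeled initial conditions. All names (HyperFlow, layer sizes) and the loss are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperFlow(nn.Module):
    """Hyper-network h maps the query time t to the weights of a small
    target network f, so the flow Phi_t(u0) = f(u0; h(t)) is continuous in t."""
    def __init__(self, n_grid, hidden=64):
        super().__init__()
        self.n_grid, self.hidden = n_grid, hidden
        # Total weight count of a one-hidden-layer target network.
        n_weights = n_grid * hidden + hidden + hidden * n_grid + n_grid
        self.hyper = nn.Sequential(nn.Linear(1, 128), nn.GELU(),
                                   nn.Linear(128, n_weights))

    def forward(self, u0, t):              # u0: (batch, n_grid), t: (batch, 1)
        n, h = self.n_grid, self.hidden
        w1, b1, w2, b2 = torch.split(self.hyper(t), [n * h, h, h * n, n], dim=-1)
        z = F.gelu(torch.einsum("bi,bih->bh", u0, w1.view(-1, n, h)) + b1)
        return torch.einsum("bh,bhn->bn", z, w2.view(-1, h, n)) + b2

def composition_loss(model, u0, t1, t2):
    """Consistency penalty Phi_{t1+t2}(u0) ~ Phi_{t2}(Phi_{t1}(u0)).
    Usable on unlabeled initial conditions; this is one assumed reading
    of the truncated abstract, not a confirmed detail of the paper."""
    return (model(u0, t1 + t2) - model(model(u0, t1), t2)).pow(2).mean()
```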

References

Fourier Neural Operator for Parametric Partial Differential Equations

This work formulates a new neural operator by parameterizing the integral kernel directly in Fourier space, yielding an expressive and efficient architecture that shows state-of-the-art performance compared with existing neural-network methodologies.
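A minimal 1D sketch of the Fourier-space kernel parameterization described above (the official FNO implementation differs in details such as initialization and its multi-layer structure):

```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """One Fourier layer: FFT the input, multiply the lowest `modes`
    frequencies by learned complex weights, inverse FFT.
    Requires modes <= n_grid // 2 + 1."""
    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes
        self.weights = nn.Parameter(
            torch.randn(channels, channels, modes, dtype=torch.cfloat) / channels)

    def forward(self, u):                   # u: (batch, channels, n_grid)
        u_hat = torch.fft.rfft(u)
        out = torch.zeros_like(u_hat)
        out[:, :, :self.modes] = torch.einsum(
            "bim,iom->bom", u_hat[:, :, :self.modes], self.weights)
        return torch.fft.irfft(out, n=u.size(-1))
```

In the full architecture each block also carries a pointwise linear path, v -> sigma(SpectralConv(v) + W v), before the nonlinearity.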

Model Reduction and Neural Networks for Parametric PDEs

This work develops a neural network approximation that, in principle, is defined on infinite-dimensional spaces and, in practice, is robust to the dimension of the finite-dimensional approximations of these spaces required for computation.
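One common instantiation of this idea, given below as an assumed sketch rather than the paper's exact construction, projects input and output functions onto fixed PCA bases and learns a map between the coefficient vectors; only the projections touch the grid, which is what makes the learned map robust to the discretization.

```python
import numpy as np

def pca_basis(X, d):
    """Leading d principal directions of mean-centered snapshots X (n_samples, n_grid)."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:d]

def encode(X, mean, basis):                # function values -> d coefficients
    return (X - mean) @ basis.T

def decode(C, mean, basis):                # d coefficients -> function values
    return C @ basis + mean
```

Any regressor (e.g. a small MLP) is then trained on the coefficient pairs; the regressor itself never sees the computational grid.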

Multipole Graph Neural Operator for Parametric Partial Differential Equations

Inspired by classical multipole methods, this work proposes a novel multi-level graph network framework that captures interactions at all ranges and can be evaluated with only linear complexity.

Neural Operator: Learning Maps Between Function Spaces

A generalization of neural networks tailored to learning operators that map between infinite-dimensional function spaces, formulated as the composition of a class of linear integral operators and nonlinear activation functions, so that the composed operator can approximate complex nonlinear operators.
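A sketch of one such layer, assuming the common parameterization of the kernel kappa(x, y) as a small network on coordinate pairs and a simple quadrature mean over the grid (both are assumptions, not the paper's specific choices):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KernelIntegralLayer(nn.Module):
    """v -> sigma(W v(x) + (1/n) * sum_y kappa(x, y) v(y)): a pointwise
    linear term plus a learned kernel integral discretized on the grid."""
    def __init__(self, channels, hidden=32):
        super().__init__()
        self.w = nn.Linear(channels, channels)
        # kappa(x, y) maps a coordinate pair to a channels x channels matrix.
        self.kappa = nn.Sequential(nn.Linear(2, hidden), nn.GELU(),
                                   nn.Linear(hidden, channels * channels))

    def forward(self, v, grid):             # v: (n, channels), grid: (n, 1)
        n, c = v.shape
        pairs = torch.cat([grid.repeat_interleave(n, 0), grid.repeat(n, 1)], dim=-1)
        k = self.kappa(pairs).view(n, n, c, c)
        integral = torch.einsum("xyij,yj->xi", k, v) / n   # quadrature mean
        return F.gelu(self.w(v) + integral)
```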

Neural Operator: Graph Kernel Network for Partial Differential Equations

The key innovation in this work is that a single set of network parameters, within a carefully designed network architecture, may be used to describe mappings between infinite-dimensional spaces and between different finite-dimensional approximations of those spaces.

DeepONet: Learning nonlinear operators for identifying differential equations based on the universal approximation theorem of operators

This work proposes deep operator networks (DeepONets) to learn operators accurately and efficiently from relatively small datasets, and demonstrates that DeepONet significantly reduces the generalization error compared to fully connected networks.
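The branch-trunk structure is simple to sketch; below is a minimal unstacked DeepONet for scalar-valued outputs (sensor count, widths, and activations are assumptions):

```python
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    """Branch net encodes the input function sampled at m fixed sensors;
    trunk net encodes the query location y; the output is their dot product."""
    def __init__(self, m_sensors, p=64):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(m_sensors, 128), nn.Tanh(),
                                    nn.Linear(128, p))
        self.trunk = nn.Sequential(nn.Linear(1, 128), nn.Tanh(),
                                   nn.Linear(128, p))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, u_sensors, y):        # u_sensors: (b, m), y: (b, 1)
        return (self.branch(u_sensors) * self.trunk(y)).sum(-1, keepdim=True) + self.bias
```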

The Random Feature Model for Input-Output Maps between Banach Spaces

This work views the random feature model as a non-intrusive, data-driven emulator, provides a mathematical framework for its interpretation, and demonstrates its ability to efficiently and accurately approximate the nonlinear parameter-to-solution maps of two prototypical PDEs arising in physical science and engineering applications.
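A sketch of a random feature emulator between discretized function spaces, with fixed random features and a ridge-regression solve for the output map (the cosine feature type and all shapes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_random_features(A, U, n_feat=512, lam=1e-6):
    """A: (n_samples, n_in) inputs, U: (n_samples, n_out) outputs.
    Draw fixed random features, then solve a ridge regression."""
    W = rng.normal(size=(A.shape[1], n_feat))
    b = rng.uniform(0, 2 * np.pi, size=n_feat)
    Phi = np.cos(A @ W + b)                           # (n_samples, n_feat)
    C = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n_feat), Phi.T @ U)
    return W, b, C

def predict(A_new, W, b, C):
    return np.cos(A_new @ W + b) @ C
```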

Markov Neural Operators for Learning Chaotic Systems

Experiments show that neural operators are more accurate and stable than previous methods on chaotic systems such as the Kuramoto-Sivashinsky and Navier-Stokes equations.
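The Markov idea reduces to learning a one-step operator on adjacent snapshots and composing it at inference time. A schematic sketch, where `operator` stands in for any neural operator (e.g. an FNO) and the names are illustrative:

```python
import torch

def one_step_loss(operator, u_t, u_next):
    """Train on adjacent snapshot pairs: u_{t+dt} = G_theta(u_t)."""
    return (operator(u_t) - u_next).pow(2).mean()

def rollout(operator, u0, n_steps):
    """Compose the learned one-step map to produce a long trajectory."""
    traj = [u0]
    u = u0
    for _ in range(n_steps):
        u = operator(u)
        traj.append(u)
    return torch.stack(traj)
```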