Corpus ID: 224705257

Fourier Neural Operator for Parametric Partial Differential Equations

@article{Li2021FourierNO,
  title={Fourier Neural Operator for Parametric Partial Differential Equations},
  author={Zongyi Li and Nikola B. Kovachki and Kamyar Azizzadenesheli and Burigede Liu and Kaushik Bhattacharya and Andrew Stuart and Anima Anandkumar},
  journal={ArXiv},
  year={2021},
  volume={abs/2010.08895}
}
The classical development of neural networks has primarily focused on learning mappings between finite-dimensional Euclidean spaces. Recently, this has been generalized to neural operators that learn mappings between function spaces. For partial differential equations (PDEs), neural operators directly learn the mapping from any functional parametric dependence to the solution. Thus, they learn an entire family of PDEs, in contrast to classical methods which solve one instance of the equation… 
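The spectral convolution layer at the core of the method can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the grid size, channel count, and number of retained modes are illustrative, and the complex spectral weights would be learned parameters in practice.

```python
import numpy as np

def fourier_layer(v, weights, modes):
    """One Fourier layer: FFT -> keep the lowest `modes` frequencies ->
    mix channels with complex spectral weights -> inverse FFT.

    v:       (n, c) real-valued input function on a uniform 1-D grid
    weights: (modes, c, c) complex spectral weights (learned in practice)
    modes:   number of low-frequency Fourier modes retained
    """
    n, _ = v.shape
    v_hat = np.fft.rfft(v, axis=0)             # (n//2+1, c) Fourier coefficients
    out_hat = np.zeros_like(v_hat)
    # Channel mixing happens independently per retained mode;
    # all higher frequencies are truncated to zero.
    out_hat[:modes] = np.einsum("kc,kcd->kd", v_hat[:modes], weights)
    return np.fft.irfft(out_hat, n=n, axis=0)  # back to physical space

# Tiny usage example with random (untrained) weights.
rng = np.random.default_rng(0)
v = rng.standard_normal((64, 4))
w = rng.standard_normal((12, 4, 4)) + 1j * rng.standard_normal((12, 4, 4))
u = fourier_layer(v, w, modes=12)
print(u.shape)  # (64, 4)
```

Because the layer is parameterized in frequency space rather than on the grid, the same weights can be applied to inputs sampled at different resolutions, which is what makes the operator discretization-invariant.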

Figures and Tables from this paper

[figures and tables not included in this extract]

Citations

NLP Inspired Training Mechanics For Modeling Transient Dynamics

This work introduces teacher-forcing and curriculum-learning-based training mechanics to model vortical flows, and shows an accuracy improvement of more than 50% for ML models such as FNO and UNet.

HT-Net: Hierarchical Transformer based Operator Learning Model for Multiscale PDEs

A hierarchical transformer (HT) scheme to learn the solution operator for multiscale PDEs, built on a hierarchical architecture with a scale-adaptive interaction range, so that features can be computed in a nested manner and at a controllable linear cost.

A composable machine-learning approach for steady-state simulations on high-resolution grids

The proposed CoMLSim approach can simulate PDEs on highly resolved grids with higher accuracy and better generalization to out-of-distribution source terms and geometries than traditional ML baselines, reducing the generalization challenge that most ML models face.

Linear attention coupled Fourier neural operator for simulation of three-dimensional turbulence

The linear attention coupled Fourier neural operator (LAFNO) is developed for the simulation of 3D isotropic turbulence and free shear turbulence. Numerical simulations show that the linear attention mechanism provides a 40% error reduction at the same level of computational cost, and that LAFNO can accurately reconstruct a variety of statistics and instantaneous spatial structures of 3D turbulence.
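Linear attention, as referenced above, avoids the quadratic cost of softmax attention by applying a positive feature map to queries and keys and reassociating the matrix products. The sketch below is a generic illustration of that reordering, not LAFNO's implementation; the feature map and tensor shapes are assumptions.

```python
import numpy as np

def linear_attention(q, k, v, eps=1e-6):
    """O(n) attention: with a positive feature map phi, compute
    phi(Q) (phi(K)^T V) instead of (phi(Q) phi(K)^T) V,
    so no n-by-n attention matrix is ever formed.

    q, k, v: (n, d) queries, keys, and values
    """
    phi = lambda x: np.maximum(x, 0.0) + 1.0  # simple positive feature map
    q, k = phi(q), phi(k)
    kv = k.T @ v                    # (d, d) summary, computed once
    z = q @ k.sum(axis=0)           # (n,) row normalizer
    return (q @ kv) / (z[:, None] + eps)

rng = np.random.default_rng(2)
q = rng.standard_normal((32, 8))
k = rng.standard_normal((32, 8))
v = rng.standard_normal((32, 8))
out = linear_attention(q, k, v)
print(out.shape)  # (32, 8)
```

The key design choice is that `kv` is a d-by-d matrix independent of sequence length, so cost grows linearly in the number of tokens (or grid points).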

Wavelet neural operator: a neural operator for parametric partial differential equations

A novel operator learning algorithm, referred to as the Wavelet Neural Operator (WNO), that blends an integral kernel with the wavelet transform, and is used to build a digital twin capable of predicting Earth's air temperature from available historical data.

Learning Transient Partial Differential Equations with Local Neural Operators

A learning framework that represents transient PDEs purely with local neural operators (LNOs) is constructed and successfully applied to problems with quite different domains and boundaries, including internal flow, external flow, and, remarkably, flow across a cascade of airfoils.

Learning Operators with Coupled Attention

This work proposes LOCA (Learning Operators with Coupled Attention), a novel operator learning method motivated by the recent success of the attention mechanism, and evaluates its performance on several operator learning scenarios involving systems governed by ordinary and partial differential equations, as well as a black-box climate prediction problem.

Machine Learning Accelerated PDE Backstepping Observers

A framework for accelerating PDE observer computations using learning-based approaches that are much faster while maintaining accuracy is proposed, and the recently-developed Fourier Neural Operator (FNO) is employed to learn the functional mapping from the initial observer state and boundary measurements to the state.

Near-optimal learning of Banach-valued, high-dimensional functions via deep neural networks

The past decade has seen increasing interest in applying Deep Learning (DL) to Computational Science and Engineering (CSE), driven by impressive results in applications such as computer vision…

Physics-Guided, Physics-Informed, and Physics-Encoded Neural Networks in Scientific Computing

This study presents an in-depth review of the three neural network frameworks used in scientific computing research, and provides a solid starting point for researchers and engineers to understand how to integrate layers of physics into neural networks.
...

References

SHOWING 1-10 OF 34 REFERENCES

Multipole Graph Neural Operator for Parametric Partial Differential Equations

Inspired by classical multipole methods, a novel multi-level graph network framework is proposed that captures interactions at all ranges and can be evaluated in linear time.

Neural Operator: Graph Kernel Network for Partial Differential Equations

The key innovation in this work is that a single set of network parameters, within a carefully designed network architecture, may be used to describe mappings between infinite-dimensional spaces and between different finite-dimensional approximations of those spaces.

Machine learning–accelerated computational fluid dynamics

It is shown that using machine learning inside traditional fluid simulations can improve both accuracy and speed, even on examples very different from the training data, which opens the door to applying machine learning to large-scale physical modeling tasks like airplane design and climate prediction.

Fast Fourier Convolution

Fast Fourier convolution (FFC) is a generic operator that can directly replace vanilla convolutions in a large body of existing networks, without any adjustments and with comparable complexity metrics (e.g., FLOPs).

Implicit Neural Representations with Periodic Activation Functions

This work proposes to leverage periodic activation functions for implicit neural representations and demonstrates that these networks, dubbed sinusoidal representation networks or Sirens, are ideally suited for representing complex natural signals and their derivatives.
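The core idea above, periodic sine activations in place of ReLU, can be sketched as a toy forward pass. This is a NumPy illustration with hypothetical layer sizes, not the authors' code; the `w0` frequency scaling follows the convention described in the SIREN paper, but the initialization here is simplified.

```python
import numpy as np

def siren_forward(x, layers, w0=30.0):
    """Forward pass of a sinusoidal representation network (SIREN):
    each hidden layer applies sin(w0 * (x @ W + b)); the last layer is linear.

    x:      (batch, in_dim) input coordinates
    layers: list of (W, b) weight/bias pairs
    """
    h = x
    for W, b in layers[:-1]:
        h = np.sin(w0 * (h @ W + b))   # periodic activation
    W, b = layers[-1]
    return h @ W + b                   # linear output layer

rng = np.random.default_rng(1)
dims = [2, 16, 16, 1]                  # map 2-D coordinates to a scalar signal
layers = [(rng.uniform(-1, 1, (a, b)) / a, np.zeros(b))
          for a, b in zip(dims[:-1], dims[1:])]
coords = rng.uniform(-1, 1, (8, 2))
out = siren_forward(coords, layers)
print(out.shape)  # (8, 1)
```

Because sine is smooth and its derivative is again a (shifted) sine, such networks can represent a signal and its derivatives with the same parameters, which is the property the abstract highlights.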

The Random Feature Model for Input-Output Maps between Banach Spaces

The random feature model is viewed as a non-intrusive, data-driven emulator; a mathematical framework for its interpretation is provided; and its ability to efficiently and accurately approximate the nonlinear parameter-to-solution maps of two prototypical PDEs arising in physical science and engineering is demonstrated.
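In its simplest scalar form, the random feature idea fixes a set of random nonlinear features and learns only the linear output weights by ridge regression. The sketch below is a generic 1-D illustration under that reading, not the Banach-space construction of the cited paper; all names and hyperparameters are illustrative.

```python
import numpy as np

def random_feature_regression(X, y, n_features=200, scale=1.0, reg=1e-6, seed=0):
    """Fit a random Fourier feature model: lift inputs through fixed random
    cosine features, then solve ridge regression for the output weights only."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.standard_normal((d, n_features)) * scale   # fixed, never trained
    b = rng.uniform(0, 2 * np.pi, n_features)
    phi = lambda Z: np.cos(Z @ W + b)
    P = phi(X)                                         # (n, n_features)
    A = P.T @ P + reg * np.eye(n_features)             # regularized normal equations
    c = np.linalg.solve(A, P.T @ y)
    return lambda Z: phi(Z) @ c

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, (100, 1))
y = np.sin(3 * X[:, 0])
model = random_feature_regression(X, y)
pred = model(X)
print(pred.shape)  # (100,)
```

Only the linear coefficients `c` are fit, which is what makes the model "non-intrusive": no gradients flow through the feature map, and training reduces to one linear solve.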

Model Reduction and Neural Networks for Parametric PDEs

A neural network approximation is developed which, in principle, is defined on infinite-dimensional spaces and, in practice, is robust to the dimension of the finite-dimensional approximations of these spaces required for computation.