Data-driven soliton mappings for integrable fractional nonlinear wave equations via deep learning with Fourier neural operator

Ming Zhong and Zhenya Yan
Abstract: In this paper, we first extend the Fourier neural operator (FNO) to discover the soliton mapping between two function spaces: one is the fractional-order index space {ε | ε ∈ (0, 1)} of the fractional integrable nonlinear wave equations, and the other is the solitonic solution function space. Specifically, the fractional nonlinear Schrödinger (fNLS), fractional Korteweg-de Vries (fKdV), fractional modified Korteweg-de Vries (fmKdV) and fractional sine-Gordon (fsineG…
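Since the learned map takes a scalar fractional index ε as input, the operator network needs ε lifted to a function on the spatial grid. A minimal sketch of one common encoding for scalar-parameter inputs (a constant ε channel stacked with the coordinate); the authors' exact preprocessing is not given here, so `make_input` is a hypothetical helper:

```python
import numpy as np

def make_input(eps, x_grid):
    """Hypothetical input encoding for the index-to-soliton map: the
    scalar fractional index eps is broadcast to a constant channel and
    stacked with the grid coordinate, giving a function-valued input an
    operator network can consume."""
    return np.stack([np.full_like(x_grid, eps), x_grid], axis=-1)

x = np.linspace(-10, 10, 256)           # spatial grid
print(make_input(0.7, x).shape)         # (256, 2): [eps channel, coordinate]
```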



Fourier Neural Operator for Parametric Partial Differential Equations

This work forms a new neural operator by parameterizing the integral kernel directly in Fourier space, yielding an expressive and efficient architecture that shows state-of-the-art performance compared to existing neural network methodologies.
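The core idea, parameterizing the kernel as a pointwise multiplication on truncated Fourier modes, can be sketched in a few lines. This is a NumPy illustration with hand-set weights, not the paper's trained implementation; `fourier_layer` and its arguments are illustrative names:

```python
import numpy as np

def fourier_layer(v, weights, n_modes):
    """One Fourier-space kernel multiplication: FFT -> keep low modes ->
    multiply by complex weights (learned, in practice) -> inverse FFT.
    v: (n,) real signal on a uniform grid; weights: (n_modes,) complex."""
    v_hat = np.fft.rfft(v)                          # spectral coefficients
    out_hat = np.zeros_like(v_hat)
    out_hat[:n_modes] = weights * v_hat[:n_modes]   # act on low modes only
    return np.fft.irfft(out_hat, n=len(v))          # back to physical space

# demo: a smooth signal whose energy sits entirely in the kept modes
x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
v = np.sin(x) + 0.5 * np.cos(3 * x)
out = fourier_layer(v, np.ones(8, dtype=complex), 8)
print(np.allclose(out, v))   # unit weights act as identity here -> True
```

Because the weights live on a fixed number of Fourier modes rather than on grid points, the same parameters apply at any discretization resolution.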

Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators

A new deep neural network called DeepONet can learn various mathematical operators with small generalization error, including explicit operators such as integrals and fractional Laplacians, as well as implicit operators that represent deterministic and stochastic differential equations.
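The DeepONet architecture pairs a branch net, which encodes the input function sampled at fixed sensor points, with a trunk net, which encodes the query location; the prediction is their dot product. A minimal untrained NumPy sketch (network sizes and helper names are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(params, x):
    """Tiny tanh MLP used for both the branch and the trunk nets."""
    (W1, b1), (W2, b2) = params
    return np.tanh(x @ W1 + b1) @ W2 + b2

def init(in_dim, hidden, out_dim):
    return [(rng.normal(size=(in_dim, hidden)), np.zeros(hidden)),
            (rng.normal(size=(hidden, out_dim)), np.zeros(out_dim))]

m, p = 20, 16                  # number of sensors, latent dimension
branch = init(m, 32, p)        # encodes the input function u at m sensors
trunk = init(1, 32, p)         # encodes the query location y

def deeponet(u_sensors, y):
    """G(u)(y) ~ <branch(u), trunk(y)> -- the DeepONet dot product."""
    return mlp(branch, u_sensors) @ mlp(trunk, y)

u = np.sin(np.linspace(0, 1, m))            # input function samples
print(deeponet(u, np.array([0.5])))         # one (untrained) prediction
```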

Physics-informed neural networks method in high-dimensional integrable systems

In this paper, the physics-informed neural networks (PINNs) are applied to high-dimensional systems to solve the [Formula: see text]-dimensional initial-boundary value problem with [Formula: see text]
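A PINN minimizes a composite loss: the PDE residual at interior points plus initial- and boundary-condition mismatches. A rough grid-based sketch of such a loss for a toy 1D equation u_t + u u_x = 0 (the equation, the finite-difference derivatives, and the initial/boundary data are all stand-ins; the paper's high-dimensional problems use autodiff-based residuals):

```python
import numpy as np

def pinn_loss(u, x, dx, dt):
    """Composite PINN-style loss on a space-time grid for the toy
    equation u_t + u u_x = 0: interior physics residual plus
    initial-condition and zero-Dirichlet boundary mismatches."""
    u_t = np.gradient(u, dt, axis=0)
    u_x = np.gradient(u, dx, axis=1)
    residual = u_t + u * u_x                   # physics residual
    ic = u[0] - np.sin(np.pi * x)              # assumed initial condition
    bc = np.concatenate([u[:, 0], u[:, -1]])   # assumed zero boundaries
    return (residual**2).mean() + (ic**2).mean() + (bc**2).mean()

x = np.linspace(-1, 1, 50)
t = np.linspace(0, 1, 40)
u = np.zeros((40, 50))                         # an untrained candidate field
print(pinn_loss(u, x, x[1] - x[0], t[1] - t[0]))
```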

The Random Feature Model for Input-Output Maps between Banach Spaces

The random feature model is viewed as a non-intrusive data-driven emulator, a mathematical framework for its interpretation is provided, and its ability to efficiently and accurately approximate the nonlinear parameter-to-solution maps of two prototypical PDEs arising in physical science and engineering applications is demonstrated.
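The random feature construction fixes random nonlinear features and trains only a linear readout, for example by least squares. A toy NumPy sketch emulating an assumed two-parameter-to-observable map (the target `f` is a smooth stand-in, not one of the paper's PDE solution maps):

```python
import numpy as np

rng = np.random.default_rng(2)

def random_features(x, W, b):
    """Random Fourier features: W and b are drawn once and frozen;
    only the linear readout below is fit to data."""
    return np.cos(x @ W + b)

# stand-in parameter-to-observable map to emulate (not a real PDE solve)
f = lambda x: np.sin(3 * x[:, :1]) * np.exp(-x[:, 1:])

X = rng.uniform(-1, 1, size=(200, 2))          # training parameters
W = rng.normal(size=(2, 256))                  # frozen random weights
b = rng.uniform(0, 2 * np.pi, 256)             # frozen random phases
Phi = random_features(X, W, b)
coef, *_ = np.linalg.lstsq(Phi, f(X), rcond=None)  # fit linear readout only

X_test = rng.uniform(-1, 1, size=(50, 2))
err = np.abs(random_features(X_test, W, b) @ coef - f(X_test)).max()
print(np.isfinite(err))
```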

Choose a Transformer: Fourier or Galerkin

It is demonstrated for the first time that the softmax normalization in scaled dot-product attention is sufficient but not necessary, and the newly proposed simple attention-based operator learner, the Galerkin Transformer, shows significant improvements in both training cost and evaluation accuracy over its softmax-normalized counterparts.
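The softmax-free "Galerkin-type" attention reassociates the matrix product so the cost is linear rather than quadratic in sequence length. A minimal sketch (the layer normalizations the paper applies to K and V are omitted for brevity):

```python
import numpy as np

def galerkin_attention(Q, K, V):
    """Softmax-free attention: compute Q @ (K^T V) / n instead of
    softmax(Q K^T) @ V. Associating the product this way costs
    O(n d^2) rather than O(n^2 d) for sequence length n."""
    n = K.shape[0]
    return Q @ (K.T @ V) / n

rng = np.random.default_rng(1)
n, d = 128, 16
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
print(galerkin_attention(Q, K, V).shape)   # (128, 16), same as softmax attention
```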

Neural Operator: Graph Kernel Network for Partial Differential Equations

The key innovation in this work is that a single set of network parameters, within a carefully designed network architecture, may be used to describe mappings between infinite-dimensional spaces and between different finite-dimensional approximations of those spaces.
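The resolution independence comes from parameterizing a kernel function κ(x, y) rather than a grid-sized weight matrix: the same κ defines the layer at any discretization. A NumPy sketch with a hand-picked stand-in kernel (in the paper, κ is itself a small neural network):

```python
import numpy as np

def kernel_integral(x_grid, v, kappa):
    """One neural-operator layer as a kernel integral,
    (K v)(x_i) = (1/n) * sum_j kappa(x_i, x_j) v(x_j).
    The kernel is a function of coordinates, so the SAME parameters
    apply to any grid resolution n."""
    n = len(x_grid)
    K = kappa(x_grid[:, None], x_grid[None, :])   # (n, n) kernel matrix
    return K @ v / n

kappa = lambda x, y: np.exp(-np.abs(x - y))       # stand-in for a learned kernel
for n in (32, 64):                                # same kernel, two resolutions
    x = np.linspace(0, 1, n)
    out = kernel_integral(x, np.sin(2 * np.pi * x), kappa)
    print(n, out.shape)
```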