# Deep Nonparametric Estimation of Operators between Infinite Dimensional Spaces

@article{Liu2022DeepNE, title={Deep Nonparametric Estimation of Operators between Infinite Dimensional Spaces}, author={Hao Liu and Haizhao Yang and Minshuo Chen and Tuo Zhao and Wenjing Liao}, journal={ArXiv}, year={2022}, volume={abs/2201.00217} }

Learning operators between infinite-dimensional spaces is an important learning task arising in a wide range of applications in machine learning, imaging science, and mathematical modeling and simulation. This paper studies the nonparametric estimation of Lipschitz operators using deep neural networks. Non-asymptotic upper bounds are derived for the generalization error of the empirical risk minimizer over a properly chosen network class. Under the assumption that the target operator exhibits a low…
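The estimator studied in the abstract is an empirical risk minimizer over a neural network class, applied to discretized input/output function pairs. The sketch below is an illustrative toy, not the paper's construction: the operator (an antiderivative), the grid size, and the one-hidden-layer ReLU network trained by plain gradient descent are all assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretize [0, 1] on a grid of m points; functions become vectors in R^m.
m = 32
x = np.linspace(0.0, 1.0, m)

def sample_inputs(n):
    """Random smooth input functions u(x) = a*sin(pi x) + b*cos(pi x)."""
    a, b = rng.normal(size=(2, n, 1))
    return a * np.sin(np.pi * x) + b * np.cos(np.pi * x)

def target_operator(U):
    """Toy Lipschitz operator: antiderivative of u, via a cumulative sum."""
    return np.cumsum(U, axis=1) / m

# Training data: n pairs (u_i, S(u_i)) sampled on the grid.
n = 500
U = sample_inputs(n)
V = target_operator(U)

# One-hidden-layer ReLU network from R^m to R^m; empirical risk minimization
# by full-batch gradient descent on the mean squared error.
h = 64
W1 = rng.normal(scale=0.1, size=(m, h)); b1 = np.zeros(h)
W2 = rng.normal(scale=0.1, size=(h, m)); b2 = np.zeros(m)
lr = 0.05

for _ in range(3000):
    Z = U @ W1 + b1
    A = np.maximum(Z, 0.0)          # ReLU activations
    P = A @ W2 + b2                 # predicted output functions
    G = 2.0 * (P - V) / n           # gradient of the empirical risk wrt P
    gW2 = A.T @ G; gb2 = G.sum(axis=0)
    GA = (G @ W2.T) * (Z > 0)       # backprop through ReLU
    gW1 = U.T @ GA; gb1 = GA.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# Generalization check on fresh input functions.
Ut = sample_inputs(100)
Vt = target_operator(Ut)
Pt = np.maximum(Ut @ W1 + b1, 0.0) @ W2 + b2
test_err = np.mean((Pt - Vt) ** 2)
print(f"empirical test risk: {test_err:.4f}")
```

Because the sampled inputs here lie in a low-dimensional family, the network generalizes well from few samples, loosely mirroring the low-dimensional-structure assumption the abstract alludes to.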

## 6 Citations

Neural and gpc operator surrogates: construction and expression rate bounds

- Mathematics, ArXiv
- 2022

Approximation rates are analyzed for deep surrogates of maps between infinite-dimensional function spaces, arising e.g. as data-to-solution maps of linear and nonlinear partial differential equations.…

Metric Hypertransformers are Universal Adapted Maps

- Mathematics, Computer Science, ArXiv
- 2022

The MHT models introduced here are able to approximate a broad range of stochastic processes’ kernels, including solutions to SDEs, many processes with arbitrarily long memory, and functions mapping sequential data to sequences of forward rate curves.

Local approximation of operators

- Mathematics, Computer Science, ArXiv
- 2022

Many applications, such as system identification, classification of time series, direct and inverse problems in partial differential equations, and uncertainty quantification lead to the question of…

Benefits of Overparameterized Convolutional Residual Networks: Function Approximation under Smoothness Constraint

- Computer Science, ICML
- 2022

This work proves that large ConvResNets can not only approximate a target function in terms of function value but also exhibit sufficient first-order smoothness, and extends the theory to approximating functions supported on a low-dimensional manifold.

IAE-Net: Integral Autoencoders for Discretization-Invariant Learning

- Computer Science, ArXiv
- 2022

A novel deep learning framework based on integral autoencoders (IAE-Net) enables discretization-invariant learning, achieving state-of-the-art performance in existing applications and creating a wide range of new applications where existing methods fail.

Approximation of Functionals by Neural Network without Curse of Dimensionality

- Computer Science, Mathematics, ArXiv
- 2022

A neural network is established to approximate functionals, which are maps from infinite-dimensional spaces to finite-dimensional spaces, and a Barron spectral space of functionals is introduced.

## References

Showing 1–10 of 103 references

Convergence Rates for Learning Linear Operators from Noisy Data

- Mathematics, ArXiv
- 2021

This work establishes posterior contraction rates with respect to a family of Bochner norms as the amount of data tends to infinity, derives related lower bounds on the estimation error, and connects the posterior consistency results to nonparametric learning theory.

Deep Nonparametric Regression on Approximately Low-dimensional Manifolds

- Computer Science
- 2021

This paper derives non-asymptotic upper bounds for the prediction error of the empirical risk minimizer for feedforward deep neural regression and proposes a notion of network relative efficiency between two types of neural networks, which provides a quantitative measure for evaluating the relative merits of different network structures.

Error estimates for DeepOnets: A deep learning framework in infinite dimensions

- Computer Science, Mathematics, Transactions of Mathematics and Its Applications
- 2022

It is rigorously proved that DeepONets can break the curse of dimensionality; almost-optimal error bounds are derived for very general affine reconstructors and random sensor locations, along with bounds on the generalization error obtained via covering-number arguments.

Efficient Approximation of Deep ReLU Networks for Functions on Low Dimensional Manifolds

- Computer Science, NeurIPS
- 2019

This paper proves that neural networks can efficiently approximate functions supported on low dimensional manifolds, with an exponent depending on the intrinsic dimension of the data and the smoothness of the function.

Adaptive Approximation and Generalization of Deep Neural Network with Intrinsic Dimensionality

- Computer Science, J. Mach. Learn. Res.
- 2020

This study derives bounds for an approximation error and a generalization error regarding DNNs with intrinsically low dimensional covariates and proves that an intrinsic low dimensionality of covariates is the main factor that determines the performance of deep neural networks.

Besov Function Approximation and Binary Classification on Low-Dimensional Manifolds Using Convolutional Residual Networks

- Computer Science, ICML
- 2021

This work establishes theoretical guarantees of convolutional residual networks (ConvResNet) in terms of function approximation and statistical estimation for binary classification, and proves that if the network architecture is properly chosen, ConvResNets can approximate Besov functions on manifolds with arbitrary accuracy.

Deep ReLU network approximation of functions on a manifold

- Computer Science, Mathematics, ArXiv
- 2019

This work studies a regression problem with inputs on a $d^*$-dimensional manifold that is embedded into a space with potentially much larger ambient dimension, and derives statistical convergence rates for the estimator minimizing the empirical risk over all possible choices of bounded network parameters.

Adaptivity of deep ReLU network for learning in Besov and mixed smooth Besov spaces: optimal rate and curse of dimensionality

- Computer Science, ICLR
- 2019

A new approximation and estimation error analysis of deep learning with the ReLU activation for functions in a Besov space and its variant with mixed smoothness shows that deep learning has higher adaptivity to the spatial inhomogeneity of the target function than other estimators such as linear ones.

Doubly Robust Off-Policy Learning on Low-Dimensional Manifolds by Deep Neural Networks

- Computer Science, ArXiv
- 2020

The theory shows that deep neural networks are adaptive to the low-dimensional geometric structures of the covariates, and partially explains the success of deep learning for causal inference.

Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators

- Computer Science, Nat. Mach. Intell.
- 2021

A new deep neural network, DeepONet, can learn various mathematical operators with small generalization error, including explicit operators such as integrals and fractional Laplacians as well as implicit operators that represent deterministic and stochastic differential equations.
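The DeepONet reference above combines a branch net acting on sensor values of the input function u with a trunk net acting on query locations y, so that G(u)(y) ≈ Σ_k b_k(u) · t_k(y). The following minimal numpy sketch shows only this branch–trunk structure; the layer sizes and (random, untrained) weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp(params, X):
    """Forward pass of a small ReLU MLP given a list of (W, b) layers."""
    for W, b in params[:-1]:
        X = np.maximum(X @ W + b, 0.0)
    W, b = params[-1]
    return X @ W + b

def init(sizes):
    """Random (untrained) weights for an MLP with the given layer sizes."""
    return [(rng.normal(scale=0.3, size=(a, b)), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

m, p = 50, 20               # sensor count, number of basis terms
branch = init([m, 64, p])   # encodes the input function u(x_1), ..., u(x_m)
trunk = init([1, 64, p])    # encodes query locations y

def deeponet(u_sensors, y):
    """G(u)(y) ~= sum_k branch_k(u) * trunk_k(y)."""
    B = mlp(branch, u_sensors)          # (n_funcs, p)
    T = mlp(trunk, y.reshape(-1, 1))    # (n_query, p)
    return B @ T.T                      # (n_funcs, n_query)

# One input function sampled at m sensors, evaluated at 7 query points.
xs = np.linspace(0, 1, m)
u = np.sin(2 * np.pi * xs)[None, :]
y = np.linspace(0, 1, 7)
out = deeponet(u, y)
print(out.shape)   # (1, 7)
```

The trunk acts as a learned basis of functions of y and the branch as input-dependent coefficients, which is what lets a trained DeepONet evaluate G(u) at arbitrary query points.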