Approximation in shift-invariant spaces with deep ReLU neural networks

@article{Yang2022ApproximationIS,
  title={Approximation in shift-invariant spaces with deep ReLU neural networks},
  author={Yunfei Yang and Yang Wang},
  journal={Neural Networks},
  year={2022},
  volume={153},
  pages={269-281}
}
  • Yunfei Yang, Yang Wang
  • Published 25 May 2020
  • Computer Science, Mathematics
  • Neural Networks

Citations

Approximation bounds for norm constrained neural networks with applications to regression and GANs
TLDR
Upper and lower bounds on the approximation error of these networks for smooth function classes are proved, and it is shown that GANs can achieve the optimal rate of learning probability distributions when the discriminator is a properly chosen norm-constrained neural network.
Deep Network Approximation With Accuracy Independent of Number of Neurons
TLDR
It is proved that σ-activated networks with width 36d(2d + 1) and depth 11 can approximate any continuous function on a d-dimensional hypercube within an arbitrarily small error.
Deep Network Approximation: Achieving Arbitrary Accuracy with Fixed Number of Neurons
TLDR
It is proved that σ-activated networks with width 36d(2d + 1) and depth 11 can approximate any continuous function on a d-dimensional hypercube within an arbitrarily small error.
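As a quick arithmetic illustration of how the fixed architecture in the two entries above scales with the input dimension, here is a minimal sketch that only evaluates the stated width formula 36d(2d + 1); the activation σ and the actual network construction are in the cited papers and are not reproduced here.

```python
def euaf_width(d: int) -> int:
    """Width 36*d*(2*d + 1) of the fixed-size sigma-activated network
    that the cited result pairs with depth 11."""
    return 36 * d * (2 * d + 1)

# The width grows only quadratically in the input dimension d.
for d in (1, 2, 10, 100):
    print(d, euaf_width(d))  # d=2 -> 360, d=10 -> 7560, d=100 -> 723600
```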
Solving PDEs on Unknown Manifolds with Machine Learning
TLDR
A mesh-free computational framework and machine learning theory are developed for solving elliptic PDEs on unknown manifolds, identified with point clouds, based on diffusion maps (DM) and deep learning; the proposed NN solver can robustly generalize the PDE solution on new data points with generalization errors that are almost identical to the training errors.
Deep Network with Approximation Error Being Reciprocal of Width to Power of Square Root of Depth
TLDR
It is shown that Floor-ReLU networks with width max{d, 5N+13} and depth 64dL+3 can uniformly approximate a Hölder function f on [0,1]^d with an approximation error 3λd^{α/2}N^{−α√L}, where α∈(0, 1] and λ are the Hölder order and constant, respectively.
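To make the scaling of that bound concrete, here is a minimal numerical sketch that simply evaluates the stated error expression 3λd^{α/2}N^{−α√L}; the Floor-ReLU construction itself and the roles of the width/depth parameters N and L are from the cited paper, and the example values below are arbitrary.

```python
import math

def floor_relu_error_bound(d: int, N: int, L: int, alpha: float = 1.0, lam: float = 1.0) -> float:
    """Evaluate the stated bound 3 * lam * d**(alpha/2) * N**(-alpha*sqrt(L))
    for a Hoelder(alpha, lam) target function on [0,1]^d."""
    return 3.0 * lam * d ** (alpha / 2) * N ** (-alpha * math.sqrt(L))

# Arbitrary example: d=10, N=2, L=16 corresponds to network width max(10, 5*2+13) = 23
# and depth 64*10*16+3 = 10243, and gives a bound of roughly 0.59 for a 1-Lipschitz target.
print(floor_relu_error_bound(d=10, N=2, L=16))
```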
Two-Layer Neural Networks for Partial Differential Equations: Optimization and Generalization Theory
TLDR
This paper shows that gradient descent can identify a global minimizer of the optimization problem with a well-controlled generalization error in the case of two-layer neural networks in the over-parameterization regime.
Deep Network Approximation with Discrepancy Being Reciprocal of Width to Power of Depth
TLDR
The new network overcomes the curse of dimensionality in approximation power, since the approximation order is essentially $\sqrt{d}$ times a function of $N$ and $L$ that is independent of $d$.

References

SHOWING 1-10 OF 62 REFERENCES
Deep Network Approximation for Smooth Functions
TLDR
An optimal approximation error characterization of deep ReLU networks for smooth functions is established in terms of both width and depth simultaneously; the characterization is non-asymptotic in the sense that it is valid for arbitrary width and depth specified by $N\in\mathbb{N}^+$ and $L\in\mathbb{N}^+$, respectively.
Bounding the Vapnik-Chervonenkis dimension of concept classes parameterized by real numbers
TLDR
The results show that for two general kinds of concept class the V-C dimension is polynomially bounded in the number of real numbers used to define a problem instance, and that in the continuous case, as in the discrete, the real barrier to efficient learning in the Occam sense is complexity-theoretic and not information-theoretic.
Neural Network Learning - Theoretical Foundations
TLDR
The authors explain the role of scale-sensitive versions of the Vapnik-Chervonenkis dimension in large margin classification and in real prediction, and discuss the computational complexity of neural network learning.
Nearly-tight VC-dimension bounds for piecewise linear neural networks
TLDR
This work proves new upper and lower bounds on the VC-dimension of deep neural networks with the ReLU activation function, and proves a tight bound $\Theta(W U)$ on the VC-dimension.
Constructive Approximation
TLDR
This work operates on [-1, 1] and obtains Markov-type estimates for the derivatives of polynomials from a rather wide family of classes of constrained polynomials; the resulting estimates turn out to be sharp.
Memory capacity of neural networks with threshold and ReLU activations
TLDR
Addressing a 1988 open question of Baum, it is proved that this memorization phenomenon (a mildly over-parametrized network can fit its training data exactly) holds for general multilayered perceptrons, i.e. neural networks with threshold activation functions, or with any mix of threshold and ReLU activations.
The phase diagram of approximation rates for deep neural networks
TLDR
It is proved that using both sine and ReLU activations theoretically leads to very fast, nearly exponential approximation rates, thanks to the emerging capability of the network to implement efficient lookup operations.
Approximation of Distribution Spaces by Means of Kernel Operators
We investigate conditions on kernel operators in order to provide prescribed orders of approximation in the Triebel-Lizorkin spaces. Our approach is based on the study of the boundedness of integral …
Deep Neural Network Approximation Theory
TLDR
Deep networks provide exponential approximation accuracy for the multiplication operation, polynomials, sinusoidal functions, and certain smooth functions, i.e., the approximation error decays exponentially in the number of nonzero weights in the network.
Optimal approximation of continuous functions by very deep ReLU networks
TLDR
It is proved that constant-width fully-connected networks of depth $L\sim W$ provide the fastest possible approximation rate $\|f-\widetilde f\|_\infty = O(\omega_f(O(W^{-2/\nu})))$ that cannot be achieved with less deep networks.
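As a minimal sketch of what that rate means in a simple special case, assume (purely for illustration) a C-Lipschitz target, so that the modulus of continuity satisfies ω_f(t) ≤ C·t and the cited rate reduces to an envelope of order W^{−2/ν}; here W denotes the number of weights and ν the input dimension, as in the cited statement, and the constant is a placeholder.

```python
def lipschitz_rate_envelope(W: int, nu: int, C: float = 1.0) -> float:
    """Approximation-error envelope C * W**(-2/nu) obtained from the cited rate
    omega_f(O(W**(-2/nu))) when the target f is assumed C-Lipschitz."""
    return C * W ** (-2.0 / nu)

# In dimension nu = 2 the envelope decays like 1/W as the weight budget grows.
for W in (10**3, 10**4, 10**5):
    print(W, lipschitz_rate_envelope(W, nu=2))
```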