Approximation bounds for smooth functions in C(R^d) by neural and mixture networks

@article{Maiorov1998ApproximationBF,
  title={Approximation bounds for smooth functions in C(R^d) by neural and mixture networks},
  author={Vitaly Maiorov and Ron Meir},
  journal={IEEE Transactions on Neural Networks},
  year={1998},
  volume={9},
  number={5},
  pages={969--978}
}
  • V. Maiorov, R. Meir
  • Published 1 September 1998
  • Computer Science, Mathematics
  • IEEE Transactions on Neural Networks
We consider the approximation of smooth multivariate functions in C(R^d) by feedforward neural networks with a single hidden layer of nonlinear ridge functions. Under certain assumptions on the smoothness of the functions being approximated and on the activation functions in the neural network, we present upper bounds on the degree of approximation achieved over the domain R^d, thereby generalizing available results for compact domains. We extend the approximation results to the so-called…
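As a sketch of the objects involved (the notation below is our own paraphrase, not taken verbatim from the truncated abstract): the approximants are single-hidden-layer ridge-function networks, and the quantity bounded is the degree of approximation in the uniform norm.

```latex
% A single-hidden-layer network with n ridge units (e.g., sigmoidal sigma),
% and the degree of approximation of a target f from this class:
\[
  N_n(x) \;=\; \sum_{i=1}^{n} c_i\, \sigma\!\bigl( w_i \cdot x + b_i \bigr),
  \qquad x \in \mathbb{R}^d,
\]
\[
  E_n(f) \;=\; \inf_{c_i,\, w_i,\, b_i}\; \sup_{x \in \mathbb{R}^d}
  \bigl| f(x) - N_n(x) \bigr| .
\]
```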
Approximation Bounds by Neural Networks in L^p_ω
TLDR
Upper bounds on the degree of approximation are obtained for the class of functions considered in this paper: approximation of multidimensional functions by feedforward neural networks with one hidden layer of sigmoidal units and a linear output.
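The network class described here, one hidden layer of sigmoidal units with a linear output, is easy to demonstrate numerically. The sketch below is illustrative only (random inner weights with a least-squares fit of the output layer, not the constructive approximants of the papers listed); the weight scales and unit counts are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_one_hidden_layer(f, n_units, x):
    """Least-squares fit of the linear output layer for a single-hidden-layer
    sigmoidal network whose inner weights and biases are drawn at random."""
    w = rng.normal(scale=3.0, size=n_units)   # inner weights (fixed, not trained)
    b = rng.uniform(-3.0, 3.0, size=n_units)  # inner biases
    H = sigmoid(np.outer(x, w) + b)           # hidden-layer activations, shape (m, n)
    c, *_ = np.linalg.lstsq(H, f(x), rcond=None)  # linear output weights
    return H @ c

x = np.linspace(-np.pi, np.pi, 400)
for n in (5, 20, 80):
    err = np.max(np.abs(fit_one_hidden_layer(np.sin, n, x) - np.sin(x)))
    print(f"n = {n:3d} hidden units, sup-norm error = {err:.2e}")
```

With more hidden units the sup-norm error over the grid drops sharply, which is the qualitative behavior that the degree-of-approximation bounds quantify.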
Approximation Bound of Mixture Networks in L^p_ω Spaces
TLDR
It is shown that, under a very mild condition on the activation functions, mixture neural networks have the same approximation order as normal feedforward sigmoidal neural networks.
Computing the Approximation Error for Neural Networks with Weights Varying on Fixed Directions
  • V. Ismailov
  • Computer Science, Mathematics
    Numerical Functional Analysis and Optimization
  • 2019
We obtain a sharp lower bound estimate for the error of approximation of a continuous function by single-hidden-layer neural networks with a continuous activation function and weights varying on fixed directions.
The errors of simultaneous approximation of multivariate functions by neural networks
Estimation of Approximating Rate for Neural Networks in L^p_w Spaces
TLDR
An upper bound on the degree of approximation is obtained for the class of Sobolev functions by adopting a set of orthogonal polynomial basis functions, under certain assumptions on the activation functions of the neural network.
Pointwise Approximation for Neural Networks
It is shown in this paper by a constructive method that for any f ∈ C^m[a,b], the function and its derivatives up to order m can be simultaneously approximated by a neural network with one hidden layer…
Advances in Neural Networks – ISNN 2004
Essential rate for approximation by spherical neural networks
Characterization of Degree of Approximation for Neural Networks with One Hidden Layer
TLDR
By establishing both upper and lower bound estimates on the degree of approximation, the essential approximation ability of a class of FNNs is characterized in terms of the modulus of smoothness of the functions to be approximated.
The errors of approximation for feedforward neural networks in the Lp metric

References

Showing 1-10 of 18 references
Approximation capability in C(R̄^n) by multilayer feedforward networks and related problems
TLDR
It is found that the boundedness condition on the sigmoidal function plays an essential role in the approximation, in contrast to continuity or monotonicity conditions.
Neural Networks for Optimal Approximation of Smooth and Analytic Functions
  • H. Mhaskar
  • Mathematics, Computer Science
    Neural Computation
  • 1996
We prove that neural networks with a single hidden layer are capable of providing an optimal order of approximation for functions assumed to possess a given number of derivatives, if the activation…
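Mhaskar's optimal order can be sketched as follows (a paraphrase under the usual assumptions: f has r bounded derivatives in d variables on a compact set, and the activation is sufficiently smooth and non-polynomial):

```latex
% Optimal degree of approximation by n-unit single-hidden-layer networks N_n:
\[
  \inf_{N_n} \; \| f - N_n \|_{\infty} \;\le\; C\, n^{-r/d},
\]
% and, in the optimality sense of the paper, the order n^{-r/d} cannot be improved.
```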
Degree of Approximation Results for Feedforward Networks Approximating Unknown Mappings and Their Derivatives
TLDR
This work extends Barron's results to feedforward networks with possibly nonsigmoid activation functions approximating mappings and their derivatives simultaneously, showing that the approximation error decreases at rates as fast as n^{-1/2}, where n is the number of hidden units.
Approximation by superpositions of a sigmoidal function
  • G. Cybenko
  • Computer Science
    Math. Control. Signals Syst.
  • 1989
In this paper we demonstrate that finite linear combinations of compositions of a fixed, univariate function and a set of affine functionals can uniformly approximate any continuous function of n real variables…
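The superpositions in question have the form below; Cybenko's theorem states they are dense in C(I_n), the continuous functions on the unit cube I_n = [0,1]^n, when σ is a continuous sigmoidal function:

```latex
\[
  G(x) \;=\; \sum_{j=1}^{N} \alpha_j\, \sigma\!\bigl( y_j^{\mathsf T} x + \theta_j \bigr),
  \qquad
  \sup_{x \in I_n} \bigl| f(x) - G(x) \bigr| < \varepsilon
  \quad \text{for any } f \in C(I_n),\ \varepsilon > 0 .
\]
```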
Universal approximation bounds for superpositions of a sigmoidal function
  • A. Barron
  • Computer Science
    IEEE Trans. Inf. Theory
  • 1993
TLDR
The approximation rate and the parsimony of the parameterization of the networks are shown to be advantageous in high-dimensional settings, and the integrated squared approximation error cannot be made smaller than order (1/n)^{2/d} uniformly for functions satisfying the same smoothness assumption.
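Barron's bounds can be sketched as follows (our paraphrase: C_f is the first absolute moment of the Fourier magnitude distribution of f, and the L^2 error is taken over a ball B_r of radius r):

```latex
% Upper bound achievable by n-unit sigmoidal networks f_n:
\[
  \| f - f_n \|_{L^2(\mu,\, B_r)}^{2} \;\le\; \frac{(2 r C_f)^2}{n},
  \qquad
  C_f \;=\; \int_{\mathbb{R}^d} |\omega|\, \bigl| \hat f(\omega) \bigr| \, d\omega,
\]
% while, for linear (fixed-basis) n-term approximation, the error cannot be made
% smaller than order (1/n)^{2/d} uniformly over the same smoothness class.
```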
Error Bounds for Functional Approximation and Estimation Using Mixtures of Experts
TLDR
It is observed that the MEM is at least as powerful as a class of neural networks, in a sense that will be made precise, and upper bounds on the approximation error are established for a wide class of target functions.
Multilayer Feedforward Networks with a Non-Polynomial Activation Function Can Approximate Any Function
Accuracy analysis for wavelet approximations
TLDR
Unlike neural network training, this estimation procedure does not rely on stochastic gradient type techniques such as the celebrated "backpropagation" and it completely avoids the problem of poor convergence or undesirable local minima.
Risk bounds for model selection via penalization
TLDR
It is shown that the quadratic risk of the minimum penalized empirical contrast estimator is bounded by an index of the accuracy of the sieve, which quantifies the trade-off among the candidate models between the approximation error and parameter dimension relative to sample size.
On Best Approximation by Ridge Functions
We consider best approximation of some function classes by the manifold M_n consisting of sums of n arbitrary ridge functions. It is proved that the deviation of the Sobolev class W_2^{r,d} from the manifold M_n…