Approximation bounds for smooth functions in C(R^d) by neural and mixture networks
@article{Maiorov1998ApproximationBF,
  title={Approximation bounds for smooth functions in C(R^d) by neural and mixture networks},
  author={Vitaly Maiorov and Ron Meir},
  journal={IEEE Transactions on Neural Networks},
  volume={9},
  number={5},
  pages={969--978},
  year={1998}
}

We consider the approximation of smooth multivariate functions in C(ℝ^d) by feedforward neural networks with a single hidden layer of nonlinear ridge functions. Under certain assumptions on the smoothness of the functions being approximated and on the activation functions in the neural network, we present upper bounds on the degree of approximation achieved over the domain ℝ^d, thereby generalizing available results for compact domains. We extend the approximation results to the so-called…
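The "degree of approximation" bounds described in the abstract can be written schematically. The display below is an illustrative form under generic smoothness assumptions, not the paper's exact theorem statement; σ denotes the activation function, and the exponent α depends on the assumed smoothness and the dimension d.

```latex
% Illustrative (not verbatim) form of a degree-of-approximation bound
% for a single-hidden-layer network with n ridge units:
E_n(f) \;=\; \inf_{c_i,\,w_i,\,b_i}
  \Bigl\| f - \sum_{i=1}^{n} c_i\,\sigma(w_i \cdot x + b_i) \Bigr\|_{C(\mathbb{R}^d)}
  \;\le\; C\, n^{-\alpha}.
```

For Sobolev-type classes of functions with r derivatives in dimension d, rates of the form α = r/d are typical in this literature.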
69 Citations
Approximation Bounds by Neural Networks in L^p_ω
- Computer Science, Mathematics · ISNN
- 2004
Upper bounds on the degree of approximation are obtained for the class of functions considered: multidimensional functions approximated by feedforward neural networks with one hidden layer of sigmoidal units and a linear output.
Approximation Bound of Mixture Networks in L^p_ω Spaces
- Computer Science · ISNN
- 2006
It is shown that, under a very mild condition on the activation functions, mixture neural networks achieve the same approximation order as standard feedforward sigmoidal neural networks.
Computing the Approximation Error for Neural Networks with Weights Varying on Fixed Directions
- Computer Science, Mathematics · Numerical Functional Analysis and Optimization
- 2019
We obtain a sharp lower bound estimate for the approximation error of a continuous function by single-hidden-layer neural networks with a continuous activation function and weights varying…
The errors of simultaneous approximation of multivariate functions by neural networks
- Mathematics · Comput. Math. Appl.
- 2011
Estimation of Approximating Rate for Neural Networks in L_w^p Spaces
- Computer Science · J. Appl. Math.
- 2012
An upper bound on the degree of approximation is obtained for the class of Sobolev functions by adopting a set of orthogonal polynomial basis functions, under certain assumptions on the governing activation functions of the neural network.
Pointwise Approximation for Neural Networks
- Computer Science · ISNN
- 2005
It is shown in this paper, by a constructive method, that for any f ∈ C^(m)[a,b], the function and its derivatives up to order m can be simultaneously approximated by a neural network with one hidden layer in…
Advances in Neural Networks – ISNN 2004
- Computer Science, Mathematics · Lecture Notes in Computer Science
- 2004
Essential rate for approximation by spherical neural networks
- Computer Science, Mathematics · Neural Networks
- 2011
Characterization of Degree of Approximation for Neural Networks with One Hidden Layer
- Computer Science · 2006 International Conference on Machine Learning and Cybernetics
- 2006
By establishing both upper and lower bound estimates on the degree of approximation, the essential approximation ability of a class of FNNs is clarified in terms of the modulus of smoothness of the functions to be approximated.
The errors of approximation for feedforward neural networks in the Lp metric
- Computer Science · Math. Comput. Model.
- 2009