Approximation by superpositions of a sigmoidal function

  • G. Cybenko
  • Published 1 December 1989
  • Computer Science
  • Mathematics of Control, Signals and Systems
In this paper we demonstrate that finite linear combinations of compositions of a fixed, univariate function and a set of affine functionals can uniformly approximate any continuous function of n real variables with support in the unit hypercube; only mild conditions are imposed on the univariate function. Our results settle an open question about representability in the class of single hidden layer neural networks. In particular, we show that arbitrary decision regions can be arbitrarily well…
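The theorem above can be illustrated numerically: a finite linear combination of sigmoids composed with affine maps, fit by least squares, closely matches a continuous target on the unit interval. This is a minimal sketch, not Cybenko's construction; the target function, the number of units, and the random affine weights are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = lambda t: 1.0 / (1.0 + np.exp(-t))  # the fixed univariate sigmoid

# Illustrative continuous target on the unit interval.
f = lambda x: np.cos(2 * np.pi * x)

# Random affine functionals w*x + b fed to the sigmoid; only the
# outer linear coefficients c are fit (by ordinary least squares).
n_units = 50
w = rng.normal(scale=10.0, size=n_units)
b = rng.normal(scale=10.0, size=n_units)

x = np.linspace(0.0, 1.0, 200)
Phi = sigma(np.outer(x, w) + b)              # (200, n_units) hidden activations
c, *_ = np.linalg.lstsq(Phi, f(x), rcond=None)

max_err = np.max(np.abs(Phi @ c - f(x)))
```

With enough units the uniform error on the grid can be driven arbitrarily small, mirroring the density statement in the abstract.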
A Single Hidden Layer Feedforward Network with Only One Neuron in the Hidden Layer Can Approximate Any Univariate Function
This work algorithmically constructs a smooth, sigmoidal, almost monotone activation function that approximates an arbitrary continuous function to any degree of accuracy.
Approximation properties of a multilayered feedforward artificial neural network
  • H. Mhaskar
  • Mathematics, Computer Science
    Adv. Comput. Math.
  • 1993
We prove that an artificial neural network with multiple hidden layers and a kth-order sigmoidal response function can be used to approximate any continuous function on any compact subset of a
Approximation of functions on a compact set by finite sums of a sigmoid function without scaling
We prove that neural networks with a single hidden layer are capable of providing an optimal order of approximation for functions assumed to possess a given number of derivatives, if the activation
Approximation of functions with one-bit neural networks
It is shown that any smooth multivariate function can be arbitrarily well approximated by an appropriate coarsely quantized neural network, and a quantitative approximation rate is provided.
Approximation of polynomials by a neural network having rather a small number of units
It is proposed that neural networks generally realize approximation by surface-fitting methods, showing the versatility of a linear sum of a few basis functions.
Universal Approximation Using Radial-Basis-Function Networks
It is proved that RBF networks having one hidden layer are capable of universal approximation, and a certain class of RBF networks with the same smoothing factor in each kernel node is broad enough for universal approximation.
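The RBF case can be sketched the same way: Gaussian kernel nodes sharing a single smoothing factor, with the output weights fit by least squares. The target function, center placement, and smoothing value are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# One shared smoothing factor across all kernel nodes, as in the
# restricted class the theorem covers (value chosen for illustration).
smoothing = 0.1
target = lambda x: np.sin(3 * x) + 0.5 * x

centers = np.linspace(-1.0, 1.0, 30)          # hidden kernel nodes
x = np.linspace(-1.0, 1.0, 200)

# Gaussian RBF design matrix: one column per center.
Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * smoothing ** 2))
w, *_ = np.linalg.lstsq(Phi, target(x), rcond=None)

max_err = np.max(np.abs(Phi @ w - target(x)))
```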
General approximation theorem on feedforward networks
  • G. Huang, H. A. Babri
  • Computer Science
    Proceedings of ICICS, 1997 International Conference on Information, Communications and Signal Processing. Theme: Trends in Information Systems Engineering and Wireless Multimedia Communications (Cat.
  • 1997
Standard feedforward neural networks with as few as a single hidden layer and arbitrary bounded nonlinear activation functions which have two unequal limits at infinity can uniformly approximate arbitrary continuous mappings on R^n with any precision.
In a recent paper certain approximations to continuous nonlinear functionals defined on an L^p space (1 < p < ∞) are shown to exist. These approximations may be realized by sigmoidal neural networks


Constructive approximations for neural networks by sigmoidal functions
  • L. Jones
  • Computer Science, Mathematics
    Proc. IEEE
  • 1990
G. Cybenko (1989) has demonstrated the existence of uniform approximations to any continuous f provided that sigma is continuous, relying on the Hahn-Banach theorem and the dual characterization of C(I^n).
Construction of neural nets using the radon transform
The authors present a method for constructing a feedforward neural net implementing an arbitrarily good approximation to any L^2 function over (-1, 1)^n. The net uses n input nodes, a
On Nonlinear Functions of Linear Combinations
Projection pursuit algorithms approximate a function of p variables by a sum of nonlinear functions of linear combinations: \[ (1)\qquad f\left( x_1, \cdots, x_p \right) \doteq \sum_{i = 1}^{n} g_i \left( \sum_{j = 1}^{p} a_{ij} x_j \right) \]
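A concrete instance of this form: the product x1·x2 is exactly a sum of n = 2 nonlinear functions of linear combinations, via the polarization identity x1·x2 = ((x1 + x2)² − (x1 − x2)²)/4. The check below is illustrative and not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=(2, 1000))

# Two ridge terms: g1(t) = t^2/4 on direction (1, 1),
#                  g2(t) = -t^2/4 on direction (1, -1).
approx = (x1 + x2) ** 2 / 4.0 - (x1 - x2) ** 2 / 4.0

max_err = np.max(np.abs(approx - x1 * x2))
```

Here the representation is exact (up to floating-point rounding), showing that even a small n can capture genuinely multivariate interactions.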
Classification capabilities of two-layer neural nets
The authors show that two-layer nets are also capable of forming disconnected decision regions, and derive an expression for the number of cells in the input space that must be grouped together to form the decision regions.
What Size Net Gives Valid Generalization?
It is shown that if m ≥ O((W/ε) log(N/ε)) random examples can be loaded on a feedforward network of linear threshold functions with N nodes and W weights, so that at least a fraction 1 − ε/2 of the examples are correctly classified, then one has confidence approaching certainty that the network will correctly classify a fraction 1 − ε of future test examples drawn from the same distribution.
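To get a feel for the scale of this sample bound, one can plug in hypothetical numbers; W, N, and ε below are illustrative, and the hidden constant factor in the O(·) is omitted.

```python
import math

# Hypothetical network: W weights, N threshold nodes, target error eps.
W, N, eps = 10_000, 100, 0.1

# Training-set size suggested by the (W/eps) * log(N/eps) bound,
# constant factor omitted.
m = (W / eps) * math.log(N / eps)
```

The bound grows only logarithmically in the node count N but linearly in the weight count W and in 1/ε, so the weight budget dominates the required sample size.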
An introduction to computing with neural nets
This paper provides an introduction to the field of artificial neural nets by reviewing six important neural net models that can be used for pattern classification and exploring how some existing classification and clustering algorithms can be performed using simple neuron-like components.
Neural Net and Traditional Classifiers
It is demonstrated that two-layer perceptron classifiers trained with back propagation can form both convex and disjoint decision regions.
On the Representation of Continuous Functions of Several Variables as Superpositions of Continuous Functions of one Variable and Addition
The aim of this paper is to present a brief proof of the following theorem: Theorem. For any integer n ≥ 2 there are continuous real functions ψ_pq(x) on the closed unit interval E^1 = [0, 1] such