Learning capability and storage capacity of two-hidden-layer feedforward networks
@article{Huang2003LearningCA, title={Learning capability and storage capacity of two-hidden-layer feedforward networks}, author={Guang-Bin Huang}, journal={IEEE Transactions on Neural Networks}, year={2003}, volume={14}, number={2}, pages={274-281}}
The problem of the necessary complexity of neural networks is of interest in applications. In this paper, the learning capability and storage capacity of feedforward neural networks are considered. We markedly improve recent results by logically introducing neural-network modularity. This paper rigorously proves, by a constructive method, that two-hidden-layer feedforward networks (TLFNs) with 2√((m+2)N) (≪ N) hidden neurons can learn any N distinct samples (x_i, t_i) with…
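A minimal sketch of the arithmetic behind this bound, assuming (as in the full paper) that m is the number of output neurons and N the number of distinct training samples; the function name and example values below are illustrative only.

```python
import math

def tlfn_hidden_bound(n_samples: int, n_outputs: int) -> int:
    """Upper bound 2*sqrt((m+2)*N) on the number of hidden neurons a TLFN
    needs to learn N distinct samples with m output neurons, rounded up."""
    return math.ceil(2 * math.sqrt((n_outputs + 2) * n_samples))

# Example: 10,000 samples with a single output target
print(tlfn_hidden_bound(10_000, 1))  # 347 hidden neurons, far fewer than N
```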
659 Citations
Simplification of a specific two-hidden-layer feedforward networks
- Computer Science · Fourth International Conference on Information, Communications and Signal Processing, 2003 and the Fourth Pacific Rim Conference on Multimedia. Proceedings of the 2003 Joint
- 2003
A method is presented to simplify the structure of the TLFNs by introducing a new type of quantizers that unite two previous neurons A^(p) and B^(p) into a single neuron.
A Real-Time Learning Algorithm for Two-Hidden-Layer Feedforward Networks
- Computer Science · 2003 4th International Conference on Control and Automation Proceedings
- 2003
An improved constructive method for TLFNs with real-time learning capability is introduced to prove that both the training and generalization errors of the new TLFN can reach arbitrarily small values if sufficiently many distinct training samples are provided.
Accelerated Optimal Topology Search for Two-Hidden-Layer Feedforward Neural Networks
- Computer Science · EANN
- 2016
Two-hidden-layer feedforward neural networks are investigated for the existence of an optimal hidden-node ratio, and the heuristic n_1 = int(0.5 n_h + 1) reduced the complexity of an exhaustive search from quadratic to linear in n_h, with very little penalty.
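A small sketch of how that heuristic could be applied when budgeting nodes across the two hidden layers; assigning the remainder to the second layer is an assumption for illustration, not a detail taken from the paper.

```python
def split_hidden_nodes(n_h: int) -> tuple[int, int]:
    """Apply the heuristic n_1 = int(0.5 * n_h + 1) to a total hidden-node
    budget n_h; the remaining nodes are assumed to go to the second layer."""
    n_1 = int(0.5 * n_h + 1)
    return n_1, n_h - n_1

# With the ratio fixed, a topology search only scans the single total n_h,
# which is the linear (rather than quadratic) search cost noted above.
for n_h in (10, 25, 40):
    print(n_h, split_hidden_nodes(n_h))
```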
Upper bounds on the node numbers of hidden layers in MLPs
- Computer Science · Neural Network World
- 2021
An upper bound on the number of nodes in each hidden layer is given, from an algebraic point of view, for the most general feedforward neural networks, multilayer perceptrons (MLPs).
Pulling back error to the hidden-node parameter technology: Single-hidden-layer feedforward network without output weight
- Computer Science · ArXiv
- 2014
This paper indicates that, in order to let SLFNs work as universal approximators, one may simply calculate the hidden-node parameters only; the output weights are not needed at all, and the proposed architecture can be considered a standard SLFN with the output weights fixed to a unit vector.
On Theoretical Analysis of Single Hidden Layer Feedforward Neural Networks with Relu Activations
- Computer Science · 2019 34th Youth Academic Annual Conference of Chinese Association of Automation (YAC)
- 2019
This note considers an extreme learning machine that adopts a non-smooth activation function, proposing that a ReLU-activated single-hidden-layer feedforward neural network (SLFN) can fit the given training data points with zero error, provided sufficiently many hidden neurons are used.
On the Optimal Node Ratio between Hidden Layers: A Probabilistic Study
- Computer Science
- 2016
The findings were that the heuristic n_1 = 0.5 n_h + 1 has an average probability of at least 0.85 of finding a network with a generalisation error within 0.18% of the best generaliser.
Approach to the synthesis of neural network structure during classification
- Computer Science
- 2020
It is shown that a two-hidden-layer feedforward neural network with d inputs, d neurons in the first hidden layer, 2d+2 neurons in the second hidden layer, and a sigmoidal, infinitely differentiable activation function can solve classification and pattern problems with arbitrary accuracy.
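A minimal NumPy sketch of a network with the layer sizes quoted above (d inputs, d neurons, 2d+2 neurons, one output); the random weights are placeholders for illustration, not the constructive weights from the cited paper.

```python
import numpy as np

def make_two_hidden_layer_net(d: int, seed: int = 0):
    """Forward pass of a d -> d -> (2d + 2) -> 1 sigmoidal network.
    Weights are random placeholders, not the paper's construction."""
    rng = np.random.default_rng(seed)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    W1, b1 = rng.standard_normal((d, d)), rng.standard_normal(d)
    W2, b2 = rng.standard_normal((2 * d + 2, d)), rng.standard_normal(2 * d + 2)
    w3, b3 = rng.standard_normal(2 * d + 2), rng.standard_normal()

    def forward(x):
        h1 = sigmoid(W1 @ x + b1)   # first hidden layer: d neurons
        h2 = sigmoid(W2 @ h1 + b2)  # second hidden layer: 2d + 2 neurons
        return w3 @ h2 + b3         # scalar output
    return forward

net = make_two_hidden_layer_net(d=4)
print(net(np.ones(4)))
```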
An Overview of Sequential Learning Algorithms for Single Hidden Layer Networks: Current Issues & Future Trends
- Computer Science
- 2020
A brief survey of the commonly used sequential-learning algorithms for single-hidden-layer feedforward neural networks is presented, summarizing the different kinds available in the literature to date, how they have developed over the years, and their relative performance.
Universal approximation using incremental constructive feedforward networks with random hidden nodes
- Computer Science · IEEE Trans. Neural Networks
- 2006
This paper proves, by an incremental constructive method, that in order to let SLFNs work as universal approximators, one may simply choose the hidden nodes at random and then only needs to adjust the output weights linking the hidden layer and the output layer.
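A rough sketch of the idea described above, with hidden-node parameters drawn at random and only the output weights fitted; the batch pseudo-inverse solve here stands in for the paper's incremental node-by-node scheme, and all names and sizes are illustrative.

```python
import numpy as np

def fit_random_hidden_slfn(X, T, n_hidden, seed=0):
    """Random hidden nodes, sigmoid activation, output weights solved by
    least squares (pseudo-inverse); a stand-in for incremental growth."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # hidden-layer outputs
    beta = np.linalg.pinv(H) @ T                     # only the output weights are fit
    return W, b, beta

# Toy usage: the training error shrinks as n_hidden grows
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3))
T = np.sin(X[:, 0]) + 0.1 * X[:, 1]
W, b, beta = fit_random_hidden_slfn(X, T, n_hidden=40)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
print(np.max(np.abs(H @ beta - T)))
```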
References
Upper bounds on the number of hidden neurons in feedforward networks with arbitrary bounded nonlinear activation functions
- Computer Science · IEEE Trans. Neural Networks
- 1998
This paper rigorously proves that standard single-hidden layer feedforward networks with at most N hidden neurons and with any bounded nonlinear activation function which has a limit at one infinity can learn N distinct samples with zero error.
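A toy demonstration of the zero-error claim, assuming the hidden parameters are drawn at random (which generically makes the N x N hidden-output matrix invertible) rather than chosen as in the paper's proof.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 20, 2
X = rng.standard_normal((N, d))        # N distinct samples
T = rng.standard_normal(N)             # arbitrary targets

W, b = rng.standard_normal((d, N)), rng.standard_normal(N)
H = np.tanh(X @ W + b)                 # bounded nonlinear activation, N x N
beta = np.linalg.solve(H, T)           # exact output weights
print(np.max(np.abs(H @ beta - T)))    # ~1e-12: zero training error with N hidden neurons
```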
Capabilities of a four-layered feedforward neural network: four layers versus three
- Computer Science · IEEE Trans. Neural Networks
- 1997
A proof is given showing that a three-layered feedforward network with N-1 hidden units can realize any N input-target relations exactly, and a four-layered network is constructed that is found to give any N input-target relations with a negligibly small error using only (N/2)+3 hidden units.
Universal approximation using feedforward networks with non-sigmoid hidden layer activation functions
- Computer Science · International 1989 Joint Conference on Neural Networks
- 1989
Multilayer feedforward networks possess universal approximation capabilities by virtue of the presence of intermediate layers with sufficiently many parallel processors; the properties of the intermediate-layer activation function are not so crucial.
UNIQUENESS OF WEIGHTS FOR NEURAL NETWORKS
- Mathematics
- 1993
This paper assumes that the activation function σ not only satisfies (*), but also the following extra condition, which appeared above in the context of single-hidden-layer nets with no offsets: σ^(k)(0) ≠ 0 for infinitely many integers k.
Sample sizes for sigmoidal neural networks
- Computer Science · COLT '95
- 1995
This paper applies the theory of Probably Approximately Correct (PAC) learning to feedforward neural networks with sigmoidal activation functions. Despite the best known upper bound on the VC…
The Sample Complexity of Pattern Classification with Neural Networks: The Size of the Weights is More Important than the Size of the Network
- Computer Science · IEEE Trans. Inf. Theory
- 1998
Results in this paper show that if a large neural network is used for a pattern classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of the weights rather than the number of weights.
Comments on "Approximation capability in C(Rn) by multilayer feedforward networks and related problems"
- Mathematics · IEEE Trans. Neural Networks
- 1998
The conjecture that the boundedness of the sigmoidal function is a necessary and sufficient condition for the validity of the approximation theorem is not correct; boundedness and unequal limits at infinities are sufficient conditions on the activation functions, but not necessary, in C(R^n).
Approximation capability in C(R̄^n) by multilayer feedforward networks and related problems
- Mathematics, Computer Science · IEEE Trans. Neural Networks
- 1995
It is found that the boundedness condition on the sigmoidal function plays an essential role in the approximation, in contrast to the continuity or monotonicity conditions.
For Valid Generalization the Size of the Weights is More Important than the Size of the Network
- Computer Science · NIPS
- 1996
This paper shows that if a large neural network is used for a pattern classification problem, and the learning algorithm finds a network with small weights that has small squared error on the…