Theory of the backpropagation neural network

@article{HechtNielsen1989TheoryOT,
  title={Theory of the backpropagation neural network},
  author={Robert Hecht-Nielsen},
  journal={International 1989 Joint Conference on Neural Networks},
  year={1989},
  pages={593-605 vol.1}
}
  • R. Hecht-Nielsen
  • Published 1989
  • Mathematics, Computer Science
  • International 1989 Joint Conference on Neural Networks
The author presents a survey of the basic theory of the backpropagation neural network architecture covering architectural design, performance measurement, function approximation capability, and learning. The survey includes previously known material, as well as some new results, namely, a formulation of the backpropagation neural network architecture to make it a valid neural network (past formulations violated the locality of processing restriction) and a proof that the backpropagation mean…
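The abstract's two technical threads, gradient-descent learning and function approximation, are easy to make concrete. Below is a minimal, hedged Python sketch of the standard backpropagation algorithm the paper surveys: a three-layer (one hidden layer) sigmoid network trained by gradient descent on mean squared error, here on the toy XOR task. The task, layer sizes, and learning rate are illustrative assumptions, and this plain vectorized version makes no attempt at the locality-of-processing formulation that is the paper's own contribution.

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)    # input -> hidden
    W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)    # hidden -> output
    eta = 0.5                                         # learning rate
    for _ in range(20000):
        H = sigmoid(X @ W1 + b1)                      # forward pass
        Y = sigmoid(H @ W2 + b2)
        dY = (Y - T) * Y * (1 - Y)                    # backward pass: chain rule on MSE
        dH = (dY @ W2.T) * H * (1 - H)
        W2 -= eta * H.T @ dY; b2 -= eta * dY.sum(0)
        W1 -= eta * X.T @ dH; b1 -= eta * dH.sum(0)
    print(Y.round(2).ravel())                         # typically converges to ~[0, 1, 1, 0]

Citations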
A more biologically plausible learning rule than backpropagation applied to a network model of cortical area 7a.
TLDR
Two neural networks with an architecture similar to Zipser and Andersen's model are developed and trained to perform the same task using a more biologically plausible learning procedure than backpropagation, corroborating the validity of this network's computational algorithm as a plausible model of how area 7a may perform coordinate transformations.
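The learning procedure in this line of work is reinforcement-style rather than gradient-backpropagating. As a hedged illustration of the general idea only (a reward-modulated weight-perturbation update, not the paper's specific rule), the sketch below changes each weight using just a scalar reward and a locally available noise term, with no error signal propagated backward. The task, constants, and loss are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(64, 3))
    W_true = rng.normal(size=(3, 2))
    Y = X @ W_true                           # hypothetical task: learn this linear map

    def loss(W):
        return np.mean((X @ W - Y) ** 2)

    W = np.zeros((3, 2))
    sigma, eta = 0.05, 0.1
    for _ in range(3000):
        noise = rng.normal(0.0, sigma, W.shape)
        r = loss(W) - loss(W + noise)        # scalar reward: did the perturbation help?
        W += eta * (r / sigma ** 2) * noise  # correlate reward with the local perturbation
    print(round(loss(W), 4))                 # approaches 0 with no backward error signal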
Feed Forward Neural Network Entities
TLDR
Although the entities concept is still developing, some preliminary results indicate superiority over the single FFNN model when applied to problems involving high-dimensional data (e.g. financial or meteorological data analysis).
Robust design of multilayer feedforward neural networks: an experimental approach
TLDR
This article develops a systematic, experimental strategy which emphasizes simultaneous optimization of BPN parameters under various noise conditions and shows that fine-tuning the BPN output is effective in improving the signal-to-noise ratio.
Rates of Approximation in a Feedforward Network Depend on the Type of Computational Unit
The approximation capabilities of feedforward neural networks with a single hidden layer and with various activation functions have been widely studied ([19], [8], [1], [2], [13]). Mhaskar and …
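A toy version of this observation can be reproduced numerically: fix a budget of n hidden units of a given type, fit only the output weights by least squares, and watch how fast the error decays as n grows. In the hedged sketch below (the random unit placement and target function are assumptions, not the paper's construction), sigmoidal and localized Gaussian units show visibly different error-decay behavior on the same target.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-1, 1, 400)[:, None]
    f = np.sin(3 * np.pi * x).ravel()                         # illustrative target

    def rmse(Phi):
        # least-squares output weights on top of a fixed hidden layer
        c, *_ = np.linalg.lstsq(Phi, f, rcond=None)
        return np.sqrt(np.mean((Phi @ c - f) ** 2))

    for n in (8, 16, 32, 64):
        centers = rng.uniform(-1, 1, n)
        sigmoidal = 1 / (1 + np.exp(-(x - centers) / 0.05))   # sigmoidal units
        gaussian = np.exp(-((x - centers) / 0.2) ** 2)        # localized units
        print(n, round(rmse(sigmoidal), 4), round(rmse(gaussian), 4))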
A neural network learning algorithm tailored for VLSI implementation
This paper describes concepts that optimize an on-chip learning algorithm for implementation of VLSI neural networks with conventional technologies. The network considered comprises an analog …
Approximation theory of the MLP model in neural networks
In this survey we discuss various approximation-theoretic problems that arise in the multilayer feedforward perceptron (MLP) model in neural networks. The MLP model is one of the more popular and …
A Novel Design Method for Multilayer Feedforward Neural Networks
TLDR
It is shown in several examples that the proposed model and design method learn the training patterns more rapidly than conventional multilayer feedforward neural networks with random initialization.
Neural subnet design by direct polynomial mapping
TLDR
A method for the analysis and synthesis of single-input, single-output neural subnetworks is described; the mapped subnets avoid the local minima in which backpropagation-trained subnets become trapped, and the mapping approach is much faster.
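The flavor of direct mapping, as opposed to iterative training, can be sketched in a few lines: solve for a polynomial approximation of the desired single-input, single-output transfer function in closed form, so there is no gradient descent and hence no local minimum to get trapped in. This is a hedged stand-in for the paper's actual construction, which maps the polynomial into subnet weights; the target function and degree below are assumptions.

    import numpy as np

    x = np.linspace(-1, 1, 200)
    target = np.tanh(2 * x)                                   # hypothetical SISO mapping
    # "direct mapping": a closed-form least-squares fit, no iterative training
    coeffs = np.polynomial.polynomial.polyfit(x, target, deg=7)
    approx = np.polynomial.polynomial.polyval(x, coeffs)
    print(round(float(np.max(np.abs(approx - target))), 6))   # small max error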
Feedforward neural networks for the identification of dynamic processes
This paper presents an introduction to the use of neural network computational algorithms for the identification of dynamic systems. Simulated linear and non-linear systems and real plant …
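The basic identification setup can be sketched as follows: excite a simulated plant, then fit a model that predicts the next output from lagged inputs and outputs. In this hedged sketch a linear least-squares model stands in for the paper's neural network, and the first-order plant is an assumption; the point is only the one-step-ahead regression structure.

    import numpy as np

    rng = np.random.default_rng(0)
    # simulated first-order linear plant: y[k+1] = 0.8*y[k] + 0.5*u[k]
    u = rng.uniform(-1, 1, 500)
    y = np.zeros(501)
    for k in range(500):
        y[k + 1] = 0.8 * y[k] + 0.5 * u[k]

    # identification: regress y[k+1] on the lagged pair (y[k], u[k])
    Phi = np.column_stack([y[:-1], u])
    theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
    print(theta.round(3))                 # ~[0.8, 0.5]: the plant dynamics are recovered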
Dynamic backpropagation algorithm for neural network controlled resonator-bank architecture
TLDR
Simulation results show that the neural network controlled resonator-bank architecture is computationally feasible and can be used as a general building block in a wide range of identification and control problems.

References

Showing 1-10 of 56 references
Backpropagation: past and future
  • P. Werbos
  • Computer Science
  • IEEE 1988 International Conference on Neural Networks
  • 1988
TLDR
The author proposes development of a general theory of intelligence in which backpropagation and comparisons to the brain play a central role, and points to a series of intermediate steps and applications leading up to the construction of such generalized systems.
Learning representations by back-propagating errors
TLDR
Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector; as a result, internal "hidden" units come to represent important features of the task domain.
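In the notation of that paper, the adjustment this summary describes is gradient descent on a sum-of-squares error measure; a compact LaTeX statement (with f the unit nonlinearity, eta the learning rate, x_j the total input to unit j, and y_j its output):

    E = \frac{1}{2}\sum_{c}\sum_{j}\bigl(y_{j,c}-d_{j,c}\bigr)^{2}, \qquad
    \Delta w_{ji} = -\eta\,\frac{\partial E}{\partial w_{ji}} = \eta\,\delta_{j}\,y_{i},

    \delta_{j} = f'(x_{j})\,(d_{j}-y_{j}) \ \text{at an output unit}, \qquad
    \delta_{j} = f'(x_{j})\sum_{k}\delta_{k}\,w_{kj} \ \text{at a hidden unit}.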
Neocognitron: A hierarchical neural network capable of visual pattern recognition
TLDR
The operation of tolerating positional error a little at a time at each stage, rather than all in one step, plays an important role in endowing the network with an ability to recognize even distorted patterns.
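That stage-by-stage tolerance can be mimicked in a toy way: each small local pooling stage absorbs roughly one position of shift, so a stack of stages absorbs a larger one. A minimal hedged sketch follows, with 1-D max pooling standing in for Fukushima's C-cell layers; the pattern and shift are assumptions.

    import numpy as np

    def pool(v):
        # local max over a 3-wide window, stride 2: absorbs ~1 position of shift
        return np.array([v[i:i + 3].max() for i in range(0, len(v) - 2, 2)])

    a = np.zeros(32); a[10:14] = 1.0      # a 1-D "pattern"
    b = np.roll(a, 2)                     # the same pattern, shifted by 2 positions
    print(0, int((a != b).sum()))         # disagreement before any pooling
    for stage in (1, 2, 3):
        a, b = pool(a), pool(b)
        print(stage, int((a != b).sum())) # disagreement shrinks stage by stage: 4, 2, 1, 0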
Dynamic Node Creation in Backpropagation Networks
TLDR
A new method called Dynamic Node Creation (DNC) automatically grows BP networks until the target problem is solved; it yielded a solution for every problem tried.
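The control loop of such a method is simple to sketch: train at the current size, and if the error criterion is not met, add a hidden node and continue. In the hedged Python sketch below, a random-feature least-squares fit stands in for full BP training, and a plain error threshold stands in for the paper's growth criterion; the target, sizes, and threshold are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-1, 1, 64)[:, None]
    t = np.sin(np.pi * x).ravel()

    def fit_mse(n_hidden):
        # stand-in trainer: random sigmoid hidden layer, least-squares output
        W = rng.normal(0, 2, (1, n_hidden))
        b = rng.normal(0, 1, n_hidden)
        H = 1 / (1 + np.exp(-(x @ W + b)))
        c, *_ = np.linalg.lstsq(H, t, rcond=None)
        return np.mean((H @ c - t) ** 2)

    # DNC-style loop: grow the hidden layer until the error criterion is met
    n = 1
    while fit_mse(n) > 1e-4:
        n += 1                    # create a node and retrain
    print("nodes needed:", n)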
Neurons with graded response have collective computational properties like those of two-state neurons.
  • J. Hopfield
  • Computer Science, Mathematics
  • Proceedings of the National Academy of Sciences of the United States of America
  • 1984
TLDR
A model of a large network of "neurons" with a graded response (or sigmoid input-output relation) is studied; its collective properties are in very close correspondence with those of the earlier stochastic model based on McCulloch-Pitts neurons.
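The graded-response dynamics are easy to state and simulate: du/dt = -u/tau + W g(u), with symmetric weights and a sigmoid g; for symmetric W the network relaxes to a stable state, mirroring the two-state model's behavior. A minimal hedged sketch (the random symmetric weights and all constants are assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(8, 8))
    W = (W + W.T) / 2                 # symmetric weights, as in the analysis
    np.fill_diagonal(W, 0.0)

    def g(u):
        return np.tanh(u)             # graded (sigmoid) input-output relation

    u = rng.normal(size=8)
    dt, tau = 0.05, 1.0
    for _ in range(2000):
        du = -u / tau + W @ g(u)      # du/dt = -u/tau + W g(u)
        u += dt * du
    print(np.linalg.norm(du))         # ~0: the network has settled into a stable state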
Neural Networks and Natural Intelligence
From the Publisher: Stephen Grossberg and his colleagues at Boston University's Center for Adaptive Systems are producing some of the most exciting research in the neural network approach to making …
There exists a neural network that does not make avoidable mistakes
  • A. Gallant, H. White
  • Mathematics, Computer Science
  • IEEE 1988 International Conference on Neural Networks
  • 1988
The authors show that a multiple-input, single-output, single-hidden-layer feedforward network with (known) hardwired connections from input to hidden layer, monotone squashing at the hidden layer …
A massively parallel architecture for a self-organizing neural pattern recognition machine
TLDR
A neural network architecture for the learning of recognition categories is derived which circumvents the noise, saturation, capacity, orthogonality, and linear predictability constraints that limit the codes which can be stably learned by alternative recognition models.
Neocognitron: A new algorithm for pattern recognition tolerant of deformations and shifts in position
TLDR
The neocognitron recognizes stimulus patterns correctly without being affected by shifts in position or even by considerable distortions in shape of the stimulus patterns.
Learning of word stress in a sub-optimal second order back-propagation neural network
TLDR
The authors show an efficient and easy neural network solution to a problem that cannot easily be solved with rules: the localization of primary word stress in text-to-speech synthesis of Italian.