Theory of the backpropagation neural network

@article{HechtNielsen1989TheoryOT,
  title={Theory of the backpropagation neural network},
  author={Robert Hecht-Nielsen},
  journal={International 1989 Joint Conference on Neural Networks},
  year={1989},
  pages={593-605 vol.1}
}
  • R. Hecht-Nielsen
  • Published 1989
  • Computer Science
  • International 1989 Joint Conference on Neural Networks

A more biologically plausible learning rule than backpropagation applied to a network model of cortical area 7a.

Two neural networks with architectures similar to Zipser and Andersen's model are developed and trained to perform the same task using a more biologically plausible learning procedure than backpropagation, corroborating the network's computational algorithm as a plausible model of how area 7a may perform coordinate transformations.

Design of Neural Network Filters

The objective is to clarify a number of phases involved in the design of neural network filter architectures in connection with “black box” modeling tasks such as system identification, inverse modeling and time-series prediction.

A new algorithm for training multilayer feedforward neural networks

The authors present a new learning and synthesis algorithm for training multilayer feedforward neural networks that can classify both linearly separable and linearly nonseparable families, whereas the backpropagation algorithm sometimes fails.

A robust backpropagation learning algorithm for function approximation

A robust BP learning algorithm is derived that is resistant to noise effects and capable of rejecting gross errors during the approximation process; its rate of convergence is improved because the influence of incorrect samples is gracefully suppressed.
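
The idea of suppressing the influence of incorrect samples can be sketched with a bounded, Huber-style error derivative. This is a minimal illustration of the general technique, not the paper's exact loss; the function names and the cutoff c below are assumptions.

import numpy as np

def squared_error_influence(residual):
    # standard BP output delta: grows without bound with the residual
    return residual

def robust_influence(residual, c=1.0):
    # robust variant: each sample's contribution is capped at +/- c,
    # so gross errors are gracefully suppressed instead of dominating the update
    return np.clip(residual, -c, c)

residuals = np.array([0.1, -0.3, 8.0])     # last sample is a gross outlier
print(squared_error_influence(residuals))  # [ 0.1 -0.3  8. ]
print(robust_influence(residuals))         # [ 0.1 -0.3  1. ]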

Feed Forward Neural Network Entities

Although the entities concept is still developing, some preliminary results indicate superiority over the single FFNN model when applied to problems involving high-dimensional data (e.g. financial/meteorological data analysis).

Explicit solutions of the optimum weights of layered neural networks

  • Xiao-Hu Yu
  • Computer Science
    [Proceedings 1992] IJCNN International Joint Conference on Neural Networks
  • 1992
It is shown that, if the hidden layer units take a sinusoidal activation function, the optimum weights of the three-layer feedforward neural network can be explicitly solved by relating the layered …
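
The flavor of such an explicit solution can be sketched as follows: with fixed sinusoidal hidden units, the hidden layer acts as a truncated Fourier basis, so the optimum output weights reduce to a linear least-squares problem with a closed-form (pseudoinverse) solution. This is only an illustration of that idea, not the paper's construction; the scalar input, integer frequencies, and phase choices are assumptions.

import numpy as np

def sinusoidal_features(x, n_freq=8):
    # hidden layer: a bias unit plus sin(k*x) and cos(k*x) units for k = 1..n_freq
    k = np.arange(1, n_freq + 1)
    return np.hstack([np.ones((x.size, 1)),
                      np.sin(np.outer(x, k)),
                      np.cos(np.outer(x, k))])

def solve_output_weights(x, y, n_freq=8):
    # optimum output weights solved explicitly by linear least squares
    H = sinusoidal_features(x, n_freq)
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    return w

# usage: approximate a target function on [0, 2*pi]
x = np.linspace(0, 2 * np.pi, 200)
w = solve_output_weights(x, np.abs(np.sin(x)))   # target: |sin(x)|
y_hat = sinusoidal_features(x) @ w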

Robust design of multilayer feedforward neural networks: an experimental approach

Rates of Approximation in a Feedforward Network Depend on the Type of Computational Unit

De Vore et al. proved the following result: if one continuously approximates a class of functions of d variables with bounded partial derivatives on a compact set, then to achieve an approximation order of O(1/n) it is necessary to use at least O(n^d) neurons, regardless of the activation function.
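
Stated slightly more formally (a hedged restatement of the lower bound quoted above; the norm and the constant c are illustrative assumptions):

\[
  \sup_{f \in \mathcal{F}_d} \, \lVert f - A_m(f) \rVert \;\ge\; c \, m^{-1/d}
\]

for every approximation scheme $A_m$ with $m$ parameters that depends continuously on $f$, where $\mathcal{F}_d$ is a class of functions of $d$ variables with bounded partial derivatives on a compact set. Driving the error down to $O(1/n)$ therefore forces $m = \Omega(n^d)$ neurons, regardless of the activation function.
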
...

References

Showing 1-10 of 76 references

Backpropagation: past and future

  • P. Werbos
  • Mathematics
    IEEE 1988 International Conference on Neural Networks
  • 1988
The author proposes the development of a general theory of intelligence in which backpropagation and comparisons to the brain play a central role, and points to a series of intermediate steps and applications leading up to the construction of such generalized systems.

On the use of backpropagation in associative reinforcement learning

  • Ronald J. Williams
  • Computer Science
    IEEE 1988 International Conference on Neural Networks
  • 1988
A description is given of several ways that backpropagation can be useful in training networks to perform associative reinforcement learning tasks, and it is observed that such an approach even permits a seamless blend of associative reinforcement learning and supervised learning within the same network.

Recurrent Backpropagation and the Dynamical Approach to Adaptive Neural Computation

  • F. Pineda
  • Computer Science
    Neural Computation
  • 1989
It is now possible to efficiently compute the error gradients for networks that have temporal dynamics, which opens applications to a host of problems in systems identification and control.

Fast learning in artificial neural systems: multilayer perceptron training using optimal estimation

  • J. Shepanski
  • Computer Science
    IEEE 1988 International Conference on Neural Networks
  • 1988
Initial results indicate that optimal estimation training (OET), a supervised learning technique, is faster and more accurate than backward error propagation; the information content loaded into a set of network interconnection weights is also well characterized.

Dynamic node creation in backpropagation networks

  • T. Ash
  • Computer Science
    International 1989 Joint Conference on Neural Networks
  • 1989
A novel method called dynamic node creation (DNC) is presented that addresses the issues of training large networks and of testing networks with different numbers of hidden-layer units; it yielded a solution for every problem tried.

Learning representations by back-propagating errors

Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector; as a result, the hidden units come to represent important features of the task domain.
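
A minimal sketch of this update rule, assuming a one-hidden-layer network with sigmoid units and a squared-error measure (the layer shapes and learning rate are illustrative assumptions, not the paper's specific setup):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(x, target, W1, W2, lr=0.1):
    # forward pass
    h = sigmoid(W1 @ x)                       # hidden activations
    y = sigmoid(W2 @ h)                       # actual output vector
    # backward pass: propagate the output error toward the input
    delta_out = (y - target) * y * (1.0 - y)  # derivative of the error measure at the output layer
    delta_hid = (W2.T @ delta_out) * h * (1.0 - h)
    # adjust connection weights to reduce the difference between actual and desired output
    W2 -= lr * np.outer(delta_out, h)
    W1 -= lr * np.outer(delta_hid, x)
    return W1, W2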

Capabilities of three-layered perceptrons

A theorem is proved to the effect that three-layered perceptrons with an infinite number of computing units can represent arbitrary mappings if the desired mapping and the input-output characteristics
...