Learning representations by back-propagating errors

@article{Rumelhart1986LearningRB,
  title={Learning representations by back-propagating errors},
  author={D. Rumelhart and Geoffrey E. Hinton and R. J. Williams},
  journal={Nature},
  year={1986},
  volume={323},
  pages={533--536}
}
  • D. Rumelhart, Geoffrey E. Hinton, R. J. Williams
  • Published 1986
  • Computer Science
  • Nature
    • We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured…
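
    The abstract outlines the core of the procedure: repeatedly adjust connection weights by gradient descent on a measure of the difference between the actual and desired output vectors, propagating error derivatives backward through the hidden units. Below is a minimal illustrative sketch; the single hidden layer of sigmoid units, the squared-error measure, the XOR task, the learning rate, and the network sizes are assumptions chosen for illustration, not details taken from the paper.

```python
# Minimal back-propagation sketch (illustrative assumptions throughout:
# the sigmoid units, squared-error measure, XOR task, learning rate, and
# layer sizes are not taken from the paper itself).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy task: XOR inputs and desired output vectors.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])

# Weights and biases: 2 inputs -> 4 hidden units -> 1 output unit.
W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)

lr = 1.0
for _ in range(10_000):
    # Forward pass: hidden and output activations.
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)

    # Error measure E = 0.5 * sum((y - T)^2); its derivative w.r.t. y is (y - T).
    delta_out = (y - T) * y * (1 - y)             # error signal at the output units
    delta_hid = (delta_out @ W2.T) * h * (1 - h)  # error propagated back to hidden units

    # Repeatedly adjust the weights to reduce E (gradient descent).
    W2 -= lr * (h.T @ delta_out); b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * (X.T @ delta_hid); b1 -= lr * delta_hid.sum(axis=0)

# Outputs should approach the targets [0, 1, 1, 0].
print(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).round(2))
```

    After training, the printed outputs should be close to the target vector [0, 1, 1, 0]; the hidden units have learned an internal representation of XOR, a mapping a single-layer perceptron cannot express. A different random seed may be needed if the run settles in a poor local minimum.
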
    16,863 Citations

    • Learning sets of filters using back-propagation (120 citations)
    • J. Kruschke. Improving generalization in backpropagation networks with distributed bottlenecks. International Joint Conference on Neural Networks, 1989. (23 citations)
    • Connectionist Learning Procedures, Chapter 20 (133 citations)
    • Connectionist Learning Procedures (1,445 citations)
    • Crossprop: Learning Representations by Stochastic Meta-Gradient Descent in Neural Networks (2 citations)
    • M. Buscema. Back propagation neural networks. Substance Use & Misuse, 1998. (98 citations)
    • A structural learning by adding independent noises to hidden units (8 citations)
    • Improved generalization of neural classifiers with enforced internal representation (22 citations)
