Learning representations by back-propagating errors
@article{Rumelhart1986LearningRB,
  title   = {Learning representations by back-propagating errors},
  author  = {David E. Rumelhart and Geoffrey E. Hinton and Ronald J. Williams},
  journal = {Nature},
  year    = {1986},
  volume  = {323},
  pages   = {533-536}
}
We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured…
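The procedure the abstract describes can be sketched in a few lines of code. Below is a minimal NumPy illustration of back-propagation for a network with one layer of hidden units, trained by gradient descent to minimize the squared difference between the actual and desired output vectors. The XOR task, layer sizes, sigmoid units, learning rate, and random seed are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: XOR, which a network without hidden units cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One layer of hidden units between input and output.
W1 = rng.normal(scale=0.5, size=(2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)

lr = 0.5                                   # learning rate (assumed)
for step in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)               # hidden activations
    y = sigmoid(h @ W2 + b2)               # actual output vector

    # Error measure: half the squared difference between actual
    # and desired outputs, summed over training cases.
    E = 0.5 * np.sum((y - Y) ** 2)

    # Backward pass: propagate error derivatives through the net.
    dy = (y - Y) * y * (1 - y)             # delta at output units
    dh = (dy @ W2.T) * h * (1 - h)         # delta at hidden units

    # Gradient-descent weight adjustments.
    W2 -= lr * h.T @ dy
    b2 -= lr * dy.sum(axis=0)
    W1 -= lr * X.T @ dh
    b1 -= lr * dh.sum(axis=0)

print(np.round(y, 2))  # should approach the desired outputs [0, 1, 1, 0]
```

After training, the hidden activations `h` play the role of the internal representations the abstract mentions: they encode features of the input patterns (here, combinations of the two XOR inputs) that the output layer could not compute from the raw inputs alone.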
16,863 Citations
- Distributed bottlenecks for improved generalization in back-propagation networks. Computer Science, 1989. 11 citations.
- Improving generalization in backpropagation networks with distributed bottlenecks. Computer Science. International Joint Conference on Neural Networks, 1989. 23 citations.
- Crossprop: Learning Representations by Stochastic Meta-Gradient Descent in Neural Networks. Computer Science. ECML/PKDD, 2017. 2 citations.
- A structural learning by adding independent noises to hidden units. Computer Science. Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94), 1994. 8 citations.
- Improved generalization of neural classifiers with enforced internal representation. Mathematics, Computer Science. Neurocomputing, 2007. 22 citations.
References
- Parallel distributed processing: explorations in the microstructure of cognition, vol. 1: foundations. Computer Science, 1986. 15,314 citations.
- Principles of Neurodynamics. Spartan, 1961.