
Backpropagation

Known as: Error back-propagation, Backpropogation, Back prop 
Backpropagation, an abbreviation for "backward propagation of errors", is a common method of training artificial neural networks used in conjunction… 
Wikipedia
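To make the definition concrete, here is a minimal, illustrative sketch of backward propagation of errors for a tiny one-hidden-unit network (hypothetical toy code, not taken from the Wikipedia article or any paper listed below): a forward pass computes the loss, and the backward pass applies the chain rule from the output back toward the input to obtain the gradients used for training.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward_backward(x, t, w1, w2):
    """Forward pass h = sigmoid(w1*x), y = w2*h with squared-error loss,
    then backward pass via the chain rule. Returns (loss, dL/dw1, dL/dw2)."""
    a = w1 * x                 # hidden pre-activation
    h = sigmoid(a)             # hidden activation
    y = w2 * h                 # linear output
    loss = 0.5 * (y - t) ** 2
    # Backward pass: propagate the error from output to input.
    dy = y - t                 # dL/dy
    dw2 = dy * h               # dL/dw2
    dh = dy * w2               # dL/dh
    da = dh * h * (1.0 - h)    # sigmoid'(a) = h * (1 - h)
    dw1 = da * x               # dL/dw1
    return loss, dw1, dw2
```

The gradients returned this way can be checked against finite differences of the loss, which is the standard sanity test for a backpropagation implementation.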

Papers overview

Semantic Scholar uses AI to extract papers important to this topic.
Highly Cited
2016
Many tasks in AI require the collaboration of multiple agents. Typically, the communication protocol between agents is manually… 
Highly Cited
2015
Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain… 
Highly Cited
2014
We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed… 
Highly Cited
2000
The conventional wisdom is that backprop nets with excess hidden units generalize poorly. We show that nets with excess capacity… 
Highly Cited
1995
Contents: D.E. Rumelhart, R. Durbin, R. Golden, Y. Chauvin, Backpropagation: The Basic Theory. A. Waibel, T. Hanazawa, G. Hinton… 
Highly Cited
1993
A learning algorithm for multilayer feedforward networks, RPROP (resilient propagation), is proposed. To overcome the inherent… 
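The core idea of RPROP is sign-based: each weight keeps its own step size, which grows while the gradient sign is stable and shrinks when it flips, and the weight moves against the gradient's sign rather than by its magnitude. The sketch below illustrates that per-weight rule for a single parameter, using the commonly cited default constants; it is an assumption-laden illustration of the simplified (no-backtracking) variant, not the paper's exact algorithm.

```python
def rprop_step(grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_max=50.0, step_min=1e-6):
    """One sign-based RPROP update for a single weight (illustrative sketch).

    Returns (delta, new_step): the weight change and the adapted step size.
    Constants are the defaults commonly quoted for RPROP, taken as assumptions.
    """
    # Grow the step while the gradient sign is stable, shrink on a sign flip.
    if grad * prev_grad > 0:
        step = min(step * eta_plus, step_max)
    elif grad * prev_grad < 0:
        step = max(step * eta_minus, step_min)
    # Move against the sign of the gradient by the adapted step size.
    if grad > 0:
        delta = -step
    elif grad < 0:
        delta = step
    else:
        delta = 0.0
    return delta, step
```

Because only the sign of the gradient is used, the update is insensitive to the vanishing gradient magnitudes that plague plain backpropagation in deep or saturated networks.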
Highly Cited
1992
  • D. Mackay
  • Neural Computation
  • 1992
  • Corpus ID: 16543854
A quantitative and practical Bayesian framework is described for learning of mappings in feedforward networks. The framework… 
Highly Cited
1992
  • M. Gori, A. Tesi
  • IEEE Trans. Pattern Anal. Mach. Intell.
  • 1992
  • Corpus ID: 8098333
The authors propose a theoretical framework for backpropagation (BP) in order to identify some of its limitations as a general… 
Review
1990
Fundamental developments in feedforward artificial neural networks from the past thirty years are reviewed. The history… 
Highly Cited
1989
The ability of learning networks to generalize can be greatly enhanced by providing constraints from the task domain. This paper…