Biologically Plausible Error-Driven Learning Using Local Activation Differences: The Generalized Recirculation Algorithm

@article{OReilly1996BiologicallyPE,
  title={Biologically Plausible Error-Driven Learning Using Local Activation Differences: The Generalized Recirculation Algorithm},
  author={Randall C. O’Reilly},
  journal={Neural Computation},
  year={1996},
  volume={8},
  pages={895-938}
}
  • R. O’Reilly
  • Published 1 July 1996
  • Computer Science
  • Neural Computation
The error backpropagation learning algorithm (BP) is generally considered biologically implausible because it does not use locally available, activation-based variables. A version of BP that can be computed locally using bidirectional activation recirculation (Hinton and McClelland 1988) instead of backpropagated error derivatives is more biologically plausible. This paper presents a generalized version of the recirculation algorithm (GeneRec), which overcomes several limitations of the earlier… 
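For orientation, the core of the GeneRec update can be written directly in terms of the two activation phases the abstract refers to: a minus (expectation) phase in which the network settles given the input alone, and a plus (outcome) phase in which the target is also clamped. The sketch below is illustrative only (the numpy framing and variable names are assumptions, not taken from the paper); it shows the basic, asymmetric form of the rule, in which each weight change uses only locally available activations of the sending and receiving units.

  import numpy as np

  def generec_update(w, s_minus, r_minus, r_plus, lr=0.1):
      # One GeneRec-style update for a sender -> receiver projection (sketch).
      # w       : (n_send, n_recv) weight matrix
      # s_minus : sender activations in the minus (expectation) phase
      # r_minus : receiver activations in the minus phase
      # r_plus  : receiver activations in the plus (outcome) phase
      # The update uses only local, activation-based variables:
      #   dw_ij = lr * s_i^- * (r_j^+ - r_j^-)
      return w + lr * np.outer(s_minus, r_plus - r_minus)

The full paper also develops symmetry-preserving and midpoint variants of this rule, which relate GeneRec to contrastive Hebbian learning and the deterministic Boltzmann machine.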

Generalization in Interactive Networks: The Benefits of Inhibitory Competition and Hebbian Learning

TLDR
Simulations using the Leabra algorithm show that cognitive neuroscience models that incorporate the core mechanistic principles of interactivity, inhibitory competition, and error-driven and Hebbian learning satisfy a wider range of biological, psychological, and computational constraints than models employing a subset of these principles.

Contrastive Hebbian Feedforward Learning for Neural Networks

  • N. Kermiche
  • Computer Science
    IEEE Transactions on Neural Networks and Learning Systems
  • 2020
TLDR
CHL is a general learning algorithm that can be used to steer feedforward networks toward desirable outcomes and away from undesirable outcomes, without any need for the specialized feedback circuit of BP or the symmetric connections used by Boltzmann machines.

Can the Brain Do Backpropagation? - Exact Implementation of Backpropagation in Predictive Coding Networks

TLDR
A BL model is proposed that produces exactly the same updates of the neural weights as BP while employing local plasticity, i.e., all neurons perform only local computations simultaneously; this model is then modified into an alternative BL model that works fully autonomously.

Dendritic cortical microcircuits approximate the backpropagation algorithm

TLDR
A novel view of learning in dendritic cortical circuits is introduced, addressing how the brain may solve the long-standing synaptic credit-assignment problem, in which error-driven synaptic plasticity adapts the network towards a globally desired output.

An Approximation of the Error Backpropagation Algorithm in a Predictive Coding Network with Local Hebbian Synaptic Plasticity

TLDR
It is shown that a network developed in the predictive coding framework can efficiently perform supervised learning fully autonomously, employing only simple local Hebbian plasticity.

An Alternative to Backpropagation in Deep Reinforcement Learning

TLDR
An algorithm called MAP propagation is proposed that significantly reduces the variance of gradient estimates while retaining the locality of the learning rule, and that can solve common reinforcement learning tasks at a speed similar to that of backpropagation when applied to an actor-critic network.

Generalization of Equilibrium Propagation to Vector Field Dynamics

TLDR
This work presents a simple two-phase learning procedure for fixed point recurrent networks that generalizes Equilibrium Propagation to vector field dynamics, relaxing the requirement of an energy function.

Reverse Differentiation via Predictive Coding

TLDR
This work generalizes PC and Z-IL by defining them directly on computational graphs, and shows that they can perform exact reverse differentiation, resulting in the first PC algorithm that is equivalent to BP in how it updates parameters on any neural network.

GAIT-prop: A biologically plausible learning rule derived from backpropagation of error

TLDR
This work derives an exact correspondence between backpropagation and a modified form of target propagation (GAIT-prop), in which the target is a small perturbation of the forward pass, and which gives identical updates when synaptic weight matrices are orthogonal.

Bidirectional Backpropagation: Towards Biologically Plausible Error Signal Transmission in Neural Networks

TLDR
This work proposes a biologically plausible neural-architecture paradigm, based on related neuroscience literature and on asymmetric BP-like methods with trainable feedforward and feedback weights; results show that these models outperform other asymmetric BP-like methods on the MNIST and CIFAR-10 datasets.
...

References

SHOWING 1-10 OF 79 REFERENCES

Deterministic Boltzmann Learning in Networks with Asymmetric Connectivity

Deterministic Boltzmann Learning Performs Steepest Descent in Weight-Space

TLDR
By using the appropriate interpretation for the way in which a DBM represents the probability of an output vector given an input vector, it is shown that the DBM performs steepest descent in the same function as the original SBM, except at rare discontinuities.

A more biologically plausible learning rule for neural networks.

TLDR
A more biologically plausible learning rule based on reinforcement learning is described and applied to the problem of how area 7a in the posterior parietal cortex of monkeys might represent visual space in head-centered coordinates, showing that a neural network does not require backpropagation to acquire biologically interesting properties.

Generalization of Back propagation to Recurrent and Higher Order Neural Networks

TLDR
A general method for deriving backpropagation algorithms for recurrent and higher-order networks is presented and applied to a constrained dynamical system for training a content-addressable memory.

Local Synaptic Learning Rules Suffice to Maximize Mutual Information in a Linear Network

TLDR
A local synaptic learning rule is described that performs stochastic gradient ascent in mutual information, for the case in which the input-output mapping is linear and the input signal and noise are multivariate Gaussian.

Contrastive Hebbian Learning in the Continuous Hopfield Model

Learning Representations by Recirculation

TLDR
Simulations in simple networks show that the learning procedure usually converges rapidly on a good set of codes, and analysis shows that in certain restricted cases it performs gradient descent in the squared reconstruction error.

Neurons with graded response have collective computational properties like those of two-state neurons.

  • J. Hopfield
  • Biology
    Proceedings of the National Academy of Sciences of the United States of America
  • 1984
TLDR
A model for a large network of "neurons" with a graded response (or sigmoid input-output relation) is studied, and its collective properties are shown to be in very close correspondence with those of the earlier stochastic model based on McCulloch-Pitts neurons.

Mean Field Theory Neural Networks for Feature Recognition, Content Addressable Memory and Optimization

TLDR
Using the mean field theory technique in the context of the Boltzmann machine gives rise to a fast deterministic learning algorithm with a performance comparable with that of the backpropagation algorithm in feature recognition applications.

Dynamics and architecture for neural computation

...