Corpus ID: 235436025

Credit Assignment in Neural Networks through Deep Feedback Control

@inproceedings{Meulemans2021CreditAI,
  title={Credit Assignment in Neural Networks through Deep Feedback Control},
  author={Alexander Meulemans and Matilde Tristany Farinha and Javier Garc{\'i}a Ord{\'o}{\~n}ez and Pau Vilimelis Aceituno and Jo{\~a}o Sacramento and Benjamin F. Grewe},
  booktitle={NeurIPS},
  year={2021}
}
The success of deep learning sparked interest in whether the brain learns by using similar techniques for assigning credit to each synaptic weight for its contribution to the network output. However, the majority of current attempts at biologically-plausible learning methods are either non-local in time, require highly specific connectivity motifs, or have no clear link to any known mathematical optimization method. Here, we introduce Deep Feedback Control (DFC), a new learning method that uses…
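
As context for the abstract above: DFC drives the network with a feedback controller toward a desired output target and derives local weight updates from the resulting change in activity. The fragment below is a minimal, hypothetical NumPy sketch of that general idea only; the layer sizes, the simple integral controller, the fixed feedback matrices Q1 and Q2, and the exact form of the local update are illustrative assumptions, not the paper's algorithm.

# Minimal, hypothetical sketch of credit assignment driven by a feedback
# controller, in the spirit of the abstract above. Layer sizes, the integral
# controller, the feedback matrices Q1/Q2, and the local update rule are
# illustrative assumptions, not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(0)
phi = np.tanh

W1 = rng.normal(scale=0.5, size=(16, 8))    # hidden layer weights
W2 = rng.normal(scale=0.5, size=(4, 16))    # output layer weights
Q1 = rng.normal(scale=0.1, size=(16, 4))    # assumed fixed feedback into the hidden layer
Q2 = np.eye(4)                              # assumed feedback into the output layer

def controlled_update(x, y_target, W1, W2, lr=0.05, dt=0.1, steps=200):
    u = np.zeros(4)                          # control signal (integral controller state)
    for _ in range(steps):
        v1 = W1 @ x + Q1 @ u                 # feedforward drive plus feedback control
        r1 = phi(v1)
        v2 = W2 @ r1 + Q2 @ u
        y = phi(v2)
        u = u + dt * (y_target - y)          # integral control pushes the output toward the target
    # Local updates: each layer compares its controlled activity with the
    # activity its feedforward input alone would have produced.
    dW1 = np.outer(phi(v1) - phi(W1 @ x), x)
    dW2 = np.outer(phi(v2) - phi(W2 @ r1), r1)
    return W1 + lr * dW1, W2 + lr * dW2

x = rng.normal(size=8)
y_target = np.array([0.5, -0.2, 0.1, 0.3])
W1, W2 = controlled_update(x, y_target, W1, W2)

The intuition is that, as such updates accumulate over many input-target pairs, less and less control is needed for the network to reach its targets.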

Citations

Minimizing Control for Credit Assignment with Strong Feedback
TLDR
This work presents a fundamentally novel view of learning as control minimization, while sidestepping biologically unrealistic assumptions for gradient-based learning in deep neural networks.
Beyond accuracy: generalization properties of bio-plausible temporal credit assignment rules
TLDR
This analysis is the first to identify the reason for the generalization gap between artificial and biologically-plausible learning rules, which can help guide future investigations into how the brain learns solutions that generalize.
Thalamus: a brain-inspired algorithm for biologically-plausible continual learning and disentangled representations
TLDR
This work shows that a network trained on a series of tasks using traditional weight updates can infer tasks dynamically using gradient descent steps in the latent task embedding space (latent updates), and introduces Thalamus, a task-agnostic algorithm capable of discovering disentangled representations in a stream of unlabeled tasks using simple gradient descent.
Target Propagation via Regularized Inversion
TLDR
A simple version of target propagation based on a regularized inversion of network layers, easily implementable in a differentiable programming framework, is presented, and the regimes in which TP can be attractive compared to BP are delineated.
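
To make the idea named in this TLDR concrete, the fragment below computes an upstream target by a ridge-regularized inversion of a linear layer and then applies local delta-rule updates; the quadratic objective, the linear layers, and the step sizes are simplifying assumptions rather than the cited paper's exact formulation.

# Hypothetical sketch: target propagation where the upstream target is obtained
# by a regularized (ridge) inversion of the layer above. Linear layers are used
# so the inversion has a closed form; this illustrates the general idea only.
import numpy as np

def regularized_inverse_target(W, h, t_next, r=1.0):
    # Solve min_z ||W z - t_next||^2 + r * ||z - h||^2 in closed form.
    A = W.T @ W + r * np.eye(W.shape[1])
    return np.linalg.solve(A, W.T @ t_next + r * h)

rng = np.random.default_rng(1)
W1, W2 = rng.normal(size=(6, 4)), rng.normal(size=(2, 6))
x = rng.normal(size=4)
h1 = W1 @ x                       # hidden activity
y = W2 @ h1                       # network output
y_star = rng.normal(size=2)       # desired output (illustrative)
t2 = y - 0.1 * (y - y_star)       # output target: a small nudge toward the desired output
t1 = regularized_inverse_target(W2, h1, t2)   # hidden target via regularized inversion
# Each layer then moves its own output toward its local target (delta-rule updates).
W2 = W2 + 0.1 * np.outer(t2 - W2 @ h1, h1)
W1 = W1 + 0.1 * np.outer(t1 - W1 @ x, x)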

References

SHOWING 1-10 OF 84 REFERENCES
Learning to solve the credit assignment problem
TLDR
A hybrid learning approach is presented that learns to approximate the gradient and can match or exceed the performance of exact gradient-based learning in both feedforward and convolutional networks.
Biological credit assignment through dynamic inversion of feedforward networks
TLDR
This work shows that feedforward network transformations can be effectively inverted through dynamics, and derives this dynamic inversion from the perspective of feedback control, where the forward transformation is reused and dynamically interacts with fixed or random feedback to propagate error signals during the backward pass (a toy numerical sketch of this general idea appears after the reference list below).
Two Routes to Scalable Credit Assignment without Weight Symmetry
TLDR
This work investigates a recently proposed local learning rule that yields competitive performance with backpropagation and finds that it is highly sensitive to metaparameter choices, requiring laborious tuning that does not transfer across network architectures. It also investigates several non-local learning rules that relax the need for instantaneous weight transport into a more biologically-plausible "weight estimation" process.
Towards Biologically Plausible Deep Learning
TLDR
The theory about the probabilistic interpretation of auto-encoders is extended to justify improved sampling schemes based on the generative interpretation of denoising auto-encoders, and these ideas are validated on generative learning tasks.
Dendritic solutions to the credit assignment problem
Error Forward-Propagation: Reusing Feedforward Connections to Propagate Errors in Deep Learning
TLDR
Error Forward-Propagation is a plausible basis for how error feedback occurs deep in the brain, independent of, and yet in support of, the functionality underlying intricate network architectures.
A Theoretical Framework for Target Propagation
TLDR
This work analyzes target propagation (TP), a popular but not yet fully understood alternative to BP, from the standpoint of mathematical optimization and shows that TP is closely related to Gauss-Newton optimization and thus substantially differs from BP.
Learning arbitrary dynamics in efficient, balanced spiking networks using local plasticity rules
TLDR
The theory of efficient, balanced spiking networks (EBN) is fused with nonlinear adaptive control theory, resulting in a synaptic plasticity rule that depends solely on presynaptic inputs and postsynaptic feedback; the resulting networks can learn to implement complex dynamics with very small numbers of neurons and spikes and are extremely robust to noise and neuronal loss.
Enforcing balance allows local supervised learning in spiking recurrent networks
TLDR
This work shows how networks of integrate-and-fire neurons can learn arbitrary linear dynamical systems by feeding back their error as a feed-forward input, demonstrating that spiking networks can learn complex dynamics using purely local learning rules, using E/I balance as the key rather than an additional constraint.
The Brain as an Efficient and Robust Adaptive Learner
...
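
Returning to the entry "Biological credit assignment through dynamic inversion of feedforward networks" above, the fragment below is a toy sketch of inverting a feedforward map through dynamics. For simplicity the transpose of the forward weights serves as the feedback matrix, which turns the dynamics into a plain gradient flow on the backward problem; the cited work's actual dynamics, and its use of fixed or random feedback, differ, so this is purely an assumption-laden illustration.

# Toy sketch (not the cited paper's method): propagate an output error backwards
# by running dynamics whose fixed point inverts the forward transformation W.
# Using W.T as feedback makes this a simple gradient flow that converges to a
# least-squares solution of W z = e, with the forward weights reused inside the
# dynamics.
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(3, 5))          # forward transformation (3 outputs, 5 inputs)
e = rng.normal(size=3)               # output error to be sent backwards

z = np.zeros(5)                      # dynamically inverted error signal
dt = 0.05
for _ in range(5000):
    z = z + dt * W.T @ (e - W @ z)   # forward weights are reused inside the dynamics

print(np.linalg.norm(W @ z - e))     # residual should be close to zero after settling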