Corpus ID: 235446316

Local plasticity rules can learn deep representations using self-supervised contrastive predictions

@inproceedings{Illing2020LocalPR,
  title={Local plasticity rules can learn deep representations using self-supervised contrastive predictions},
  author={Bernd Illing and Jean Ventura and Guillaume Bellec and Wulfram Gerstner},
  year={2020}
}
Learning in the brain is poorly understood, and learning rules that respect biological constraints yet yield deep hierarchical representations are still unknown. Here, we propose a learning rule that takes inspiration from neuroscience and recent advances in self-supervised deep learning. Learning minimizes a simple layer-specific loss function and does not need to back-propagate error signals within or between layers. Instead, weight updates follow a local, Hebbian, learning rule that only…
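The abstract describes a layer-local, Hebbian, contrastive predictive rule. Below is a minimal NumPy sketch of what such an update could look like; the particular hinge margins, the prediction weights W_pred, and the toy dimensions are illustrative assumptions, not the authors' exact rule.

```python
# Minimal sketch (assumption: NumPy, toy sizes) of a layer-local, Hebbian-style
# contrastive predictive update. The weight change uses only quantities available
# at the layer (pre-/post-synaptic activity and a scalar match/mismatch factor);
# no error signal is back-propagated within or between layers.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, lr = 20, 10, 1e-2
W = rng.normal(scale=0.1, size=(d_out, d_in))         # feed-forward weights
W_pred = rng.normal(scale=0.1, size=(d_out, d_out))   # layer-local prediction weights

def relu(x):
    return np.maximum(x, 0.0)

def local_update(x_now, x_other, positive):
    """One plasticity step for a (current, future-or-shuffled) pair of inputs."""
    global W, W_pred
    h_now = relu(W @ x_now)                 # post-synaptic activity now
    h_other = relu(W @ x_other)             # post-synaptic activity at the other time
    score = (W_pred @ h_now) @ h_other      # how well the layer predicts the future
    # Hinge-like contrastive factor: raise the score for true futures (positives),
    # lower it for shuffled futures (negatives); zero once the margin is satisfied.
    if positive:
        factor = 1.0 if score < 1.0 else 0.0
    else:
        factor = -1.0 if score > -1.0 else 0.0
    # Hebbian form: scalar factor times (post-synaptic term) x (pre-synaptic term).
    W += lr * factor * np.outer(W_pred @ h_now, x_other)
    W_pred += lr * factor * np.outer(h_other, h_now)
    return score

# Toy usage: a true future (positive) and an unrelated input (negative).
x_t = rng.normal(size=d_in)
local_update(x_t, x_t + 0.1 * rng.normal(size=d_in), positive=True)
local_update(x_t, rng.normal(size=d_in), positive=False)
```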

References

Showing 1–10 of 59 references
LoCo: Local Contrastive Representation Learning
By overlapping local blocks stacked on top of each other, this work effectively increases the decoder depth and allows upper blocks to implicitly send feedback to lower blocks, which closes the performance gap between local learning and end-to-end contrastive learning algorithms for the first time.
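A rough sketch of the overlapping-block idea, assuming PyTorch and a stand-in local loss (the actual method uses a contrastive objective per block): consecutive blocks share one stage, so an upper block's loss still sends gradients into the shared stage, while the detach at each block's input keeps training otherwise local.

```python
# Sketch (assumption: PyTorch, stand-in loss) of overlapping local blocks.
import torch
import torch.nn as nn

stages = nn.ModuleList([nn.Sequential(nn.Linear(32, 32), nn.ReLU()) for _ in range(3)])

def block_loss(h):
    # Stand-in local objective; the paper uses a contrastive loss per block.
    return (h ** 2).mean()

x = torch.randn(16, 32)
h1 = stages[0](x)
h2 = stages[1](h1)
loss_block1 = block_loss(h2)            # block 1 spans stages 0-1

h1_detached = h1.detach()               # block 2 starts from a detached copy...
h3 = stages[2](stages[1](h1_detached))  # ...but re-uses (overlaps) stage 1
loss_block2 = block_loss(h3)            # block 2 spans stages 1-2

(loss_block1 + loss_block2).backward()  # stage 1 receives gradients from both blocks
```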
Dendritic cortical microcircuits approximate the backpropagation algorithm
A novel view of learning on dendritic cortical circuits and on how the brain may solve the long-standing synaptic credit assignment problem is introduced, in which error-driven synaptic plasticity adapts the network towards a global desired output.
A solution to the learning dilemma for recurrent networks of spiking neurons
A new mathematical insight shows how local eligibility traces and top-down learning signals need to be combined to enable biologically plausible online learning in recurrent spiking networks through gradient descent, including deep reinforcement learning.
Random synaptic feedback weights support error backpropagation for deep learning
A surprisingly simple mechanism that assigns blame by multiplying errors by fixed random synaptic feedback weights is presented; it can transmit teaching signals across multiple layers of neurons and performs as effectively as backpropagation on a variety of tasks.
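The mechanism is the feedback-alignment idea: replace the transposed forward weights in the backward pass with a fixed random matrix. A minimal NumPy sketch for a two-layer network, with dimensions and learning rate chosen purely for illustration:

```python
# Sketch (assumption: NumPy, toy sizes) of feedback alignment: the error is sent
# to the hidden layer through a fixed random matrix B instead of W2.T.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid, d_out, lr = 8, 16, 4, 1e-2
W1 = rng.normal(scale=0.1, size=(d_hid, d_in))
W2 = rng.normal(scale=0.1, size=(d_out, d_hid))
B = rng.normal(scale=0.1, size=(d_hid, d_out))   # fixed random feedback weights

def step(x, y):
    global W1, W2
    h = np.maximum(W1 @ x, 0.0)          # hidden activity (ReLU)
    y_hat = W2 @ h                       # linear readout
    e = y_hat - y                        # output error
    # Backprop would use W2.T @ e; feedback alignment uses the random matrix B.
    delta_h = (B @ e) * (h > 0)          # "blame" sent through random feedback
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(delta_h, x)
    return 0.5 * float(e @ e)

x, y = rng.normal(size=d_in), rng.normal(size=d_out)
for _ in range(100):
    loss = step(x, y)
print(loss)
```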
Backpropagation and the brain
It is argued that the key principles underlying backprop may indeed have a role in brain function: feedback connections may induce neural activities whose differences can be used to locally approximate error signals and hence drive effective learning in deep networks in the brain.
Spike timing-dependent plasticity: a Hebbian learning rule.
This work has examined the functional consequences of STDP directly in an increasing number of neural circuits in vivo, and revealed several layers of complexity in STDP, including its dependence on dendritic location, the nonlinear integration of synaptic modification induced by complex spike trains, and the modulation of STDP by inhibitory and neuromodulatory inputs.
Putting An End to End-to-End: Gradient-Isolated Learning of Representations
A deep learning method for local self-supervised representation learning is proposed that requires neither labels nor end-to-end backpropagation, instead exploiting the natural order in data; this allows large-scale distributed training of very deep neural networks on unlabelled datasets.
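A minimal sketch of gradient-isolated, module-wise training, assuming PyTorch: each module has its own loss and optimizer, and activations are detached before being passed upward, so no gradient crosses module boundaries. The per-module loss below is a simple contrastive score between two toy inputs, standing in for the InfoNCE objective used in the paper.

```python
# Sketch (assumption: PyTorch, toy data, stand-in contrastive loss) of
# gradient-isolated training of a stack of modules.
import torch
import torch.nn as nn
import torch.nn.functional as F

modules = nn.ModuleList([nn.Sequential(nn.Linear(32, 32), nn.ReLU()) for _ in range(3)])
optims = [torch.optim.Adam(m.parameters(), lr=1e-3) for m in modules]

def module_loss(h_a, h_b):
    # Pull representations of related inputs together, push a shuffled
    # ("negative") pairing apart.
    pos = (h_a * h_b).sum(-1)
    neg = (h_a * h_b[torch.randperm(h_b.size(0))]).sum(-1)
    return F.softplus(neg - pos).mean()

# Two "related" inputs (e.g., consecutive time steps); random toy tensors here.
x_a, x_b = torch.randn(16, 32), torch.randn(16, 32)
for m, opt in zip(modules, optims):
    h_a, h_b = m(x_a), m(x_b)
    loss = module_loss(h_a, h_b)
    opt.zero_grad()
    loss.backward()                      # gradients stay inside this module
    opt.step()
    x_a, x_b = h_a.detach(), h_b.detach()  # block gradients to the next module
```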
Burst-dependent synaptic plasticity can coordinate learning in hierarchical circuits
It is shown that if synaptic plasticity is regulated by high-frequency bursts of spikes, then neurons higher in a hierarchical circuit can coordinate the plasticity of lower-level connections.
Sparse Coding via Thresholding and Local Competition in Neural Circuits
A locally competitive algorithm (LCA) is described that solves sparse coding problems by minimizing a weighted combination of mean-squared reconstruction error and a coefficient cost function; it produces coefficients with sparsity levels comparable to the most popular centralized sparse coding algorithms while being readily suited for neural implementation.
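The LCA dynamics can be sketched as leaky integration toward the feed-forward drive, with lateral inhibition proportional to dictionary overlaps and a soft threshold producing the sparse coefficients. A minimal NumPy version, with toy sizes and constants chosen for illustration:

```python
# Sketch (assumption: NumPy, toy sizes) of LCA: thresholding and local competition
# approximately minimizing ||x - Phi a||^2 + lam * ||a||_1.
import numpy as np

rng = np.random.default_rng(0)
d, k, lam, tau, dt, steps = 16, 32, 0.2, 10.0, 1.0, 200
Phi = rng.normal(size=(d, k))
Phi /= np.linalg.norm(Phi, axis=0)       # unit-norm dictionary elements

def soft_threshold(u, lam):
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca(x):
    u = np.zeros(k)                      # internal (membrane-like) states
    drive = Phi.T @ x                    # feed-forward input to each neuron
    G = Phi.T @ Phi - np.eye(k)          # lateral competition via dictionary overlaps
    for _ in range(steps):
        a = soft_threshold(u, lam)       # sparse output coefficients
        u += (dt / tau) * (drive - u - G @ a)
    return soft_threshold(u, lam)

x = rng.normal(size=d)
a = lca(x)
print(np.count_nonzero(a), "active out of", k)
```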
Eligibility Traces and Plasticity on Behavioral Time Scales: Experimental Support of NeoHebbian Three-Factor Learning Rules
Four key experiments are reviewed that support the role of synaptic eligibility traces in combination with a third factor as a biological implementation of neoHebbian three-factor learning rules.
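The generic neoHebbian three-factor form combines a decaying synaptic eligibility trace, built from pre- and post-synaptic co-activity, with a delayed global third factor that gates the actual weight change. A minimal sketch, with binary toy activity and a hypothetical reward window standing in for the third factor:

```python
# Sketch (assumption: NumPy, toy binary activity) of a three-factor rule:
# dW = lr * third_factor * eligibility_trace.
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post, lr, tau_e, dt = 5, 3, 0.1, 2.0, 0.1
W = np.zeros((n_post, n_pre))
e = np.zeros_like(W)                      # synaptic eligibility traces

for t in np.arange(0.0, 5.0, dt):
    pre = (rng.random(n_pre) < 0.1).astype(float)    # pre-synaptic activity
    post = (rng.random(n_post) < 0.1).astype(float)  # post-synaptic activity
    # Trace: decays with time constant tau_e, grows with Hebbian co-activity.
    e += dt * (-e / tau_e + np.outer(post, pre))
    # Third factor: a delayed, global scalar (here, "reward" only near t = 4 s).
    third_factor = 1.0 if 4.0 <= t < 4.2 else 0.0
    W += lr * third_factor * e            # plasticity only when all three factors align
```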