Neurons learn by predicting future activity

@article{Luczak2020NeuronsLB,
  title={Neurons learn by predicting future activity},
  author={Artur Luczak and Bruce L. McNaughton and Yoshimasa Kubo},
  journal={Nature Machine Intelligence},
  year={2022},
  volume={4},
  pages={62--72}
}
Understanding how the brain learns may lead to machines with human-like intellectual capacities. It was previously proposed that the brain may operate on the principle of predictive coding. However, it is still not well understood how a predictive system could be implemented in the brain. Here we demonstrate that the ability of a single neuron to predict its future activity may provide an effective learning mechanism. Interestingly, this predictive learning rule can be derived from a metabolic… 
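As a rough illustration of that idea (not the authors' exact update), the sketch below uses a rate neuron that learns a second set of weights to forecast its own next activity and then treats the prediction error as a local, self-generated teaching signal; all names and constants here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_inputs, lr = 8, 0.05
w = rng.normal(scale=0.1, size=n_inputs)  # feedforward weights
v = rng.normal(scale=0.1, size=n_inputs)  # hypothetical self-prediction weights

for step in range(1000):
    x_now = rng.random(n_inputs)                       # input at time t
    x_next = 0.8 * x_now + 0.2 * rng.random(n_inputs)  # correlated next input

    a_pred = sigmoid(v @ x_now)   # neuron's own forecast of its future activity
    a_next = sigmoid(w @ x_next)  # activity that actually occurs at t+1

    err = a_next - a_pred         # local prediction error, no external teacher
    w += lr * err * x_now         # error gates a Hebbian-style weight change
    v += lr * err * x_now         # the prediction itself also improves
                                  # (delta rule; sigmoid derivative omitted)
```

The point of the toy is only that the error term is computed by the neuron itself from locally available quantities, which is what makes such a rule biologically plausible.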

Neurons learn by predicting future activity

It is demonstrated that a single neuron can predict its future activity, and that this predictive learning rule can be derived from a metabolic principle whereby neurons need to minimize their own synaptic activity (cost) while maximizing their impact on local blood supply by recruiting other neurons.

Predictive Neuronal Adaptation as a Basis for Consciousness

Theoretical, computational, and experimental support is presented for the hypothesis that neuronal adaptation is a possible biological mechanism of conscious processing, and it is discussed how this could provide a step toward a unified theory of consciousness.

Sequence anticipation and STDP emerge from a voltage-based predictive learning rule

A plasticity rule based on predictive processing is proposed, in which the neuron learns a low-rank model of the synaptic input dynamics in its membrane potential, amplifying those synapses that maximally predict other synaptic inputs based on their temporal relations.
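As a generic toy of that temporal logic (not the paper's actual voltage-based rule, whose low-rank model is more involved), the sketch below keeps a decaying trace of each synapse's past input and strengthens synapses whose traces coincide with the neuron's current drive: the input that reliably arrives first, and hence predicts the later one, is selectively potentiated, and this pre-before-post asymmetry is what gives the rule an STDP-like flavor. Constants and names are made up.

```python
import numpy as np

n_syn, T = 2, 2000
tau_trace, lr = 10.0, 0.01
w = np.full(n_syn, 0.5)
trace = np.zeros(n_syn)        # decaying memory of each synapse's past input

for t in range(T):
    # Synapse 0 always fires 5 steps before synapse 1 (fixed temporal relation).
    phase = t % 50
    x = np.array([1.0 if phase == 0 else 0.0,
                  1.0 if phase == 5 else 0.0])

    drive = w @ x                        # instantaneous synaptic drive
    trace += (x - trace) / tau_trace     # low-pass trace of past inputs

    # Synapses whose past activity coincides with the current drive count as
    # predictive and are strengthened; a weak decay keeps weights bounded.
    w += lr * (trace * drive - 0.001 * w)
    w = np.clip(w, 0.0, 1.0)

print(w)   # w[0] (the earlier, predictive synapse) ends up larger than w[1]
```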

Learning rules for cortical-like spontaneous replay of an internal model

A synaptic plasticity mechanism is presented that generates cell assemblies encoding the statistical structure of salient sensory events and spontaneously replays these assemblies in spiking recurrent neural networks; the model replicates the behavioral biases of monkeys performing perceptual decision making with surprising accuracy.

Biologically-inspired neuronal adaptation improves learning in neural networks

This study augments contrastive Hebbian learning and equilibrium propagation (EP) with Adjusted Adaptation, inspired by the adaptation effect observed in biological neurons, and finds that adaptation improves the performance of these networks.
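Adjusted Adaptation is that paper's specific mechanism; for orientation, the sketch below shows only the generic firing-rate adaptation that inspired it, where a slow variable tracks recent activity and is subtracted from the input drive, so the response to a sustained stimulus sags over time and undershoots baseline once the stimulus ends. Parameters are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

tau_a, gain = 20.0, 2.0   # adaptation time constant and strength (illustrative)
adapt = 0.0

for t in range(60):
    drive = 1.5 if t < 40 else 0.0            # step input, then silence
    rate = sigmoid(4.0 * (drive - gain * adapt))
    adapt += (rate - adapt) / tau_a           # slow variable tracks firing rate
    if t % 10 == 0:
        print(t, round(rate, 3))              # rate sags during the step and
                                              # undershoots after it ends
```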

Learning Cortical Hierarchies with Temporal Hebbian Updates

This work removes a key requirement of biologically plausible models for deep learning that conflicts with plasticity rules observed in biology, and proposes a learning mechanism that explains how the timing of neuronal activity could support supervised hierarchical learning.

Beyond accuracy: generalization properties of bio-plausible temporal credit assignment rules

It is demonstrated that state-of-the-art biologically plausible learning rules for training RNNs exhibit worse and more variable generalization than machine learning counterparts that follow the true gradient more closely, and a theorem explaining this phenomenon is presented.

Combining backpropagation with Equilibrium Propagation to improve an Actor-Critic reinforcement learning framework

This study proposes the first EP-based reinforcement learning architecture, an actor-critic model whose actor network is trained by EP, and shows that this model can solve the basic control tasks often used as benchmarks for backpropagation-based models.
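For readers new to EP: equilibrium propagation (Scellier & Bengio, 2017) trains an energy-based network by contrasting two relaxations, one free and one with the output weakly nudged toward the target, and the resulting weight update needs only locally available correlations. A minimal sketch under simplifying assumptions (all sizes and constants hypothetical; every unit is nudged here for brevity, whereas in practice only output units are):

```python
import numpy as np

rng = np.random.default_rng(2)

def rho(s):                                  # hard-sigmoid firing rate
    return np.clip(s, 0.0, 1.0)

n, beta, lr = 20, 0.5, 0.1
W = rng.normal(scale=0.1, size=(n, n))
W = (W + W.T) / 2                            # EP assumes symmetric weights
np.fill_diagonal(W, 0.0)
x = rng.random(n)                            # external input / bias
target = rng.random(n)                       # toy target pattern

def relax(s, nudge=0.0, steps=200, dt=0.1):
    """Settle the network state by gradient-like relaxation dynamics."""
    for _ in range(steps):
        s = s + dt * (-s + rho(s) @ W + x + nudge * (target - rho(s)))
    return s

s_free = relax(np.zeros(n))                  # phase 1: free equilibrium
s_nudged = relax(s_free, nudge=beta)         # phase 2: weakly nudged equilibrium

# Contrastive, purely local weight update from the two equilibria.
r0, rb = rho(s_free), rho(s_nudged)
W += lr / beta * (np.outer(rb, rb) - np.outer(r0, r0))
np.fill_diagonal(W, 0.0)
```

The cited study plugs this kind of locally trained network in as the actor of an actor-critic agent, with the critic trained by backpropagation.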

Holomorphic Equilibrium Propagation Computes Exact Gradients Through Finite Size Oscillations

Analytical insights are provided that enable scaling EP to large-scale problems, and a formal framework is established for how oscillations could support learning in biological and neuromorphic systems.
