Efficient and robust multi-task learning in the brain with modular latent primitives
@inproceedings{Marton2021EfficientAR,
  title  = {Efficient and robust multi-task learning in the brain with modular latent primitives},
  author = {Christian David M{\'a}rton and L{\'e}o Gagnon and Guillaume Lajoie and Kanaka Rajan},
  year   = {2021}
}
Biological agents do not have infinite resources to learn new things. For this reason, a central aspect of human learning is the ability to recycle previously acquired knowledge in a way that allows for faster, less resource-intensive acquisition of new skills. Despite this, how neural networks in the brain leverage existing knowledge to learn new computations is not well understood. In this work, we study this question in artificial recurrent neural networks (RNNs) trained on a corpus of…
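The abstract describes training RNNs on a corpus of tasks so that previously acquired computations can be reused. As a rough illustration of that kind of setup (a minimal sketch, not the authors' actual architecture; task names, dimensions, and toy targets are assumptions), a single recurrent network can receive a task-identity cue alongside its input and be trained on several tasks with shared recurrent dynamics:

```python
# Minimal sketch (assumption, not the paper's architecture): one vanilla RNN
# receives a task-identity cue alongside its input and is trained on several
# toy tasks, so shared recurrent dynamics can in principle be reused.
import torch
import torch.nn as nn

N_TASKS, IN_DIM, HID_DIM, OUT_DIM, T = 3, 1, 64, 1, 50

class CuedRNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.RNN(IN_DIM + N_TASKS, HID_DIM, batch_first=True)
        self.readout = nn.Linear(HID_DIM, OUT_DIM)

    def forward(self, x, task_id):
        # Append a constant one-hot task cue to the input at every time step.
        cue = torch.zeros(x.shape[0], x.shape[1], N_TASKS)
        cue[:, :, task_id] = 1.0
        h, _ = self.rnn(torch.cat([x, cue], dim=-1))
        return self.readout(h)

def toy_batch(task_id, batch=32):
    # Hypothetical toy targets: each task is a different transform of the input.
    x = torch.randn(batch, T, IN_DIM)
    y = [x, x.cumsum(dim=1), -x][task_id]
    return x, y

model = CuedRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    task_id = step % N_TASKS          # interleave tasks during training
    x, y = toy_batch(task_id)
    loss = ((model(x, task_id) - y) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```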
References
Organizing recurrent network dynamics by task-computation to enable continual learning
- Computer Science · NeurIPS · 2020
A novel learning rule is developed to minimize interference between sequentially learned tasks in recurrent networks, and it is shown that networks trained with this approach can reuse similar dynamical structures across similar tasks.
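One common way to reduce interference between sequentially learned tasks is to project each new task's weight updates away from the activity subspace used by earlier tasks. The sketch below illustrates that general idea only; it is an assumption, not necessarily the specific learning rule developed in this reference, and the shapes and thresholds are hypothetical.

```python
# Illustrative sketch of projection-based interference reduction (an assumed
# generic technique, not this paper's specific rule): gradients for a new task
# are projected away from directions spanned by activity on previous tasks.
import numpy as np

def orthogonal_projector(past_activity, energy=0.99):
    """Build a projector that removes the dominant subspace of past activity.

    past_activity: (n_samples, n_units) hidden states collected on old tasks.
    """
    _, s, vt = np.linalg.svd(past_activity, full_matrices=False)
    var = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(var, energy)) + 1   # number of protected directions
    U = vt[:k].T                                # (n_units, k) protected basis
    return np.eye(past_activity.shape[1]) - U @ U.T

def project_gradient(grad, P):
    # Apply the projector on the side of the weight matrix that mixes units.
    return grad @ P

# Usage with hypothetical shapes: 64 recurrent units, 500 stored states.
P = orthogonal_projector(np.random.randn(500, 64))
g = np.random.randn(64, 64)                     # gradient of recurrent weights
g_proj = project_gradient(g, P)
```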
Task representations in neural networks trained to perform many cognitive tasks
- Psychology, Biology · Nature Neuroscience · 2019
It is found that after training, recurrent units can develop into clusters that are functionally specialized for different cognitive processes, and a simple yet effective measure is introduced to quantify relationships between single-unit neural representations of tasks.
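A measure of this kind typically quantifies how strongly each unit is engaged by each task and then clusters units by their task profiles. The following is a sketch of one such normalized task-variance measure; the exact definition used in the reference may differ, and the data shapes here are assumptions.

```python
# Sketch of a task-variance style measure (an assumption about the general
# approach, not necessarily the exact measure in this reference): for each
# unit, compute its response variance on every task, normalize across tasks,
# then cluster units by their task-variance profiles.
import numpy as np
from sklearn.cluster import KMeans

def task_variance(activity_by_task):
    """activity_by_task: list of (n_trials, n_time, n_units) arrays, one per task."""
    tv = np.stack([a.reshape(-1, a.shape[-1]).var(axis=0) for a in activity_by_task])
    tv = tv / (tv.sum(axis=0, keepdims=True) + 1e-12)   # normalize per unit
    return tv.T                                          # (n_units, n_tasks)

# Hypothetical data: 100 units, 3 tasks, 20 trials of 50 time steps each.
acts = [np.random.randn(20, 50, 100) * (t + 1) for t in range(3)]
profiles = task_variance(acts)
labels = KMeans(n_clusters=4, n_init=10).fit_predict(profiles)  # functional clusters
```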
Gradient Flow in Recurrent Nets: the Difficulty of Learning Long-Term Dependencies
- Computer Science · 2001
A deep learning framework for neuroscience
- Computer Science · Nature Neuroscience · 2019
It is argued that a deep network is best understood in terms of components used to design it—objective functions, architecture and learning rules—rather than unit-by-unit computation.
Computing by Robust Transience: How the Fronto-Parietal Network Performs Sequential, Category-Based Decisions
- Computer Science · Neuron · 2017
Maslow's Hammer for Catastrophic Forgetting: Node Re-Use vs Node Activation
- Computer Science · ICML · 2022
This paper theoretically analyses both a synthetic teacher-student framework and a real-data setup to explain the trade-off between node activation and node re-use, which results in the worst forgetting in the intermediate regime.
The role of population structure in computations through neural dynamics
- Computer Science, Biology · Nature Neuroscience · 2022
It is shown that the dimensionality of the dynamics and the subpopulation structure play fundamentally complementary roles in neural computations, and these results lead to task-specific predictions for the structure of neural selectivity, for inactivation experiments, and for the involvement of different neurons in multi-tasking.
Rich and lazy learning of task representations in brains and neural networks
- Computer Science, Biology · bioRxiv · 2021
Evidence is reported for neural coding patterns in biological brains whose dimensionality and neural geometry are consistent with the rich learning regime, using behavioural testing and neuroimaging in humans and analysis of neural signals from macaque prefrontal cortex.
Inferring brain-wide interactions using data-constrained recurrent neural network models
- Biology, Computer Science · bioRxiv · 2020
Current-Based Decomposition (CURBD) is introduced: an approach for inferring brain-wide interactions using data-constrained recurrent neural network models that directly reproduce experimentally obtained neural data, then leveraging the functional interactions inferred by such models to reveal directional currents between multiple brain regions.
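The core decomposition step described above splits the total current arriving at each model unit into contributions from different source regions. Below is a minimal numpy sketch of that step under stated assumptions (a trained recurrent weight matrix J, firing rates over time, and region labels; all names and shapes are illustrative, not the reference implementation).

```python
# Minimal sketch of a CURBD-style current decomposition (illustrative; variable
# names and shapes are assumptions). Given a trained recurrent weight matrix J
# and firing rates r(t), the current into region A from region B is J[A, B] @ r_B(t).
import numpy as np

def decompose_currents(J, rates, region_of):
    """J: (n, n) trained weights; rates: (n, T); region_of: (n,) region labels."""
    regions = np.unique(region_of)
    currents = {}
    for tgt in regions:
        for src in regions:
            J_block = J[np.ix_(region_of == tgt, region_of == src)]
            currents[(src, tgt)] = J_block @ rates[region_of == src]  # (n_tgt, T)
    return currents

# Hypothetical example: 60 units split across two regions, 200 time points.
n, T = 60, 200
region_of = np.array(["A"] * 30 + ["B"] * 30)
J = np.random.randn(n, n) / np.sqrt(n)
rates = np.tanh(np.random.randn(n, T))
curr = decompose_currents(J, rates, region_of)
print(curr[("B", "A")].shape)   # currents into region A driven by region B
```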