Toward an Integration of Deep Learning and Neuroscience

@article{Marblestone2016TowardAI,
  title={Toward an Integration of Deep Learning and Neuroscience},
  author={Adam H. Marblestone and Greg Wayne and Konrad Paul K{\"o}rding},
  journal={Frontiers in Computational Neuroscience},
  year={2016},
  volume={10}
}
Neuroscience has focused on the detailed implementation of computation, studying neural codes, dynamics and circuits. In machine learning, however, artificial neural networks tend to eschew precisely designed codes, dynamics or circuits in favor of brute force optimization of a cost function, often using simple and relatively uniform initial architectures. Two recent developments have emerged within machine learning that create an opportunity to connect these seemingly divergent perspectives…
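For concreteness, a minimal sketch (my own illustration, not from the paper) of the machine-learning view described above: a small, uniformly initialized network trained end to end by gradient descent on a single cost function, with no hand-designed codes, dynamics, or circuits.

```python
# Minimal sketch (not from the paper): a generic two-layer network trained by
# gradient descent on a single cost function, illustrating the "uniform
# architecture + cost-function optimization" view described in the abstract.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) on [-pi, pi]
X = rng.uniform(-np.pi, np.pi, size=(256, 1))
Y = np.sin(X)

# Simple, relatively uniform initial architecture: 1 -> 32 -> 1
W1 = rng.normal(0, 0.5, size=(1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, size=(32, 1)); b2 = np.zeros(1)

lr = 0.05
for step in range(2000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    Y_hat = h @ W2 + b2

    # Cost function: mean squared error
    err = Y_hat - Y
    loss = np.mean(err ** 2)

    # Backpropagate the cost's gradient ("brute force optimization")
    dY = 2 * err / len(X)
    dW2 = h.T @ dY
    db2 = dY.sum(axis=0)
    dh = dY @ W2.T * (1 - h ** 2)
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)

    # Gradient descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final MSE: {loss:.4f}")
```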
A deep learning framework for neuroscience
TLDR: It is argued that a deep network is best understood in terms of the components used to design it (objective functions, architecture, and learning rules) rather than unit-by-unit computation.
Deep Neural Networks in Computational Neuroscience
TLDR: In addition to their ability to model complex intelligent behaviours, DNNs have been shown to predict neural responses to novel sensory stimuli that cannot be predicted with any other currently available type of model.
Computational Principles of Supervised Learning in the Cerebellum.
TLDR: The principles emerging from studies of the cerebellum have striking parallels with those in other brain areas and in artificial neural networks, as well as some notable differences, which can inform future research on supervised learning and inspire next-generation machine-based algorithms.
Crossing the Cleft: Communication Challenges Between Neuroscience and Artificial Intelligence
TLDR: Cultural differences between the two fields are discussed, including divergent priorities to consider when leveraging modern-day neuroscience for AI, and small but significant cultural shifts that would greatly facilitate synergy between the two fields are highlighted.
Network Design and the Brain
TLDR: By thinking algorithmically about the goals, constraints, and optimization principles used by neural circuits, this work can develop brain-derived strategies for enhancing network design, while also stimulating experimental hypotheses about circuit development and function.
Biological constraints on neural network models of cognitive function.
TLDR: Recent advances in developing biologically grounded cognitive theories, and in mechanistically explaining, on the basis of these brain-constrained neural models, hitherto unaddressed issues regarding the nature, localization, and ontogenetic and phylogenetic development of higher brain functions, are highlighted.
Lessons From Deep Neural Networks for Studying the Coding Principles of Biological Neural Networks
TLDR: This study highlights the importance of careful assumptions and interpretations regarding the neural response to stimulus features, and suggests that comparative study of deep and biological neural networks from the perspective of machine learning can be an effective strategy for understanding the coding principles of the brain.
A Biologically Plausible Learning Rule for Deep Learning in the Brain
TLDR: It is demonstrated that AGREL and AuGMEnT generalize to deep networks if they include an attention network that propagates information about the selected action to lower network levels, and the results provide new insights into how deep learning can be implemented in the brain.
Overleaf Example
The brain is the perfect place to look for inspiration to develop more efficient neural networks. The inner workings of our synapses and neurons provide a glimpse at what the future of deep learning…
Training Spiking Neural Networks Using Lessons From Deep Learning
  • J. Eshraghian, Max Ward, +6 authors Wei D. Lu
  • Computer Science
  • ArXiv
  • 2021
TLDR: The delicate interplay between encoding data as spikes and the learning process; the challenges and solutions of applying gradient-based learning to spiking neural networks; the subtle link between temporal backpropagation and spike-timing-dependent plasticity; and how deep learning might move towards biologically plausible online learning are explored.
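A rough sketch, under my own assumptions rather than the paper's code, of the central trick the summary alludes to: the spike nonlinearity is non-differentiable, so the forward pass emits hard spikes while the backward pass substitutes a smooth surrogate derivative, letting gradients flow through a leaky integrate-and-fire (LIF) layer.

```python
# Illustrative sketch only: surrogate-gradient training of a single leaky
# integrate-and-fire (LIF) layer. Names and constants here are assumptions,
# not taken from the cited paper.
import torch

class SpikeSurrogate(torch.autograd.Function):
    """Heaviside spike in the forward pass, smooth surrogate in the backward pass."""

    @staticmethod
    def forward(ctx, membrane_minus_threshold):
        ctx.save_for_backward(membrane_minus_threshold)
        return (membrane_minus_threshold > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Fast-sigmoid-style surrogate derivative (one common choice)
        surrogate = 1.0 / (1.0 + 10.0 * x.abs()) ** 2
        return grad_output * surrogate

def lif_forward(inputs, weight, beta=0.9, threshold=1.0):
    """Run a LIF layer over a [time, batch, features] input spike tensor."""
    mem = torch.zeros(inputs.shape[1], weight.shape[0])
    spikes = []
    for t in range(inputs.shape[0]):
        current = inputs[t] @ weight.t()
        mem = beta * mem + current            # leaky integration of input current
        spk = SpikeSurrogate.apply(mem - threshold)
        mem = mem - spk * threshold           # soft reset after a spike
        spikes.append(spk)
    return torch.stack(spikes)

# Tiny usage example: 20 time steps, batch of 4, 10 -> 5 units
weight = torch.randn(5, 10, requires_grad=True)
inputs = (torch.rand(20, 4, 10) < 0.3).float()
out = lif_forward(inputs, weight)
loss = out.mean()
loss.backward()                               # gradients flow via the surrogate
print(weight.grad.abs().mean())
```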

References

Showing 1–10 of 742 references
Towards Biologically Plausible Deep Learning
TLDR: The theory of the probabilistic interpretation of auto-encoders is extended to justify improved sampling schemes based on the generative interpretation of denoising auto-encoders, and these ideas are validated on generative learning tasks.
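A loose sketch (my own construction, not the paper's code) of the idea behind that summary: a denoising autoencoder is trained to undo input corruption, and under its generative interpretation approximate samples can be drawn by repeatedly corrupting and reconstructing.

```python
# Sketch under assumptions (not the paper's code): a denoising autoencoder
# trained to undo Gaussian corruption, then sampled via an alternating
# "corrupt -> reconstruct" chain, per the generative interpretation.
import torch
import torch.nn as nn

dae = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 784), nn.Sigmoid())
opt = torch.optim.Adam(dae.parameters(), lr=1e-3)
noise_std = 0.3

def train_step(x_clean):
    """One denoising step: reconstruct x_clean from a corrupted copy."""
    x_noisy = x_clean + noise_std * torch.randn_like(x_clean)
    loss = nn.functional.mse_loss(dae(x_noisy), x_clean)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def sample(n_steps=50):
    """Approximate sampling: alternate corruption and denoising reconstruction."""
    x = torch.rand(1, 784)                      # arbitrary starting point
    for _ in range(n_steps):
        x = dae(x + noise_std * torch.randn_like(x)).detach()
    return x

# Usage with random stand-in data (replace with real inputs, e.g. image pixels)
batch = torch.rand(64, 784)
print(train_step(batch))
print(sample().shape)
```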
Learning Through Time in the Thalamocortical Loops
TLDR: An implemented model showing how predictive learning of tumbling object trajectories can facilitate object recognition with cluttered backgrounds is described, and it is argued that this discretization of temporal context updating has a number of important computational and functional advantages.
A Computational Model of the Cerebral Cortex
TLDR: The existence of an agreed-upon cortical substrate would not only facilitate the understanding of the brain but enable researchers to combine lessons learned from biology with state-of-the-art graphical-model and machine-learning techniques to design hybrid systems that combine the best of biological and traditional computing approaches.
Searching for principles of brain computation
  • W. Maass
  • Computer Science
  • Current Opinion in Behavioral Sciences
  • 2016
TLDR: Four constraints are discussed in this short article: inherent recurrent network activity and heterogeneous dynamic properties of neurons and synapses; stereotypical spatio-temporal activity patterns in networks of neurons; high trial-to-trial variability of network responses; and functional stability in spite of permanently ongoing changes in the network.
Random feedback weights support learning in deep neural networks
TLDR: A surprisingly simple algorithm is presented, which assigns blame by multiplying error signals by random synaptic weights, and it is shown that a network can learn to extract useful information from signals sent through these random feedback connections; in essence, the network learns to learn.
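A minimal sketch of the mechanism that summary describes, with layer sizes and the toy task chosen here purely for illustration: the backward error is carried by a fixed random matrix B rather than the transpose of the forward weights, yet the forward weights still learn.

```python
# Feedback alignment sketch (illustrative; sizes and constants are assumptions):
# the error is sent backwards through a FIXED random matrix B instead of W2.T,
# yet gradient-like updates to W1 and W2 still reduce the cost.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid, n_out = 30, 20, 10

W1 = rng.normal(0, 0.1, (n_in, n_hid))
W2 = rng.normal(0, 0.1, (n_hid, n_out))
B  = rng.normal(0, 0.1, (n_out, n_hid))   # fixed random feedback weights

# Toy task: match a random linear teacher
T = rng.normal(0, 1.0, (n_in, n_out))
X = rng.normal(0, 1.0, (512, n_in))
Y = X @ T

lr = 0.02
for step in range(3000):
    h = np.tanh(X @ W1)
    y_hat = h @ W2
    e = y_hat - Y                          # output error

    # Backward pass uses B, not W2.T (the only change from backprop)
    delta_h = (e @ B) * (1 - h ** 2)

    W2 -= lr * h.T @ e / len(X)
    W1 -= lr * X.T @ delta_h / len(X)

print("final MSE:", np.mean((np.tanh(X @ W1) @ W2 - Y) ** 2))
```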
Twenty-Five Lessons from Computational Neuromodulation
TLDR: A computationally focused review of algorithmic and implementational motifs associated with neuromodulators is provided, using decision making in the face of uncertainty as a running example.
Supervised and Unsupervised Learning with Two Sites of Synaptic Integration
TLDR: Compared to standard one-integration-site neurons, interesting physiologically inspired properties can be incorporated into neural networks with only a modest increase in complexity, building on recent research on the properties of cortical pyramidal neurons.
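Very loosely, and with my own simplifications rather than the cited model's equations: one way to picture a two-integration-site unit is a neuron whose basal compartment integrates feedforward input while its apical compartment carries a top-down teaching signal, with the mismatch between the two driving plasticity.

```python
# Loose illustrative sketch (my simplification, not the cited model): a single
# unit with two integration sites. Basal (feedforward) input sets the somatic
# prediction; an apical (top-down) signal supplies a target, and the mismatch
# between the two drives a local weight update.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(2)
w = rng.normal(0, 0.1, size=8)        # basal (feedforward) weights

def plasticity_step(x, apical_target, lr=0.1):
    """One supervised plasticity step for a single two-site unit."""
    basal = sigmoid(w @ x)            # prediction from the basal site
    error = apical_target - basal     # mismatch with the apical teaching signal
    w[:] += lr * error * x            # local, mismatch-driven weight change
    return basal

# Usage: drive the unit toward firing for one input pattern
pattern = rng.normal(size=8)
for _ in range(200):
    plasticity_step(pattern, apical_target=1.0)
print(plasticity_step(pattern, apical_target=1.0))   # approaches 1 after training
```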
Neural Networks and Neuroscience-Inspired Computer Vision
TLDR: The historical connections between neuroscience and computer science are reviewed, looking forward to a new era of potential collaboration enabled by recent rapid advances in both biologically inspired computer vision and experimental neuroscience methods.
Toward the neural implementation of structure learning
TLDR: Recent advances in developing computational frameworks that can support efficient structure learning and inductive inference may provide insight into the underlying component processes and help pave the path for uncovering their neural implementation.
The atoms of neural computation
TLDR: The search for a single canonical cortical circuit, characterized as a kind of “nonlinear spatiotemporal filter with adaptive properties”, is misguided, and there is little evidence that such uniform architectures can capture the diversity of cortical function in simple mammals.