Toward an Integration of Deep Learning and Neuroscience

@article{Marblestone2016TowardAI,
  title={Toward an Integration of Deep Learning and Neuroscience},
  author={Adam H. Marblestone and Greg Wayne and Konrad Paul Kording},
  journal={Frontiers in Computational Neuroscience},
  year={2016},
  volume={10}
}
Neuroscience has focused on the detailed implementation of computation, studying neural codes, dynamics and circuits. In machine learning, however, artificial neural networks tend to eschew precisely designed codes, dynamics or circuits in favor of brute force optimization of a cost function, often using simple and relatively uniform initial architectures. Two recent developments have emerged within machine learning that create an opportunity to connect these seemingly divergent perspectives… 

A deep learning framework for neuroscience

It is argued that a deep network is best understood in terms of components used to design it—objective functions, architecture and learning rules—rather than unit-by-unit computation.

Relaxing the Constraints on Predictive Coding Models

This work shows that standard implementations of predictive coding still involve potentially neurally implausible features, such as identical forward and backward weights, backward propagation of nonlinear derivatives, and one-to-one error unit connectivity, and that these features can be removed, either directly or by learning additional sets of parameters with Hebbian update rules, without noticeable harm to learning performance.
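
As a rough illustration of the kind of relaxation described above, the following is a minimal sketch of a toy two-layer predictive coding step in which the feedback pathway uses a separate weight matrix rather than the transpose of the forward weights; the variable names, sizes, and learning rates are illustrative assumptions, not the paper's implementation.

import numpy as np

# Toy two-layer predictive coding step (illustrative sketch only).
# Latent activity x1 predicts data x0 through forward weights W; the
# relaxation shown here is that errors are routed upward through a
# separate backward matrix B instead of W.T.
rng = np.random.default_rng(0)
n0, n1 = 8, 4                              # data layer and latent layer sizes
W = rng.normal(scale=0.1, size=(n0, n1))   # forward (prediction) weights
B = rng.normal(scale=0.1, size=(n1, n0))   # separate backward weights, not W.T

x0 = rng.normal(size=n0)                   # observed data
x1 = np.zeros(n1)                          # latent activity, inferred iteratively
lr_x, lr_w = 0.1, 0.01

for _ in range(50):                        # inference: settle the latent activity
    eps = x0 - W @ x1                      # prediction error at the data layer
    x1 += lr_x * (B @ eps)                 # error sent upward through B

eps = x0 - W @ x1                          # Hebbian-style updates from local signals
W += lr_w * np.outer(eps, x1)
B += lr_w * np.outer(x1, eps)              # B can learn to track W.T without copying it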

How to incorporate biological insights into network models and why it matters.

It is argued that building biologically realistic network models is crucial to establishing causal relationships between neurons, synapses, circuits, and behavior, and the case is made for network models that consider connectivity structure and recorded activity dynamics while evaluating task performance.

Computational Principles of Supervised Learning in the Cerebellum.

The principles emerging from studies of the cerebellum have striking parallels with those in other brain areas and in artificial neural networks, as well as some notable differences, which can inform future research on supervised learning and inspire next-generation machine-based algorithms.

Neuroprospecting with DeepRL agents

Some of the current technological and algorithmic challenges in this emerging niche that AI researchers could help address are described, and some potential opportunities for cross-pollination with AI are highlighted.

Crossing the Cleft: Communication Challenges Between Neuroscience and Artificial Intelligence

Cultural differences between the two fields are discussed, including divergent priorities that should be considered when leveraging modern-day neuroscience for AI, and small but significant cultural shifts that would greatly facilitate increased synergy between the two fields are highlighted.

Network Design and the Brain

Biological constraints on neural network models of cognitive function.

Recent advances in developing biologically grounded cognitive theories, and in mechanistically explaining hitherto unaddressed issues regarding the nature, localization, and ontogenetic and phylogenetic development of higher brain functions on the basis of brain-constrained neural models, are highlighted.

Deep Neural Networks in Computational Neuroscience

Deep neural networks represent a powerful framework for building task-performing models and will drive substantial insights in computational neuroscience.

A Biologically Plausible Learning Rule for Deep Learning in the Brain

It is demonstrated that AGREL and AuGMEnT generalize to deep networks, if they include an attention network that propagates information about the selected action to lower network levels, and the results provide new insights into how deep learning can be implemented in the brain.
...

References

Showing 1-10 of 548 references

Learning Through Time in the Thalamocortical Loops

An implemented model showing how predictive learning of tumbling object trajectories can facilitate object recognition with cluttered backgrounds is described, and it is argued that this discretization of temporal context updating has a number of important computational and functional advantages.

A Computational Model of the Cerebral Cortex

The existence of an agreed-upon cortical substrate would not only facilitate the understanding of the brain but enable researchers to combine lessons learned from biology with state-of-the-art graphical-model and machine-learning techniques to design hybrid systems that combine the best of biological and traditional computing approaches.

Random feedback weights support learning in deep neural networks

A surprisingly simple algorithm is presented, which assigns blame by multiplying error signals by random synaptic weights, and it is shown that a network can learn to extract useful information from signals sent through these random feedback connections, in essence, the network learns to learn.
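
The idea can be illustrated with a minimal sketch of a one-hidden-layer network trained with fixed random feedback weights in place of the transposed forward weights; the toy task, layer sizes, and learning rate below are assumptions made for illustration and do not reproduce the authors' experiments.

import numpy as np

# Learning with fixed random feedback weights ("feedback alignment"),
# illustrative sketch on a toy regression problem.
rng = np.random.default_rng(1)
n_in, n_hid, n_out = 10, 32, 2

W1 = rng.normal(scale=0.1, size=(n_hid, n_in))   # input -> hidden
W2 = rng.normal(scale=0.1, size=(n_out, n_hid))  # hidden -> output
B = rng.normal(scale=0.1, size=(n_hid, n_out))   # fixed random feedback, replaces W2.T

lr = 0.05
for _ in range(1000):
    x = rng.normal(size=n_in)
    target = np.sin(x[:n_out])                   # arbitrary toy target
    h = np.tanh(W1 @ x)                          # forward pass
    y = W2 @ h
    e = y - target                               # output error
    delta_h = (B @ e) * (1.0 - h**2)             # blame assigned via random weights B
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(delta_h, x)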

Searching for principles of brain computation

W. Maass. Current Opinion in Behavioral Sciences, 2016.

Twenty-Five Lessons from Computational Neuromodulation

Supervised and Unsupervised Learning with Two Sites of Synaptic Integration

Thanks to recent research on the properties of cortical pyramidal neurons, neurons with two sites of synaptic integration make it possible to incorporate interesting, physiologically inspired properties into neural networks with only a modest increase in complexity compared to standard one-integration-site neurons.
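
A minimal sketch of such a unit with two sites of synaptic integration is given below: a basal compartment driven by feedforward input sets the output rate, while an apical compartment driven by a feedback signal gates a Hebbian-style update. The specific gating rule and all names are illustrative assumptions rather than the model analyzed in the paper.

import numpy as np

# Illustrative two-compartment unit: basal input drives the rate,
# apical (feedback) input modulates plasticity rather than the rate.
rng = np.random.default_rng(2)
n_in, n_fb = 6, 3
w_basal = rng.normal(scale=0.1, size=n_in)    # feedforward synapses
w_apical = rng.normal(scale=0.1, size=n_fb)   # feedback synapses

x = rng.normal(size=n_in)                     # feedforward input
fb = rng.normal(size=n_fb)                    # feedback / teaching input

rate = np.tanh(w_basal @ x)                   # output set by the basal compartment
apical = w_apical @ fb                        # apical signal, used only for learning

lr = 0.01
w_basal += lr * apical * rate * x             # Hebbian update gated by the apical signal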

Neural Networks and Neuroscience-Inspired Computer Vision

Toward the neural implementation of structure learning

The atoms of neural computation

The search for a single canonical cortical circuit, characterized as a kind of “nonlinear spatiotemporal filter with adaptive properties”, is misguided, and there is little evidence that such uniform architectures can capture the diversity of cortical function in simple mammals.

Approximate Hubel-Wiesel Modules and the Data Structures of Neural Computation

A framework for modeling the interface between perception and memory on the algorithmic level of analysis is described, based on a novel interpretation of Hubel and Wiesel's conjecture for how receptive fields tuned to complex objects, and invariant to details, could be achieved.
...