Probabilistic Models and Generative Neural Networks: Towards an Unified Framework for Modeling Normal and Impaired Neurocognitive Functions

@article{Testolin2016ProbabilisticMA,
  title={Probabilistic Models and Generative Neural Networks: Towards an Unified Framework for Modeling Normal and Impaired Neurocognitive Functions},
  author={Alberto Testolin and Marco Zorzi},
  journal={Frontiers in Computational Neuroscience},
  year={2016},
  volume={10}
}
Connectionist models can be characterized within the more general framework of probabilistic graphical models, which efficiently describe complex statistical distributions involving a large number of interacting variables. This integration allows building more realistic computational models of cognitive functions, which more faithfully reflect the underlying neural mechanisms while at the same time providing a useful bridge to higher-level descriptions in terms of Bayesian computations. Here…
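
The generative neural networks discussed in the paper are typically built from restricted Boltzmann machines (RBMs) trained with contrastive divergence. The following minimal NumPy sketch is illustrative only (it is not the authors' code; the toy data, layer sizes, and learning rate are assumptions) and shows the basic one-step contrastive divergence (CD-1) update for a single RBM:

# Minimal sketch (not the authors' code): a restricted Boltzmann machine (RBM),
# the building block of the generative networks discussed in the paper, trained
# with one-step contrastive divergence (CD-1) on synthetic binary data.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_visible, n_hidden, lr=0.05):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases
        self.lr = lr

    def sample_h(self, v):
        p = sigmoid(v @ self.W + self.b_h)
        return p, (rng.random(p.shape) < p).astype(float)

    def sample_v(self, h):
        p = sigmoid(h @ self.W.T + self.b_v)
        return p, (rng.random(p.shape) < p).astype(float)

    def cd1_update(self, v0):
        # Positive phase: clamp the data, sample the hidden units.
        ph0, h0 = self.sample_h(v0)
        # Negative phase: one step of Gibbs sampling ("reconstruction").
        pv1, v1 = self.sample_v(h0)
        ph1, _ = self.sample_h(pv1)
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ ph0 - pv1.T @ ph1) / n
        self.b_v += self.lr * (v0 - pv1).mean(axis=0)
        self.b_h += self.lr * (ph0 - ph1).mean(axis=0)
        return np.mean((v0 - pv1) ** 2)  # reconstruction error

# Toy data: sparse binary patterns; a real simulation would use e.g. image patches.
patterns = (rng.random((500, 16)) < 0.3).astype(float)
rbm = RBM(n_visible=16, n_hidden=8)
for epoch in range(20):
    err = rbm.cd1_update(patterns)
print("final reconstruction error:", err)

Stacking several such RBMs, each trained on the hidden activities of the one below, yields the deep belief networks used as generative models of cognition in this line of work.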

Citations

A developmental approach for training deep belief networks

TLDR
iDBN is an iterative learning algorithm for deep belief networks (DBNs) that jointly updates the connection weights across all layers of the model, paving the way to its use for modeling neurocognitive development.

The Role of Architectural and Learning Constraints in Neural Network Models: A Case Study on Visual Space Coding

TLDR
It is argued that unsupervised deep learning represents an important step forward for improving neurocomputational models of perception and cognition, because it emphasizes the role of generative learning as opposed to discriminative (supervised) learning.

A brief review of connectionist models in contrast with modelling cognition

  • Helen Durgante
  • Sleep Medicine and Disorders: International Journal, 2019
TLDR
Cognitive models are usually able to predict and explain observable facts (functional or dysfunctional performance) in terms of generative probabilistic graphical models, with greater detail about how brain processes actually occur.

The Anatomy of Inference: Generative Models and Brain Structure

TLDR
It is argued that the form of the generative models required for inference constrains the way in which brain regions connect to one another; this is illustrated in four different domains: perception, planning, attention, and movement.

Computational Neuropsychology and Bayesian Inference

TLDR
A narrative review of the body of computational research addressing neuropsychological syndromes, focusing on studies that employ Bayesian frameworks to understand the link between biology and computation at the heart of neuropsychology.

Deep learning systems as complex networks

TLDR
This article proposes to study deep belief networks using techniques commonly employed in the study of complex networks, in order to gain some insights into the structural and functional properties of the computational graph resulting from the learning process.
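
As a rough illustration of that kind of analysis, the sketch below (an assumption-laden toy example, not the paper's actual pipeline) treats the weight matrices of a small layered network as a weighted graph and computes one simple complex-network measure, the node strength (weighted degree), using networkx; the layer sizes and random weights stand in for trained parameters:

# Illustrative sketch (not the paper's code): treat the weight matrices of a
# small deep network as a layered weighted graph and inspect the distribution
# of node strengths (weighted degree). Layer sizes and random weights are
# placeholders for trained parameters.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
layer_sizes = [16, 12, 8]                       # assumed toy architecture
weights = [rng.standard_normal((layer_sizes[i], layer_sizes[i + 1]))
           for i in range(len(layer_sizes) - 1)]

G = nx.Graph()
for layer, W in enumerate(weights):
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            # Edges connect unit i of one layer to unit j of the next;
            # the edge weight is the absolute connection strength.
            G.add_edge((layer, i), (layer + 1, j), weight=abs(W[i, j]))

strengths = [s for _, s in G.degree(weight="weight")]
print("nodes:", G.number_of_nodes(), "edges:", G.number_of_edges())
print("mean node strength:", np.mean(strengths))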

Numerosity discrimination in deep neural networks: Initial competence, developmental refinement and experience statistics.

TLDR
The findings suggest that it may not be necessary to assume that animals are endowed with a dedicated system for processing numerosity, since domain-general learning mechanisms can capture key characteristics others have attributed to an evolutionarily specialized number system.

Learning Numerosity Representations with Transformers: Number Generation Tasks and Out-of-Distribution Generalization

TLDR
It is shown that attention-based architectures operating at the pixel level can learn to produce well-formed images approximately containing a specific number of items, even when the target numerosity was not present in the training distribution.

Computational psychiatry: from synapses to sentience

This review considers computational psychiatry from a particular viewpoint: namely, a commitment to explaining psychopathology in terms of pathophysiology. It rests on the notion of a generative model…

References

Showing 1–10 of 110 references

Modeling language and cognition with deep unsupervised learning: a tutorial overview

TLDR
It is argued that the focus on deep architectures and generative (rather than discriminative) learning represents a crucial step forward for the connectionist modeling enterprise, because it offers a more plausible model of cortical learning as well as a way to bridge the gap between emergentist connectionist models and structured Bayesian models of cognition.

Charles Bonnet Syndrome: Evidence for a Generative Model in the Cortex?

TLDR
It is shown that homeostatic plasticity could serve to make the learnt internal model robust against, e.g., degradation of sensory input, but may overcompensate in the case of CBS, leading to hallucinations.

Deep Unsupervised Learning on a Desktop PC: A Primer for Cognitive Scientists

TLDR
It is shown how simulations of deep unsupervised learning can be easily performed on a desktop PC by exploiting the processors of low-cost graphics cards without any specific programming effort, thanks to the use of high-level programming routines (available in MATLAB or Python).

Probabilistic Inference in General Graphical Models through Sampling in Stochastic Networks of Spiking Neurons

TLDR
Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are shown to be necessary ingredients of the underlying computational organization, and the approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models.

Integrating probabilistic models of perception and interactive neural networks: a historical and tutorial review

TLDR
It is shown how a new version of the interactive activation (IA) model, the multinomial interactive activation (MIA) model, can sample correctly from the joint posterior of a proposed generative model for perception of letters in words, indicating that interactive processing is fully consistent with principled probabilistic computation.

Bayesian Computation in Recurrent Neural Circuits

TLDR
It is shown that a network architecture commonly used to model the cerebral cortex can implement Bayesian inference for an arbitrary hidden Markov model, and a new interpretation of cortical activities in terms of log posterior probabilities of stimuli occurring in the natural world is introduced.

Learning Orthographic Structure With Sequential Generative Neural Networks

TLDR
This work investigates a sequential version of the restricted Boltzmann machine (RBM), a stochastic recurrent neural network that extracts high-order structure from sensory data through unsupervised generative learning and can encode contextual information in the form of internal, distributed representations.

Neural Dynamics as Sampling: A Model for Stochastic Computation in Recurrent Networks of Spiking Neurons

TLDR
A neural network model is proposed, and a rigorous theoretical analysis shows that its neural activity implements MCMC sampling of a given distribution, for both discrete and continuous time.

The Dynamic Brain: From Spiking Neurons to Neural Masses and Cortical Fields

TLDR
It is argued that elaborating principled and informed models is a prerequisite for grounding empirical neuroscience in a cogent theoretical framework, commensurate with the achievements in the physical sciences.

Bayesian Computation Emerges in Generic Cortical Microcircuits through Spike-Timing-Dependent Plasticity

TLDR
The results suggest that the experimentally observed spontaneous activity and trial-to-trial variability of cortical neurons are essential features of their information processing capability, since their functional role is to represent probability distributions rather than static neural codes.
...