Probabilistic Models and Generative Neural Networks: Towards an Unified Framework for Modeling Normal and Impaired Neurocognitive Functions

@article{Testolin2016ProbabilisticMA,
  title={Probabilistic Models and Generative Neural Networks: Towards an Unified Framework for Modeling Normal and Impaired Neurocognitive Functions},
  author={Alberto Testolin and Marco Zorzi},
  journal={Frontiers in Computational Neuroscience},
  year={2016},
  volume={10}
}
Connectionist models can be characterized within the more general framework of probabilistic graphical models, which make it possible to efficiently describe complex statistical distributions involving a large number of interacting variables. This integration allows the building of more realistic computational models of cognitive functions, which more faithfully reflect the underlying neural mechanisms while at the same time providing a useful bridge to higher-level descriptions in terms of Bayesian computations. Here…
The Role of Architectural and Learning Constraints in Neural Network Models: A Case Study on Visual Space Coding
TLDR: It is argued that unsupervised deep learning represents an important step forward for improving neurocomputational models of perception and cognition, because it emphasizes the role of generative learning as opposed to discriminative (supervised) learning.
A brief review of connectionist models in contrast with modelling cognition
  • H. Durgante
  • Psychology
  • Sleep Medicine and Disorders: International Journal
  • 2019
Models of cognitive processes have recently been applied in neuropsychological investigations in order to provide more concrete evidence of human brain functioning. A model may be described as the…
The Anatomy of Inference: Generative Models and Brain Structure
TLDR: It is argued that the form of the generative models required for inference constrains the way in which brain regions connect to one another, as illustrated in four different domains: perception, planning, attention, and movement.
Computational Neuropsychology and Bayesian Inference
TLDR: A narrative review of the body of computational research addressing neuropsychological syndromes, with a focus on those that employ Bayesian frameworks, to understand the link between biology and computation that is at the heart of neuropsychology.
Deep learning systems as complex networks
TLDR: This article proposes to study deep belief networks using techniques commonly employed in the study of complex networks, in order to gain insights into the structural and functional properties of the computational graph resulting from the learning process.
The Challenge of Modeling the Acquisition of Mathematical Concepts
TLDR: This paper argues that computer simulation should have a primary role in filling this gap, because it allows identifying the finer-grained computational mechanisms underlying complex behavior and cognition, and discusses promising directions to push deep learning into uncharted territory.
Numerosity discrimination in deep neural networks: Initial competence, developmental refinement and experience statistics.
TLDR: The findings suggest that it may not be necessary to assume that animals are endowed with a dedicated system for processing numerosity, since domain-general learning mechanisms can capture key characteristics that others have attributed to an evolutionarily specialized number system.
Learning Numerosity Representations with Transformers: Number Generation Tasks and Out-of-Distribution Generalization
TLDR: It is shown that attention-based architectures operating at the pixel level can learn to produce well-formed images approximately containing a specific number of items, even when the target numerosity was not present in the training distribution.
An emergentist perspective on the origin of number sense
  • M. Zorzi, Alberto Testolin
  • Psychology, Medicine
  • Philosophical Transactions of the Royal Society B: Biological Sciences
  • 2018
TLDR: It is shown that deep neural networks endowed with basic visuospatial processing exhibit remarkable performance in numerosity discrimination before any experience-dependent learning, whereas unsupervised sensory experience with visual sets leads to subsequent improvement of number acuity and reduces the influence of continuous visual cues.
Degeneracy and Redundancy in Active Inference
TLDR: This paper offers a principled account of degeneracy and redundancy when function is operationalized in terms of active inference, namely a formulation of perception and action as belief updating under generative models of the world.

References

Showing 1–10 of 133 references
Modeling language and cognition with deep unsupervised learning: a tutorial overview
TLDR: It is argued that the focus on deep architectures and generative (rather than discriminative) learning represents a crucial step forward for the connectionist modeling enterprise, because it offers a more plausible model of cortical learning as well as a way to bridge the gap between emergentist connectionist models and structured Bayesian models of cognition.
Charles Bonnet Syndrome: Evidence for a Generative Model in the Cortex?
TLDR: It is shown that homeostatic plasticity could serve to make the learnt internal model robust against, e.g., degradation of sensory input, but overcompensates in the case of CBS, leading to hallucinations.
Deep Unsupervised Learning on a Desktop PC: A Primer for Cognitive Scientists
TLDR: It is shown how simulations of deep unsupervised learning can be easily performed on a desktop PC by exploiting the processors of low-cost graphics cards without any specific programming effort, thanks to the use of high-level programming routines (available in MATLAB or Python).
Probabilistic Inference in General Graphical Models through Sampling in Stochastic Networks of Spiking Neurons
TLDR: Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization and can be scaled up to neural emulations of probabilistic inference in fairly large graphical models.
Integrating probabilistic models of perception and interactive neural networks: a historical and tutorial review
TLDR: It is shown how a new version of the IA model, called the multinomial interactive activation (MIA) model, can sample correctly from the joint posterior of a proposed generative model for perception of letters in words, indicating that interactive processing is fully consistent with principled probabilistic computation.
Bayesian Computation in Recurrent Neural Circuits
TLDR: It is shown that a network architecture commonly used to model the cerebral cortex can implement Bayesian inference for an arbitrary hidden Markov model, and a new interpretation of cortical activities in terms of log posterior probabilities of stimuli occurring in the natural world is introduced.
Learning Orthographic Structure With Sequential Generative Neural Networks
TLDR: This work investigates a sequential version of the restricted Boltzmann machine (RBM), a stochastic recurrent neural network that extracts high-order structure from sensory data through unsupervised generative learning and can encode contextual information in the form of internal, distributed representations.
Neural Dynamics as Sampling: A Model for Stochastic Computation in Recurrent Networks of Spiking Neurons
TLDR: A neural network model is proposed, and it is shown by rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, for both discrete and continuous time.
The Dynamic Brain: From Spiking Neurons to Neural Masses and Cortical Fields
TLDR: It is argued that elaborating principled and informed models is a prerequisite for grounding empirical neuroscience in a cogent theoretical framework, commensurate with the achievements in the physical sciences.
Cognitive Network Neuroscience
TLDR: The methodology of network science as applied to the particular case of neuroimaging data is described, and its uses in investigating a range of cognitive functions, including sensory processing, language, emotion, attention, cognitive control, learning, and memory, are reviewed.