
Converting Cascade-Correlation Neural Nets into Probabilistic Generative Models

@article{Nobandegani2017ConvertingCN,
  title={Converting Cascade-Correlation Neural Nets into Probabilistic Generative Models},
  author={Ardavan Salehi Nobandegani and Thomas R. Shultz},
  journal={ArXiv},
  year={2017},
  volume={abs/1701.05004}
}
Humans are not only adept at recognizing what class an input instance belongs to (i.e., the classification task), but perhaps more remarkably, they can imagine (i.e., generate) plausible instances of a desired class with ease when prompted. Inspired by this, we propose a framework which allows transforming Cascade-Correlation Neural Networks (CCNNs) into probabilistic generative models, thereby enabling CCNNs to generate samples from a category of interest. CCNNs are a well-known class of…
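The abstract describes turning a trained discriminative network into a sampler for a chosen category. As a rough, generic illustration of that idea only (not the authors' actual procedure, which the abstract does not detail), the sketch below runs a Metropolis-Hastings random walk over the input space of an already-trained classifier, using the network's predicted probability for the target class as an unnormalized density. The function sample_from_class, the classifier callable, and all parameter names are hypothetical and introduced here solely for illustration.

# Hypothetical sketch: drawing class-conditional samples from a trained
# classifier via a Metropolis-Hastings random walk over its input space.
# This illustrates the general discriminative-to-generative idea only;
# it is not the specific procedure proposed in the paper.
import numpy as np

def sample_from_class(classifier, class_idx, n_samples, input_dim,
                      step=0.1, burn_in=500, rng=None):
    """classifier(x) is assumed to return a vector of class probabilities
    for a single input vector x (an assumption made for this sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.standard_normal(input_dim)       # arbitrary starting point
    p_x = classifier(x)[class_idx]           # unnormalized target "density"
    samples = []
    for t in range(burn_in + n_samples):
        proposal = x + step * rng.standard_normal(input_dim)
        p_prop = classifier(proposal)[class_idx]
        # Metropolis rule: always accept moves the network assigns higher
        # target-class probability; sometimes accept worse moves.
        if rng.random() < min(1.0, p_prop / max(p_x, 1e-12)):
            x, p_x = proposal, p_prop
        if t >= burn_in:
            samples.append(x.copy())
    return np.array(samples)

Any callable mapping an input vector to class probabilities can be passed as classifier; the returned array then contains inputs that the network confidently assigns to the requested class, which is the sense in which a discriminative net can "imagine" category members.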

Citations

A Resource-Rational Process-Level Account of the St. Petersburg Paradox
This work shows that Nobandegani et al.'s (2018) metacognitively rational model, sample-based expected utility (SbEU), can account for major experimental findings on this paradox, and presents the first resource-rational, process-level explanation of it.
A computational model of infant learning and reasoning with probabilities.
A novel computational system called Neural Probability Learner and Sampler (NPLS) that learns and reasons with probabilities, providing a computationally sufficient mechanism to explain infant probabilistic learning and inference.

References

Showing 1-10 of 33 references
Reducing Network Depth in the Cascade-Correlation Learning Architecture
This paper investigates a simple variation of Cascade-Correlation that will build deep nets if necessary, but that is biased toward minimizing network depth, and demonstrates empirically, across a range of problems, that this simple technique can reduce network depth.
Knowledge-based cascade-correlation: Using knowledge to speed learning
A new extension of a well-known generative algorithm, cascade-correlation, that recruits previously learned sub-networks as well as single hidden units and is observed to find, adapt and use its relevant knowledge to speed learning significantly.
Neural Dynamics as Sampling: A Model for Stochastic Computation in Recurrent Networks of Spiking Neurons
A neural network model is proposed and it is shown by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time.
The Cascade-Correlation Learning Architecture
The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network.
Probabilistic Inference in General Graphical Models through Sampling in Stochastic Networks of Spiking Neurons
Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization and can be scaled up to neural emulations of probabilistic inference in fairly large graphical models.
A Tutorial on Energy-Based Learning
The EBM approach provides a common theoretical framework for many learning models, including traditional discriminative and generative approaches, as well as graph-transformer networks, conditional random fields, maximum margin Markov networks, and several manifold learning methods.
Multistability and Perceptual Inference
It is argued that the visual system approximates the posterior over underlying causes with a set of samples and that this approximation strategy produces perceptual multistability—stochastic alternation between percepts in consciousness.
Rational approximations to rational models: alternative algorithms for category learning.
It is argued that Monte Carlo methods provide a source of rational process models that connect optimal solutions to psychological processes, and it is proposed that a particle filter with a single particle provides a good description of human inferences.
Neural networks discover a near-identity relation to distinguish simple syntactic forms
Computer simulations show that an unstructured neural-network model covers the essential features of infant learning of simple grammars in an artificial language and uses this near-identity relation to distinguish sentences that are consistent or inconsistent with a familiar grammar.