Uncertainty through Sampling: The Correspondence of Monte Carlo Dropout and Spiking in Artificial Neural Networks

@article{Standvoss2019UncertaintyTS,
  title={Uncertainty through Sampling: The Correspondence of Monte Carlo Dropout and Spiking in Artificial Neural Networks},
  author={Kai Standvoss and Lukas Gro{\ss}berger},
  journal={2019 Conference on Cognitive Computational Neuroscience},
  year={2019}
}
Any organism that senses its environment has only an incomplete and noisy perspective on the world, which creates a necessity for nervous systems to represent uncertainty. While the principles by which biological neural ensembles encode uncertainty are still under investigation, deep learning has become a popular and effective machine learning method. In these models, sampling through dropout has been proposed as a mechanism to encode uncertainty. Moreover, dropout has previously been linked to…
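
To make the mechanism concrete, here is a minimal NumPy sketch of Monte Carlo dropout at test time (an illustration with assumed toy weights, not the authors' implementation): dropout masks stay active during inference, and the spread of predictions across repeated stochastic forward passes serves as an uncertainty estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer network with fixed random weights (stand-ins for trained ones).
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 1))

def forward(x, p_drop=0.5):
    """One stochastic forward pass with dropout left on at test time."""
    h = np.maximum(x @ W1, 0.0)             # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop     # Bernoulli dropout mask
    h = h * mask / (1.0 - p_drop)           # inverted-dropout scaling
    return h @ W2

x = rng.normal(size=(1, 8))
samples = np.stack([forward(x) for _ in range(100)])  # T = 100 MC samples

mean = samples.mean(axis=0)   # predictive mean
var = samples.var(axis=0)     # predictive variance ~ model uncertainty
print(f"prediction {mean.item():.3f} +/- {np.sqrt(var).item():.3f}")
```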

References

Showing 1-10 of 14 references
Neural Dynamics as Sampling: A Model for Stochastic Computation in Recurrent Networks of Spiking Neurons
A neural network model is proposed, and a rigorous theoretical analysis shows that its neural activity implements MCMC sampling of a given distribution, for both discrete and continuous time.
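
One concrete instance of this idea (a sketch, not the paper's model) is Gibbs sampling in a network of binary stochastic units: each unit fires with a sigmoidal probability of its input, and the sequence of network states forms an MCMC chain over a Boltzmann distribution defined by the (assumed toy) weights.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
W = rng.normal(scale=0.5, size=(n, n))
W = (W + W.T) / 2.0                      # symmetric coupling
np.fill_diagonal(W, 0.0)                 # no self-connections
b = rng.normal(scale=0.1, size=n)        # biases

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

s = rng.integers(0, 2, size=n).astype(float)  # initial binary state ("spikes")
states = []
for step in range(5000):
    i = step % n                              # sweep over units in turn
    u = W[i] @ s + b[i]                       # "membrane potential" of unit i
    s[i] = float(rng.random() < sigmoid(u))   # stochastic firing decision
    states.append(s.copy())

# Long-run firing rates approximate marginals of the stationary distribution.
print(np.mean(states[1000:], axis=0))
```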
Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines
The spiking-neuron-based S2Ms outperform existing spike-based unsupervised learners while potentially offering substantial advantages in power and complexity, making them promising models for online learning in brain-inspired hardware.
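
As a rough analogy in conventional networks (a hedged sketch; the paper's synaptic sampling machines are not reproduced here), stochastic synaptic transmission can be mimicked by sampling a Bernoulli mask per weight on every forward pass, in the spirit of DropConnect:

```python
import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(size=(16, 4))  # toy weight matrix

def stochastic_synapse_forward(x, W, p=0.5):
    # Each synapse transmits with probability p; scale to keep the expectation.
    mask = rng.random(W.shape) < p
    return x @ (W * mask) / p

x = rng.normal(size=(1, 16))
print(stochastic_synapse_forward(x, W))
```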
Spiking Deep Networks with LIF Neurons
This work demonstrates that biologically plausible spiking LIF neurons can be integrated into deep networks and perform as well as other spiking models (e.g., integrate-and-fire), and provides new methods for training deep networks to run on neuromorphic hardware.
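
For reference, a minimal leaky integrate-and-fire (LIF) neuron simulation (illustrative parameter values, not the paper's configuration): the membrane potential leaks toward rest, integrates input current, and emits a spike with a reset whenever it crosses threshold.

```python
import numpy as np

dt = 1e-3        # time step (s)
tau = 20e-3      # membrane time constant (s)
v_rest = 0.0     # resting potential
v_thresh = 1.0   # spike threshold
v_reset = 0.0    # reset potential after a spike
I = 1.2          # constant input current (arbitrary units)

v = v_rest
spikes = []
for t in range(1000):                     # simulate 1 s
    v += (-(v - v_rest) + I) * dt / tau   # leaky integration of input
    if v >= v_thresh:                     # threshold crossing -> spike
        spikes.append(t * dt)
        v = v_reset                       # hard reset

print(f"{len(spikes)} spikes in 1 s")
```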
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
A new theoretical framework casts dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes, mitigating the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy.
Bayesian models: the structure of the world, uncertainty, behavior, and the brain
The concept of graphical models is used to analyze differences and commonalities across Bayesian approaches to modeling behavioral and neural data, and to propose possible ways in which the brain can represent uncertainty.
Deep Neural Networks as Scientific Models
The dropout learning algorithm
Dropout: a simple way to prevent neural networks from overfitting
Dropout is shown to improve the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification, and computational biology, obtaining state-of-the-art results on many benchmark data sets.
Understanding Measures of Uncertainty for Adversarial Example Detection
This work highlights failure modes of MC dropout, a widely used approach for estimating uncertainty in deep models, and proposes probabilistic model ensembles to improve the quality of uncertainty estimates.
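
The two measures most often contrasted in this setting can be computed from MC dropout samples as follows (a toy sketch with simulated probabilities, not results from the paper): predictive entropy captures total uncertainty, while mutual information isolates the model (epistemic) component.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated class probabilities from T stochastic forward passes, shape (T, K).
T, K = 50, 3
logits = rng.normal(size=(T, K))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

eps = 1e-12
p_mean = probs.mean(axis=0)                                      # averaged predictive distribution
pred_entropy = -(p_mean * np.log(p_mean + eps)).sum()            # H[E[p]]: total uncertainty
exp_entropy = -(probs * np.log(probs + eps)).sum(axis=1).mean()  # E[H[p]]: expected entropy
mutual_info = pred_entropy - exp_entropy                         # I = H[E[p]] - E[H[p]]

print(f"predictive entropy: {pred_entropy:.3f}")
print(f"mutual information: {mutual_info:.3f}")
```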
…