Corpus ID: 220347491

Qualitative Analysis of Monte Carlo Dropout

@article{Seoh2020QualitativeAO,
  title={Qualitative Analysis of Monte Carlo Dropout},
  author={Ronald Seoh},
  journal={ArXiv},
  year={2020},
  volume={abs/2007.01720}
}
In this report, we present a qualitative analysis of the Monte Carlo (MC) dropout method for measuring model uncertainty in neural network (NN) models. We first consider the sources of uncertainty in NNs, and briefly review Bayesian Neural Networks (BNNs), the family of Bayesian approaches to tackling uncertainty in NNs. After presenting the mathematical formulation of MC dropout, we proceed to suggest the potential benefits and associated costs of using MC dropout in typical NN models, with the results… 
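To make the idea in the abstract concrete, the sketch below illustrates the general MC dropout recipe: keep dropout active at test time, run several stochastic forward passes, and use their mean as the prediction and their spread as an uncertainty estimate. It is a minimal sketch assuming a PyTorch-style API and a hypothetical toy classifier; it is not the exact model or experimental setup used in the report.

```python
# Minimal MC dropout sketch (assumes PyTorch; the model and data are hypothetical).
import torch
import torch.nn as nn


class SmallClassifier(nn.Module):
    """A toy MLP with dropout applied before every weight layer."""

    def __init__(self, in_dim=20, hidden=64, n_classes=3, p=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Dropout(p), nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Dropout(p), nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)


def mc_dropout_predict(model, x, n_samples=50):
    """Run n_samples stochastic forward passes with dropout kept on.

    Returns the mean predictive distribution and its per-class standard
    deviation, which serves as a simple uncertainty proxy.
    """
    model.train()  # keep dropout active at inference time
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )  # shape: (n_samples, batch, n_classes)
    return probs.mean(dim=0), probs.std(dim=0)


if __name__ == "__main__":
    model = SmallClassifier()
    x = torch.randn(4, 20)  # a hypothetical batch of inputs
    mean_probs, std_probs = mc_dropout_predict(model, x)
    print(mean_probs)  # averaged class probabilities
    print(std_probs)   # spread across passes = uncertainty estimate
```

Note that this sketch switches the whole network into training mode only because the toy model contains nothing but dropout and linear layers; in a network that also uses batch normalization, one would typically re-enable only the dropout modules at test time.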
Citations

Controlled Dropout for Uncertainty Estimation
This study presents a new version of the traditional dropout layer that can be applied at each layer within the MC dropout method to quantify the uncertainty associated with NN predictions.
Confidence Aware Neural Networks for Skin Cancer Detection
This work presents three different methods for quantifying uncertainty in skin cancer detection from images, and comprehensively evaluates and compares the performance of these DNNs using novel uncertainty-related metrics.
Prognostics for Electromagnetic Relays Using Deep Learning
A deep learning pipeline for prognostics, termed the Electromagnetic Relay Useful Actuation Pipeline (EMRUA), is presented, and its average forecasting mean absolute percentage error is evaluated over the course of the entire EMR life.
Recent advances and applications of deep learning methods in materials science
A high-level overview of deep learning methods is presented, followed by a detailed discussion of recent developments of deep learning in atomistic simulation, materials imaging, spectral analysis, and natural language processing.

References

Showing 1-10 of 22 references
Markov Chain Monte Carlo and Variational Inference: Bridging the Gap
A new synthesis of variational inference and Monte Carlo methods in which one or more steps of MCMC are incorporated into the variational approximation, resulting in a rich class of inference algorithms bridging the gap between variational methods and MCMC.
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
A new theoretical framework is developed that casts dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes, which mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy.
Variational Inference: A Review for Statisticians
Variational inference (VI), a method from machine learning that approximates probability densities through optimization, is reviewed and a variant that uses stochastic optimization to scale up to massive data is derived.
Dropout as a Bayesian Approximation: Appendix
We show that a neural network with arbitrary depth and non-linearities, with dropout applied before every weight layer, is mathematically equivalent to an approximation to a well-known Bayesian model.
Bayesian Learning for Neural Networks
Bayesian Learning for Neural Networks shows that Bayesian methods allow complex neural network models to be used without fear of the "overfitting" that can occur with traditional neural network learning methods.
Single-Model Uncertainties for Deep Learning
This work proposes Simultaneous Quantile Regression (SQR), a loss function to learn all the conditional quantiles of a given target variable, which can be used to compute well-calibrated prediction intervals.
What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?
A Bayesian deep learning framework combining input-dependent aleatoric uncertainty together with epistemic uncertainty is presented, which makes the loss more robust to noisy data, also giving new state-of-the-art results on segmentation and depth regression benchmarks.
Probabilistic Backpropagation for Scalable Learning of Bayesian Neural Networks
This work presents a novel scalable method for learning Bayesian neural networks, called probabilistic backpropagation (PBP), which works by computing a forward propagation of probabilities through the network and then doing a backward computation of gradients.
Priors for Infinite Networks
In this chapter, I show that priors over network parameters can be defined in such a way that the corresponding priors over functions computed by the network reach reasonable limits as the number of hidden units goes to infinity.
Bayesian Convolutional Neural Networks with Bernoulli Approximate Variational Inference
This work presents an efficient Bayesian CNN, offering better robustness to over-fitting on small data than traditional approaches, and approximates the model's intractable posterior with Bernoulli variational distributions.