Corpus ID: 236428670

Probabilistic Trust Intervals for Out of Distribution Detection

@inproceedings{Singh2021ProbabilisticTI,
  title={Probabilistic Trust Intervals for Out of Distribution Detection},
  author={Gagandeep Singh and Deepak Mishra},
  year={2021}
}
  • Gagandeep Singh, Deepak Mishra
  • Published 2 February 2021
  • Computer Science
Building neural network classifiers with the ability to distinguish between in-distribution and out-of-distribution inputs is an important step towards faithful deep learning systems. Some successful approaches for this resort to architectural novelties, such as ensembles, with increased complexity in terms of the number of parameters and training procedures. Other approaches make use of surrogate samples, which are easy to create and serve as proxies for actual out-of-distribution (OOD…


References (showing 1-10 of 35)
Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples
A novel training method for classifiers so that such inference algorithms can work better; its effectiveness is demonstrated using deep convolutional neural networks on various popular image datasets.
Likelihood Ratios for Out-of-Distribution Detection
This work investigates deep generative model-based approaches for OOD detection, observes that the likelihood score is heavily affected by population-level background statistics, and proposes a likelihood ratio method for deep generative models which effectively corrects for these confounding background statistics.
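As a rough illustration of the likelihood-ratio idea (not the authors' implementation), the OOD score is the difference between the log-likelihood under the full generative model and under a background model trained on perturbed inputs. The sketch below assumes both models expose a log_prob method, as torch.distributions objects and many flow libraries do.

```python
import torch

def likelihood_ratio_score(full_model, background_model, x):
    """LLR(x) = log p_full(x) - log p_background(x).
    Higher values suggest x is better explained by the in-distribution
    (semantic) component than by background statistics alone."""
    with torch.no_grad():
        return full_model.log_prob(x) - background_model.log_prob(x)
```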
A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks
This paper proposes a simple yet effective method for detecting any abnormal samples, applicable to any pre-trained softmax neural classifier, which obtains class-conditional Gaussian distributions with respect to (low- and upper-level) features of the deep models under Gaussian discriminant analysis.
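A minimal sketch of a Mahalanobis-distance confidence score of this kind, assuming penultimate-layer features have already been extracted; the function names and the single-layer, tied-covariance simplification are illustrative rather than the paper's exact procedure (which also combines multiple feature layers and input preprocessing).

```python
import torch

def fit_class_gaussians(feats, labels, num_classes):
    """Fit class-conditional Gaussians with a shared (tied) covariance,
    i.e. Gaussian discriminant analysis on deep features."""
    d = feats.shape[1]
    means = torch.stack([feats[labels == c].mean(dim=0) for c in range(num_classes)])
    centered = feats - means[labels]                      # (N, d)
    cov = centered.T @ centered / feats.shape[0]          # tied covariance
    precision = torch.linalg.inv(cov + 1e-6 * torch.eye(d))
    return means, precision

def mahalanobis_score(x_feat, means, precision):
    """Confidence score = negative Mahalanobis distance to the closest class mean.
    Smaller distance (larger score) -> more in-distribution."""
    diffs = x_feat.unsqueeze(1) - means.unsqueeze(0)      # (B, C, d)
    dists = torch.einsum('bcd,de,bce->bc', diffs, precision, diffs)
    return -dists.min(dim=1).values                       # (B,)
```

The resulting score can then be thresholded on a held-out validation split to flag OOD inputs.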
Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks
The proposed ODIN method is based on the observation that temperature scaling and small input perturbations can separate the softmax score distributions of in- and out-of-distribution images, allowing for more effective detection; it consistently outperforms the baseline approach by a large margin.
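A hedged sketch of an ODIN-style score for a PyTorch classifier: the temperature-scaled maximum softmax probability, computed on an input that has been nudged in the direction that increases the predicted class's softmax score. The temperature and perturbation magnitude are hyperparameters tuned on validation data; the defaults below are only placeholders.

```python
import torch
import torch.nn.functional as F

def odin_score(model, x, temperature=1000.0, epsilon=0.0014):
    """ODIN-style score: perturb the input toward a higher max softmax
    probability, then take the temperature-scaled max softmax."""
    x = x.clone().requires_grad_(True)
    log_probs = F.log_softmax(model(x) / temperature, dim=1)
    # Gradient of the predicted class's log-probability w.r.t. the input
    loss = -log_probs.max(dim=1).values.sum()
    loss.backward()
    # Step against the gradient of the negative log-prob, i.e. toward higher softmax score
    x_pert = x - epsilon * x.grad.sign()
    with torch.no_grad():
        probs = F.softmax(model(x_pert) / temperature, dim=1)
    return probs.max(dim=1).values      # higher -> more likely in-distribution
```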
A Scalable Laplace Approximation for Neural Networks
This work uses recent insights from second-order optimisation for neural networks to construct a Kronecker-factored Laplace approximation to the posterior over the weights of a trained network, enabling practitioners to estimate the uncertainty of models currently used in production without having to retrain them.
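The Kronecker-factored construction itself is involved, but the simplified diagonal-Laplace sketch below (explicitly not the paper's KFAC method) conveys the idea of attaching per-weight posterior variances to an already-trained network without retraining; prior_precision is an assumed hyperparameter.

```python
import torch
import torch.nn.functional as F

def diagonal_laplace_variances(model, data_loader, prior_precision=1.0):
    """Diagonal Laplace approximation: per-weight posterior variance
    approximated as 1 / (accumulated squared gradients + prior precision),
    using only the trained network and its training data."""
    fisher = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in data_loader:
        model.zero_grad()
        F.cross_entropy(model(x), y).backward()
        for f, p in zip(fisher, model.parameters()):
            if p.grad is not None:
                f += p.grad.detach().pow(2)
    return [1.0 / (f + prior_precision) for f in fisher]
```

Predictive uncertainty is then obtained by sampling weights from these per-parameter Gaussians and averaging the resulting predictions.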
Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles
This work proposes an alternative to Bayesian neural networks that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high-quality predictive uncertainty estimates.
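A minimal sketch of the deep-ensembles recipe: average the softmax outputs of several independently trained networks and use, for example, the entropy of the averaged prediction as an uncertainty signal. The function names are illustrative.

```python
import torch
import torch.nn.functional as F

def ensemble_predict(models, x):
    """Average the softmax predictions of independently trained networks."""
    with torch.no_grad():
        probs = torch.stack([F.softmax(m(x), dim=1) for m in models]).mean(dim=0)
    return probs

def predictive_entropy(probs, eps=1e-12):
    """Entropy of the averaged prediction; higher entropy indicates the
    ensemble is uncertain, which can serve as an OOD signal."""
    return -(probs * (probs + eps).log()).sum(dim=1)
```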
On Mixup Training: Improved Calibration and Predictive Uncertainty for Deep Neural Networks
DNNs trained with mixup are significantly better calibrated and are less prone to over-confident predictions on out-of-distribution and random-noise data, suggesting that mixup be employed for classification tasks where predictive uncertainty is a significant concern.
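A short sketch of mixup batch construction, assuming float one-hot labels: each training example becomes a convex combination of two inputs and their labels, with the mixing weight drawn from a Beta(alpha, alpha) distribution.

```python
import numpy as np
import torch

def mixup_batch(x, y_onehot, alpha=0.2):
    """Mixup: convex combinations of random input pairs and their one-hot labels."""
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return x_mix, y_mix
```

The network is then trained with cross-entropy against the soft labels y_mix.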
Explaining and Harnessing Adversarial Examples
It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, supported by new quantitative results and a first explanation of the most intriguing fact about adversarial examples: their generalization across architectures and training sets.
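The fast gradient sign method introduced in that paper is a single-step perturbation along the sign of the input gradient of the loss; a minimal PyTorch sketch, with epsilon as a placeholder perturbation budget:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon=0.03):
    """Fast gradient sign method: x_adv = x + epsilon * sign(grad_x loss)."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach()
```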
Deep Anomaly Detection with Outlier Exposure
In extensive experiments on natural language processing and small- and large-scale vision tasks, Outlier Exposure is found to significantly improve detection performance; cutting-edge generative models trained on CIFAR-10 may assign higher likelihoods to SVHN images than to CIFAR-10 images, and OE is used to mitigate this issue.
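A sketch of an Outlier Exposure-style objective for classifiers, assuming an auxiliary batch of outlier inputs x_out: the usual cross-entropy on in-distribution data plus a term that pushes predictions on the outliers toward the uniform distribution (lam is a weighting hyperparameter).

```python
import torch
import torch.nn.functional as F

def outlier_exposure_loss(model, x_in, y_in, x_out, lam=0.5):
    """Cross-entropy on in-distribution data plus a term that pushes
    predictions on auxiliary outliers toward the uniform distribution."""
    ce_in = F.cross_entropy(model(x_in), y_in)
    logits_out = model(x_out)
    # Cross-entropy to the uniform distribution = logsumexp(logits) - mean(logits)
    ce_out = (torch.logsumexp(logits_out, dim=1) - logits_out.mean(dim=1)).mean()
    return ce_in + lam * ce_out
```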
A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks
A simple baseline that utilizes probabilities from softmax distributions is presented, showing its effectiveness across computer vision, natural language processing, and automatic speech recognition tasks, and showing that the baseline can sometimes be surpassed.
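The baseline amounts to thresholding the classifier's maximum softmax probability; a minimal sketch, with the threshold chosen on held-out in-distribution data.

```python
import torch
import torch.nn.functional as F

def max_softmax_score(model, x):
    """Baseline OOD score: the classifier's maximum softmax probability."""
    with torch.no_grad():
        return F.softmax(model(x), dim=1).max(dim=1).values

def is_in_distribution(model, x, threshold=0.9):
    """Flag inputs whose max softmax probability falls below the threshold as OOD."""
    return max_softmax_score(model, x) >= threshold
```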