A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks
@article{Hendrycks2017ABF,
  title   = {A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks},
  author  = {Dan Hendrycks and Kevin Gimpel},
  journal = {ArXiv},
  year    = {2017},
  volume  = {abs/1610.02136}
}
We consider the two related problems of detecting if an example is misclassified or out-of-distribution. […] We then show the baseline can sometimes be surpassed, demonstrating the room for future research on these underexplored detection tasks.
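The baseline referred to here is the maximum softmax probability (MSP): rank examples by the probability the network assigns to its predicted class and flag low-confidence ones. A minimal PyTorch sketch, assuming a generic pretrained `model` and input batch `x_batch` (both placeholders):

```python
import torch
import torch.nn.functional as F

def msp_score(model, x):
    """Maximum softmax probability (MSP) baseline: the classifier's
    top softmax probability used as a confidence score. Low values
    flag likely misclassified or out-of-distribution inputs."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(x), dim=1)  # (batch, num_classes)
        score, pred = probs.max(dim=1)      # confidence and predicted class
    return score, pred

# Usage sketch (model and x_batch are hypothetical):
# scores, preds = msp_score(model, x_batch)
# flagged = scores < 0.9  # threshold chosen on validation data
```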
1,326 Citations
Learning Confidence for Out-of-Distribution Detection in Neural Networks
- Computer Science, ArXiv
- 2018
This work proposes a method of learning confidence estimates for neural networks that is simple to implement and produces intuitively interpretable outputs, and addresses the problem of calibrating out-of-distribution detectors.
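A rough sketch of the general shape of such a method: a scalar confidence branch alongside the class logits, with training predictions interpolated toward the target in proportion to low confidence, plus a log-penalty for relying on those hints. The layer layout, `lam`, and the feature interface below are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConfidenceHead(nn.Module):
    """Adds a scalar confidence output alongside class logits.
    (Illustrative interface: takes a feature vector per example.)"""
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.classifier = nn.Linear(feat_dim, num_classes)
        self.confidence = nn.Linear(feat_dim, 1)

    def forward(self, features):
        return self.classifier(features), torch.sigmoid(self.confidence(features))

def confidence_loss(logits, conf, target, lam=0.1):
    """Sketch of the training objective: predictions are blended toward
    the one-hot target in proportion to (1 - conf), and a -log(conf)
    penalty discourages always asking for hints."""
    probs = F.softmax(logits, dim=1)
    onehot = F.one_hot(target, probs.size(1)).float()
    blended = conf * probs + (1 - conf) * onehot
    task = F.nll_loss(torch.log(blended + 1e-12), target)
    return task + lam * (-torch.log(conf + 1e-12)).mean()
```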
Principled Detection of Out-of-Distribution Examples in Neural Networks
- Computer Science, ArXiv
- 2017
ODIN is proposed, a simple and effective out-of-distribution detector for neural networks that requires no change to a pre-trained model; it is based on the observation that temperature scaling and small input perturbations can separate the softmax score distributions of in- and out-of-distribution samples, allowing for more effective detection.
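A minimal sketch of those two ingredients, temperature scaling plus a small input perturbation; `model` is a generic pretrained classifier, and the `T` and `eps` values are illustrative hyperparameters rather than the paper's tuned settings.

```python
import torch
import torch.nn.functional as F

def odin_score(model, x, T=1000.0, eps=0.0014):
    """ODIN-style score: temperature-scaled softmax of a slightly
    perturbed input. Higher scores suggest in-distribution."""
    model.eval()
    x = x.detach().clone().requires_grad_(True)

    # Step 1: nudge the input in the direction that raises the
    # maximum temperature-scaled softmax probability.
    log_probs = F.log_softmax(model(x) / T, dim=1)
    loss = -log_probs.max(dim=1).values.sum()
    loss.backward()
    x_pert = (x - eps * x.grad.sign()).detach()

    # Step 2: rescore the perturbed input at temperature T.
    with torch.no_grad():
        probs = F.softmax(model(x_pert) / T, dim=1)
    return probs.max(dim=1).values
```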
Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples
- Computer Science, ICLR
- 2018
A novel training method for classifiers is proposed so that such out-of-distribution inference algorithms can work better, and its effectiveness is demonstrated using deep convolutional neural networks on various popular image datasets.
Detecting Out-of-Distribution Examples with In-distribution Examples and Gram Matrices
- Computer Science, ArXiv
- 2019
It is found that characterizing activity patterns by Gram matrices and identifying anomalies in Gram matrix values can yield high OOD detection rates, and this method generally performs better than or equal to state-of-the-art OOD detection methods.
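The central quantity is inexpensive to compute. A sketch for a single convolutional feature map; the paper's per-class, per-layer min/max deviation bookkeeping (hinted at in the comments) is omitted.

```python
import torch

def gram_matrix(feature_map):
    """Channel-wise Gram matrix of a conv feature map.

    feature_map: (batch, channels, height, width)
    returns:     (batch, channels, channels)
    """
    b, c, h, w = feature_map.shape
    flat = feature_map.view(b, c, h * w)
    return torch.bmm(flat, flat.transpose(1, 2))  # pairwise channel correlations

# The paper then compares per-layer Gram values against per-class
# min/max ranges recorded on training data; values falling outside
# those ranges accumulate into an OOD deviation score.
```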
Out-of-Distribution Detection Using Deep Neural Networks
- Computer Science
- 2019
This work proposes a methodology for training a neural network that allows it to efficiently detect out-of-distribution (OOD) examples without compromising much of its classification accuracy on test examples from known classes.
Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks
- Computer Science, ICLR
- 2018
The proposed ODIN method is based on the observation that temperature scaling and small input perturbations can separate the softmax score distributions of in- and out-of-distribution images, allowing for more effective detection; it consistently outperforms the baseline approach by a large margin.
Out-of-distribution Detection in Classifiers via Generation
- Computer Science, ArXiv
- 2019
A novel algorithm is proposed that generates out-of-distribution samples using a manifold learning network and then trains an (n+1)-class classifier for OOD detection, where the (n+1)-th class represents the OOD samples.
Contrastive Training for Improved Out-of-Distribution Detection
- Computer Science, ArXiv
- 2020
This paper proposes and investigates the use of contrastive training to boost OOD detection performance, and introduces and employs the Confusion Log Probability (CLP) score, which quantifies the difficulty of an OOD detection task by capturing the similarity of inlier and outlier datasets.
Detecting Adversarial Examples and Other Misclassifications in Neural Networks by Introspection
- Computer Science, ArXiv
- 2019
By training a simple 3-layer neural network on top of the logit activations of an already-pretrained neural network, this work shows that misclassifications can be detected at a competitive level.
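A minimal sketch of such an introspection network under the simplest reading of the summary: a small MLP on the frozen classifier's logits, trained to predict whether the prediction is correct. Widths and names are illustrative.

```python
import torch
import torch.nn as nn

class IntrospectionNet(nn.Module):
    """3-layer MLP mapping a classifier's logits to a logit for
    'the underlying prediction is correct'."""
    def __init__(self, num_classes, hidden=128):  # hidden width is illustrative
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_classes, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, logits):
        return self.net(logits)

# Trained with binary cross-entropy against labels indicating whether
# the frozen base classifier got each training example right.
```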
Class-wise Thresholding for Detecting Out-of-Distribution Data
- Computer Science, ArXiv
- 2021
The problem of detecting out-of-distribution input data when using deep neural networks is considered, and a class-wise thresholding scheme is proposed that can apply to most existing OOD detection algorithms and can maintain similar OOD detection performance even in the presence of label shift.
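A sketch of the class-wise idea in NumPy: rather than one global cutoff on a detection score, keep one threshold per predicted class, calibrated here (an illustrative choice, not necessarily the paper's) to retain a target fraction of in-distribution validation examples per class.

```python
import numpy as np

def fit_classwise_thresholds(scores, preds, num_classes, tpr=0.95):
    """Per-class thresholds: for each predicted class, pick the score
    cutoff that keeps `tpr` of in-distribution validation examples."""
    thresholds = np.full(num_classes, -np.inf)
    for c in range(num_classes):
        s = scores[preds == c]
        if len(s) > 0:
            thresholds[c] = np.quantile(s, 1.0 - tpr)
    return thresholds

def is_in_distribution(scores, preds, thresholds):
    # Each example is compared against its own class's threshold.
    return scores >= thresholds[preds]
```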
References
Showing 1-10 of 54 references
Deep neural networks are easily fooled: High confidence predictions for unrecognizable images
- Computer Science, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2015
This work takes convolutional neural networks trained to perform well on either the ImageNet or MNIST datasets and, using evolutionary algorithms or gradient ascent, finds images that DNNs label with high confidence as belonging to each dataset class; these fooling images raise questions about the generality of DNN computer vision.
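A minimal sketch of the gradient-ascent route to such fooling images, assuming a generic pretrained `model`; the step count, learning rate, and absence of the paper's regularizers are simplifications.

```python
import torch
import torch.nn.functional as F

def fooling_image(model, target_class, steps=200, lr=0.05,
                  shape=(1, 3, 224, 224)):  # shape depends on the model
    """Gradient ascent from noise toward high confidence on one class."""
    model.eval()
    x = torch.rand(shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -F.log_softmax(model(x), dim=1)[0, target_class]
        loss.backward()
        opt.step()
        with torch.no_grad():
            x.clamp_(0, 1)  # keep the image in valid pixel range
    return x.detach()
```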
Learning Multiple Layers of Features from Tiny Images
- Computer Science
- 2009
It is shown how to train a multi-layer generative model that learns to extract meaningful features which resemble those found in the human visual cortex, using a novel parallelization algorithm to distribute the work among multiple machines connected on a network.
Calibration of Confidence Measures in Speech Recognition
- Computer Science, IEEE Transactions on Audio, Speech, and Language Processing
- 2011
Three confidence calibration methods are developed, and the importance of the key features exploited is demonstrated: the generic confidence score, the application-dependent word distribution, and the rule coverage ratio.
The Case against Accuracy Estimation for Comparing Induction Algorithms
- Computer Science, ICML
- 1998
This work describes and demonstrates what it believes to be the proper use of ROC analysis for comparative studies in machine learning research, and argues that this methodology is preferable both for making practical choices and for drawing conclusions.
Unsupervised Risk Estimation Using Only Conditional Independence Structure
- Computer Science, NIPS
- 2016
We show how to estimate a model's test error from unlabeled data, on distributions very different from the training distribution, while assuming only that certain conditional independencies are…
Posterior calibration and exploratory analysis for natural language processing models
- Computer Science, EMNLP
- 2015
It is argued that the quality of a model's posterior distribution can and should be directly evaluated, as to whether probabilities correspond to empirical frequencies, and that NLP uncertainty can be projected not only to pipeline components but also to exploratory data analysis, telling a user when to trust and when not to trust the NLP analysis.
Deep Neural Networks with Random Gaussian Weights: A Universal Classification Strategy?
- Computer Science, IEEE Transactions on Signal Processing
- 2016
It is formally proved that these networks with random Gaussian weights perform a distance-preserving embedding of the data, with a special treatment for in-class and out-of-class data.
The Precision-Recall Plot Is More Informative than the ROC Plot When Evaluating Binary Classifiers on Imbalanced Datasets
- Computer Science, PLoS ONE
- 2015
It is shown that the visual interpretability of ROC plots in the context of imbalanced datasets can be deceptive with respect to conclusions about the reliability of classification performance, owing to an intuitive but wrong interpretation of specificity.
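ROC- and precision-recall-based summaries are the standard metrics in this detection literature; a quick scikit-learn sketch computing both on made-up, imbalanced scores (the numbers are purely illustrative):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

# Hypothetical detection scores; label 1 = in-distribution (positive class).
labels = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0])  # heavily imbalanced
scores = np.array([0.9, 0.8, 0.95, 0.7, 0.85, 0.6, 0.99, 0.75, 0.65, 0.3])

print("AUROC:", roc_auc_score(labels, scores))
print("AUPR: ", average_precision_score(labels, scores))
```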
Explaining and Harnessing Adversarial Examples
- Computer Science, ICLR
- 2015
It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets.
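The paper's linearity argument yields the fast gradient sign method (FGSM): a single perturbation step along the sign of the input gradient. A minimal sketch, assuming a generic `model`, `loss_fn`, and labeled batch:

```python
import torch

def fgsm(model, loss_fn, x, y, eps=0.007):
    """Fast gradient sign method: a one-step adversarial perturbation
    in the direction that increases the loss."""
    x = x.detach().clone().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()
```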
Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks
- Computer Science, ICML
- 2006
This paper presents a novel method for training RNNs to label unsegmented sequences directly, thereby removing the need for pre-segmented training data and post-processed outputs.
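Modern frameworks expose CTC directly; a minimal PyTorch sketch with random tensors standing in for real acoustic features and transcripts:

```python
import torch
import torch.nn as nn

ctc = nn.CTCLoss(blank=0)
T, N, C, S = 50, 4, 20, 10   # time steps, batch, classes (incl. blank), target length

log_probs = torch.randn(T, N, C).log_softmax(2)            # network outputs
targets = torch.randint(1, C, (N, S), dtype=torch.long)    # labels exclude blank
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), S, dtype=torch.long)

loss = ctc(log_probs, targets, input_lengths, target_lengths)
print(loss.item())
```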