Corpus ID: 3464416

Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples

@article{Lee2018TrainingCC,
  title={Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples},
  author={Kimin Lee and Honglak Lee and Kibok Lee and Jinwoo Shin},
  journal={ArXiv},
  year={2018},
  volume={abs/1711.09325}
}
The problem of detecting whether a test sample is from in-distribution (i.e., the training distribution of the classifier) or from an out-of-distribution sufficiently different from it arises in many real-world machine learning applications. However, state-of-the-art deep neural networks are known to be highly overconfident in their predictions, i.e., they do not distinguish in- and out-of-distributions. Recently, to handle this issue, several threshold-based detectors have been proposed given pre-trained neural…
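The training objective described in the abstract combines a standard cross-entropy term on in-distribution data with a term pushing the predictive distribution on out-of-distribution (e.g., GAN-generated boundary) samples toward the uniform distribution. A minimal sketch of such a confidence-calibration loss, assuming a PyTorch-style `model` and illustrative tensor names rather than the authors' exact implementation:

```python
import math
import torch
import torch.nn.functional as F

def confidence_loss(model, x_in, y_in, x_out, beta=1.0):
    # Fit in-distribution labels with ordinary cross-entropy
    ce = F.cross_entropy(model(x_in), y_in)

    # Push predictions on OOD samples toward the uniform distribution
    log_p_out = F.log_softmax(model(x_out), dim=1)
    num_classes = log_p_out.size(1)
    # KL(U || p) = -log K - (1/K) * sum_k log p_k, minimized when p is uniform
    kl_to_uniform = (-math.log(num_classes) - log_p_out.mean(dim=1)).mean()

    return ce + beta * kl_to_uniform
```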
Out-of-distribution Detection in Classifiers via Generation
TLDR: Proposes a novel algorithm that generates out-of-distribution samples using a manifold learning network and then trains an (n+1)-class classifier for OOD detection, where the (n+1)-th class represents the OOD samples.
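The (n+1)-class idea above is straightforward to express in code. A minimal sketch, assuming a PyTorch setup where `head` is the final classification layer, `features_*` are backbone features, and the class count is hypothetical:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

num_classes = 10                               # assumed 10-class in-distribution task
head = nn.Linear(512, num_classes + 1)         # index `num_classes` is the OOD class

def n_plus_one_loss(features_in, y_in, features_ood):
    logits_in = head(features_in)
    logits_ood = head(features_ood)
    # All generated OOD samples are labelled with the extra (n+1)-th class
    y_ood = torch.full((features_ood.size(0),), num_classes,
                       dtype=torch.long, device=features_ood.device)
    return F.cross_entropy(logits_in, y_in) + F.cross_entropy(logits_ood, y_ood)
```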
Analysis of Confident-Classifiers for Out-of-distribution Detection
TLDR: This paper analyzes classifiers trained by minimizing the standard cross-entropy loss on in-distribution samples while minimizing the KL divergence between the predictive distribution on OOD samples (drawn from low-density regions of the in-distribution) and the uniform distribution, and suggests instead training a classifier with an explicit "reject" class for OOD samples.
Building robust classifiers through generation of confident out of distribution examples
TLDR: This paper introduces an alternative GAN-based approach for building a robust classifier: the GAN is used to explicitly generate out-of-distribution samples on which the classifier is confident (low entropy), and the classifier is then trained to maximize the entropy of its predictions on these samples.
Training Reject-Classifiers for Out-of-distribution Detection via Explicit Boundary Sample Generation
Discriminatively trained neural classifiers can be trusted only when the input data comes from the training distribution (in-distribution). Therefore, detecting out-of-distribution (OOD) samples is…
Learning Confidence for Out-of-Distribution Detection in Neural Networks
TLDR: This work proposes a method of learning confidence estimates for neural networks that is simple to implement and produces intuitively interpretable outputs, and addresses the problem of calibrating out-of-distribution detectors.
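A rough illustration of learned confidence estimation, under the assumption that the network has a second head emitting a raw confidence logit; the paper's exact formulation may differ. Predictions are interpolated toward the one-hot target in proportion to (1 - c), and low confidence is penalized:

```python
import torch
import torch.nn.functional as F

def confidence_estimation_loss(class_logits, confidence_logit, targets, lam=0.1):
    probs = F.softmax(class_logits, dim=1)
    c = torch.sigmoid(confidence_logit)                    # shape (batch, 1), values in (0, 1)
    onehot = F.one_hot(targets, probs.size(1)).float()
    adjusted = c * probs + (1 - c) * onehot                # "hints" scaled by 1 - c
    nll = F.nll_loss(torch.log(adjusted + 1e-12), targets)
    penalty = -torch.log(c + 1e-12).mean()                 # discourage always asking for hints
    return nll + lam * penalty
```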
Out-of-Distribution Detection Using Layer-
  • 2019
In this paper, we tackle the problem of detecting samples that are not drawn from the training distribution, i.e., out-of-distribution (OOD) samples, in classification. Many previous studies have…
A Less Biased Evaluation of Out-of-distribution Sample Detectors
TLDR: Proposes OD-test, a three-dataset evaluation scheme, as a more reliable strategy to assess progress on the problem of out-of-distribution sample detection in deep learning, and shows that previous techniques have low accuracy and are not reliable in practice.
Unsupervised Out-of-Distribution Detection by Maximum Classifier Discrepancy
  • Qing Yu, K. Aizawa
  • 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 2019
TLDR: A two-head deep convolutional neural network is proposed that maximizes the discrepancy between its two classifiers to detect OOD inputs; it significantly outperforms other state-of-the-art methods on several OOD detection benchmarks and two cases of real-world simulation.
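A minimal sketch of the two-head discrepancy score described above; `backbone`, `head1`, and `head2` are assumed to be the shared feature extractor and the two classifier heads (illustrative names, not the paper's code):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def discrepancy_score(backbone, head1, head2, x):
    feats = backbone(x)
    p1 = F.softmax(head1(feats), dim=1)
    p2 = F.softmax(head2(feats), dim=1)
    # Larger disagreement between the two heads suggests an OOD input
    return (p1 - p2).abs().sum(dim=1)
```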
Improving robustness of classifiers by training against live traffic
TLDR: Proposes an adaptive regularization technique, based on the maximum predictive probability score of a sample, which penalizes out-of-distribution samples in the incoming traffic more heavily than in-distribution samples; this ensures that the overall performance of the classifier does not degrade on in-distribution data, while detection of out-of-distribution samples is significantly improved by leveraging the unlabeled traffic data.
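A heavily hedged illustration of one plausible form of such a regularizer: each unlabeled traffic sample's push toward a uniform predictive distribution is weighted by (1 - maximum softmax probability), so likely-OOD samples are penalized more heavily. The exact weighting rule is an assumption here, not the paper's stated formula:

```python
import math
import torch
import torch.nn.functional as F

def adaptive_regularizer(logits_unlabeled):
    log_p = F.log_softmax(logits_unlabeled, dim=1)
    max_prob = log_p.exp().max(dim=1).values
    # KL to the uniform distribution, as in the confidence-loss sketch above
    kl_to_uniform = -math.log(log_p.size(1)) - log_p.mean(dim=1)
    weights = (1.0 - max_prob).detach()      # likely-OOD samples get a larger weight
    return (weights * kl_to_uniform).mean()
```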
Out-of-Distribution Detection Using an Ensemble of Self Supervised Leave-out Classifiers
TLDR: Proposes a novel margin-based loss over the softmax output, which seeks to maintain a margin of at least m between the average entropy of the OOD samples and that of the in-distribution samples, together with a novel method for combining the outputs of the ensemble of classifiers to obtain an OOD detection score and a class prediction.
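A sketch of the margin-based entropy term described above: keep the average entropy of OOD predictions at least `m` above the average entropy of in-distribution predictions. Names and the value of `m` are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def entropy(logits):
    log_p = F.log_softmax(logits, dim=1)
    return -(log_p.exp() * log_p).sum(dim=1)

def entropy_margin_loss(logits_in, logits_ood, m=0.4):
    gap = entropy(logits_ood).mean() - entropy(logits_in).mean()
    return F.relu(m - gap)                   # zero once the margin is satisfied
```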

References

Showing 1-10 of 38 references
Principled Detection of Out-of-Distribution Examples in Neural Networks
TLDR: ODIN is proposed, a simple and effective out-of-distribution detector for neural networks that does not require any change to a pre-trained model; it is based on the observation that using temperature scaling and adding small perturbations to the input can separate the softmax score distributions of in- and out-of-distribution samples, allowing for more effective detection.
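A sketch of the ODIN score as described: a temperature-scaled softmax combined with a small input perturbation in the direction that increases the top softmax score. `T` and `eps` stand in for the tuned hyper-parameters, and `model` is an assumed pre-trained classifier:

```python
import torch
import torch.nn.functional as F

def odin_score(model, x, T=1000.0, eps=0.0014):
    x = x.clone().requires_grad_(True)
    # Gradient of the temperature-scaled top log-softmax w.r.t. the input
    top_log_softmax = F.log_softmax(model(x) / T, dim=1).max(dim=1).values.sum()
    top_log_softmax.backward()
    x_perturbed = (x + eps * x.grad.sign()).detach()   # nudge toward higher confidence
    with torch.no_grad():
        scores = F.softmax(model(x_perturbed) / T, dim=1).max(dim=1).values
    return scores                                      # threshold to flag OOD inputs
```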
A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks
TLDR: Presents a simple baseline that utilizes probabilities from softmax distributions, demonstrates its effectiveness across computer vision, natural language processing, and automatic speech recognition tasks, and shows that the baseline can sometimes be surpassed.
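The baseline score is simply the maximum softmax probability; a minimal sketch, assuming a generic classifier `model`:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def msp_score(model, x):
    # A low maximum softmax probability suggests a misclassified or OOD input
    return F.softmax(model(x), dim=1).max(dim=1).values
```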
Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks
TLDR: The proposed ODIN method, based on the observation that using temperature scaling and adding small perturbations to the input can separate the softmax score distributions between in- and out-of-distribution images, allowing for more effective detection, consistently outperforms the baseline approach by a large margin.
On Calibration of Modern Neural Networks
TLDR: It is discovered that modern neural networks, unlike those from a decade ago, are poorly calibrated, and on most datasets, temperature scaling -- a single-parameter variant of Platt scaling -- is surprisingly effective at calibrating predictions.
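A sketch of temperature scaling: a single scalar T is fit on held-out validation logits by minimizing the negative log-likelihood, then used to rescale logits at test time. Variable names are illustrative:

```python
import torch
import torch.nn.functional as F

def fit_temperature(val_logits, val_labels, max_iter=100):
    log_T = torch.zeros(1, requires_grad=True)          # optimize log T to keep T positive
    optimizer = torch.optim.LBFGS([log_T], lr=0.1, max_iter=max_iter)

    def closure():
        optimizer.zero_grad()
        loss = F.cross_entropy(val_logits / log_T.exp(), val_labels)
        loss.backward()
        return loss

    optimizer.step(closure)
    return log_T.exp().item()        # calibrated probabilities: softmax(logits / T)
```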
Confident Multiple Choice Learning
TLDR: This paper proposes a new ensemble method specialized for deep neural networks, called confident multiple choice learning (CMCL), a variant of multiple choice learning (MCL) that addresses its overconfidence, and demonstrates its effectiveness via experiments on image classification on CIFAR and SVHN and on foreground-background segmentation on the iCoseg dataset.
LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop
TLDR: This work proposes to amplify human effort through a partially automated labeling scheme, leveraging deep learning with humans in the loop, and constructs a new image dataset, LSUN, which contains around one million labeled images for each of 10 scene categories and 20 object categories.
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
TLDR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
Learning Multiple Layers of Features from Tiny Images
TLDR: It is shown how to train a multi-layer generative model that learns to extract meaningful features which resemble those found in the human visual cortex, using a novel parallelization algorithm to distribute the work among multiple machines connected on a network.
Improved Techniques for Training GANs
TLDR: This work focuses on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic, and presents ImageNet samples with unprecedented resolution and shows that the methods enable the model to learn recognizable features of ImageNet classes.
Unlabeled Samples Generated by GAN Improve the Person Re-identification Baseline in Vitro
TLDR: Proposes a simple semi-supervised pipeline that only uses the original training set without collecting extra data, which effectively improves the discriminative ability of learned CNN embeddings, and proposes the label smoothing regularization for outliers (LSRO).