Corpus ID: 220495728

Contrastive Training for Improved Out-of-Distribution Detection

@article{Winkens2020ContrastiveTF,
  title={Contrastive Training for Improved Out-of-Distribution Detection},
  author={Jim Winkens and Rudy Bunel and Abhijit Guha Roy and Robert Stanforth and Vivek Natarajan and J. Ledsam and Patricia MacWilliams and P. Kohli and A. Karthikesalingam and Simon A. A. Kohl and Taylan Cemgil and S. Eslami and O. Ronneberger},
  journal={ArXiv},
  year={2020},
  volume={abs/2007.05566}
}
Reliable detection of out-of-distribution (OOD) inputs is increasingly understood to be a precondition for deployment of machine learning systems. This paper proposes and investigates the use of contrastive training to boost OOD detection performance. Unlike leading methods for OOD detection, our approach does not require access to examples labeled explicitly as OOD, which can be difficult to collect in practice. We show in extensive experiments that contrastive training significantly helps OOD…
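As a concrete illustration of the kind of contrastive training the abstract refers to, the sketch below shows an NT-Xent (SimCLR-style) loss over two augmented views of a batch; the function name, temperature value, and PyTorch framing are illustrative assumptions rather than details taken from the paper.

    # Minimal NT-Xent contrastive loss sketch (assumed SimCLR-style setup).
    import torch
    import torch.nn.functional as F

    def nt_xent_loss(z1, z2, temperature=0.5):
        # z1, z2: (N, D) embeddings of two augmented views of the same N images.
        z1 = F.normalize(z1, dim=1)
        z2 = F.normalize(z2, dim=1)
        z = torch.cat([z1, z2], dim=0)                      # (2N, D)
        sim = (z @ z.t()) / temperature                     # cosine similarities
        mask = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
        sim = sim.masked_fill(mask, float('-inf'))          # drop self-similarity
        n = z1.size(0)
        # The positive for row i is the other view of the same image (i +/- N).
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
        return F.cross_entropy(sim, targets)

The representation learned this way can then be scored by a downstream OOD detector; the loss above only sketches the contrastive ingredient.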
Multi-task Transformation Learning for Robust Out-of-Distribution Detection
TLDR: A simple framework that leverages multi-task transformation learning to train effective representations for OOD detection is proposed; it surpasses state-of-the-art OOD detection performance and robustness on several image datasets.
Evaluation of Out-of-Distribution Detection Performance of Self-Supervised Learning in a Controllable Environment
TLDR: This work evaluates the out-of-distribution (OOD) detection performance of self-supervised learning (SSL) techniques with a new evaluation framework and demonstrates improved OOD detection performance in all evaluation settings.
Exploring the Limits of Out-of-Distribution Detection
TLDR: It is demonstrated that large-scale pre-trained transformers can significantly improve the state of the art (SOTA) on a range of near-OOD tasks across different data modalities, and a new way of using just the names of outlier classes as the sole source of information, without any accompanying images, is explored.
Fine-grained Out-of-Distribution Detection with Mixup Outlier Exposure
TLDR: A new DNN training algorithm, Mixup Outlier Exposure (MixupOE), is proposed, which leverages an outlier distribution and principles from vicinal risk minimization and consistently improves fine-grained detection performance, establishing a strong baseline in these more realistic and challenging OOD detection settings.
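As a rough, hedged reading of the one-sentence summary above, the sketch below mixes in-distribution images with auxiliary outliers and mixes the targets toward the uniform distribution; the function name, Beta parameter, and exact formulation are assumptions and may differ from the actual MixupOE algorithm.

    # Hypothetical mixup-with-outliers objective (illustrative only).
    import torch
    import torch.nn.functional as F

    def mixup_oe_loss(model, x_in, y_in, x_out, num_classes, alpha=1.0):
        # x_in and x_out are assumed to have the same shape; y_in holds class indices.
        lam = torch.distributions.Beta(alpha, alpha).sample().item()
        x_mix = lam * x_in + (1.0 - lam) * x_out            # mixed inputs
        y_onehot = F.one_hot(y_in, num_classes).float()
        uniform = torch.full_like(y_onehot, 1.0 / num_classes)
        y_mix = lam * y_onehot + (1.0 - lam) * uniform      # mixed soft targets
        log_probs = F.log_softmax(model(x_mix), dim=1)
        return -(y_mix * log_probs).sum(dim=1).mean()       # soft-target cross-entropy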
OODformer: Out-Of-Distribution Detection Transformer
TLDR: This paper proposes a first-of-its-kind OOD detection architecture named OODformer, which leverages the contextualization capabilities of the transformer to exploit object concepts and their discriminative attributes, along with their co-occurrence, via visual attention.
Contrastive Predictive Coding for Anomaly Detection
TLDR: This paper shows that its patch-wise contrastive loss can be interpreted directly as an anomaly score and used to create anomaly segmentation masks, achieving promising results for both anomaly detection and segmentation on the challenging MVTec-AD dataset.
Sample-free white-box out-of-distribution detection for deep learning
  • Jean-Michel Begon, P. Geurts
  • Computer Science
  • 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
  • 2021
Being able to detect irrelevant test examples with respect to deployed deep learning models is paramount to properly and safely using them. In this paper, we address the problem of rejecting such…
Adversarial Self-Supervised Learning for Out-of-Domain Detection
TLDR: A self-supervised contrastive learning framework to model discriminative semantic features of both in-domain intents and OOD intents from unlabeled data is proposed, and an adversarial augmentation neural module is introduced to improve the efficiency and robustness of contrastive learning.
Contrastive Predictive Coding for Anomaly Detection and Segmentation
Reliable detection of anomalies is crucial when deploying machine learning models in practice, but remains challenging due to the lack of labeled data. To tackle this challenge, contrastive learning…
Masked Contrastive Learning for Anomaly Detection
TLDR: A task-specific variant of contrastive learning named masked contrastive learning, better suited to anomaly detection, is proposed, along with a new inference method dubbed self-ensemble inference that further boosts performance by leveraging the ability learned through auxiliary self-supervision tasks.

References

Showing 1-10 of 39 references
Self-Supervised Learning for Generalizable Out-of-Distribution Detection
TLDR: This work proposes a new technique relying on self-supervision for generalizable out-of-distribution (OOD) feature learning and for rejecting such samples at inference time; it does not need to know the distribution of targeted OOD samples in advance and incurs no extra overhead compared to other methods.
Likelihood Ratios for Out-of-Distribution Detection
TLDR: This work investigates deep generative model based approaches for OOD detection, observes that the likelihood score is heavily affected by population-level background statistics, and proposes a likelihood ratio method for deep generative models which effectively corrects for these confounding background statistics.
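The score in that work is a log-likelihood ratio between a model of the full data and a background model trained on perturbed inputs; as a sketch (symbols illustrative), with p_\theta the full model and p_{\theta_0} the background model:

    \mathrm{LLR}(x) = \log p_{\theta}(x) - \log p_{\theta_0}(x)

Inputs whose likelihood is explained mostly by background statistics receive a low ratio and are flagged as OOD.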
Learning Confidence for Out-of-Distribution Detection in Neural Networks
TLDR: This work proposes a method of learning confidence estimates for neural networks that is simple to implement and produces intuitively interpretable outputs, and addresses the problem of calibrating out-of-distribution detectors.
Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples
TLDR: A novel training method for classifiers is proposed so that such inference algorithms can work better, and its effectiveness is demonstrated using deep convolutional neural networks on various popular image datasets.
Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks
TLDR: The proposed ODIN method is based on the observation that temperature scaling and adding small perturbations to the input can separate the softmax score distributions of in- and out-of-distribution images, allowing for more effective detection; it consistently outperforms the baseline approach by a large margin.
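A hedged sketch of the ODIN-style scoring described above: a temperature-scaled softmax combined with a small gradient-based input perturbation. The temperature and step size below are placeholder values, not the ones tuned in the paper.

    # ODIN-style OOD score sketch (placeholder hyperparameters).
    import torch
    import torch.nn.functional as F

    def odin_score(model, x, T=1000.0, eps=0.0014):
        x = x.clone().requires_grad_(True)
        logits = model(x) / T
        # Nudge the input so as to increase the (temperature-scaled) softmax
        # score of the predicted class.
        loss = F.cross_entropy(logits, logits.argmax(dim=1))
        loss.backward()
        x_pert = x - eps * x.grad.sign()
        with torch.no_grad():
            probs = F.softmax(model(x_pert) / T, dim=1)
        return probs.max(dim=1).values  # higher score -> more likely in-distribution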
A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks
TLDR: A simple baseline that utilizes probabilities from softmax distributions is presented and shown to be effective across computer vision, natural language processing, and automatic speech recognition tasks, and it is shown that the baseline can sometimes be surpassed.
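For comparison, the baseline described above reduces to taking the maximum softmax probability as the confidence score; a minimal sketch:

    # Maximum softmax probability (MSP) score sketch.
    import torch.nn.functional as F

    def msp_score(logits):
        # Low scores suggest misclassified or out-of-distribution inputs.
        return F.softmax(logits, dim=1).max(dim=1).values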
Out-of-Distribution Detection using Multiple Semantic Label Representations
TLDR: This work proposes to use multiple semantic dense representations instead of a sparse representation as the target label for out-of-distribution detection in neural networks; the proposed model is evaluated on computer vision and speech command detection tasks and compared to previous methods.
Out-of-Distribution Detection Using an Ensemble of Self Supervised Leave-out Classifiers
TLDR: A novel margin-based loss over the softmax output, which seeks to maintain at least a margin m between the average entropy of OOD and in-distribution samples, is proposed, together with a novel method to combine the outputs of the ensemble of classifiers into an OOD detection score and class prediction.
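Taking the one-sentence description above literally, a hedged sketch of such a margin term on average entropies might look as follows; the margin value and the exact formulation in the paper may differ.

    # Illustrative entropy-margin term between OOD and in-distribution batches.
    import torch.nn.functional as F

    def entropy_margin_loss(logits_in, logits_ood, m=0.4):
        def mean_entropy(logits):
            p = F.softmax(logits, dim=1)
            return -(p * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
        # Penalized whenever OOD entropy does not exceed ID entropy by at least m.
        return F.relu(m - (mean_entropy(logits_ood) - mean_entropy(logits_in)))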
Metric Learning for Novelty and Anomaly Detection
TLDR: This work proposes to use metric learning, which avoids the drawback of the softmax layer (inherent to cross-entropy methods) of forcing the network to divide its prediction power over the learned classes.
Deep Anomaly Detection with Outlier Exposure
TLDR: In extensive experiments on natural language processing and small- and large-scale vision tasks, it is found that Outlier Exposure significantly improves detection performance; it is also found that cutting-edge generative models trained on CIFAR-10 may assign higher likelihoods to SVHN images than to CIFAR-10 images, and OE is used to mitigate this issue.
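A minimal sketch of an Outlier Exposure style objective as described above: standard cross-entropy on in-distribution data plus a term pushing predictions on auxiliary outlier data toward the uniform distribution. The weight lambda_oe is illustrative.

    # Outlier Exposure style loss sketch (illustrative weighting).
    import torch.nn.functional as F

    def oe_loss(logits_in, targets_in, logits_out, lambda_oe=0.5):
        ce_in = F.cross_entropy(logits_in, targets_in)
        # Cross-entropy between the uniform distribution over classes and the
        # model's prediction on outlier inputs.
        uniform_ce = -F.log_softmax(logits_out, dim=1).mean(dim=1).mean()
        return ce_in + lambda_oe * uniform_ce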