MOS: Towards Scaling Out-of-distribution Detection for Large Semantic Space

@article{Huang2021MOSTS,
  title={MOS: Towards Scaling Out-of-distribution Detection for Large Semantic Space},
  author={Rui Huang and Yixuan Li},
  journal={2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021},
  pages={8706-8715}
}
  • Rui Huang, Yixuan Li
  • Published 5 May 2021
  • Computer Science
  • 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Detecting out-of-distribution (OOD) inputs is a central challenge for safely deploying machine learning models in the real world. Existing solutions are mainly driven by small datasets, with low resolution and very few class labels (e.g., CIFAR). As a result, OOD detection for large-scale image classification tasks remains largely unexplored. In this paper, we bridge this critical gap by proposing a group-based OOD detection framework, along with a novel OOD scoring function termed MOS. Our key… 
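The abstract above is truncated. As a rough illustration of the group-based scoring idea, here is a minimal sketch that applies a group-wise softmax to per-group logits and scores an input by the minimum probability of each group's 'others' category; the layout that places each group's 'others' logit last is an assumption made for illustration, not the paper's exact implementation.

```python
import numpy as np

def group_ood_score(group_logits):
    """Minimal sketch of a group-based 'minimum others' score.

    group_logits: list of 1-D arrays, one per semantic group; the last entry
    of each array is assumed (for illustration) to be the logit of that
    group's 'others' category.  Returns a score where higher means more OOD.
    """
    others_probs = []
    for logits in group_logits:
        exp = np.exp(logits - logits.max())        # group-wise softmax
        probs = exp / exp.sum()
        others_probs.append(probs[-1])             # probability of 'others'
    # an input looks ID if at least one group is confident it is NOT 'others',
    # so the OOD score is the minimum 'others' probability across groups
    return min(others_probs)

# toy usage: two groups, each with 3 real classes plus one 'others' category
rng = np.random.default_rng(0)
score = group_ood_score([rng.normal(size=4), rng.normal(size=4)])
```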
CIDER: Exploiting Hyperspherical Embeddings for Out-of-Distribution Detection
TLDR
CIDER jointly optimizes two losses to promote strong ID-OOD separability: a dispersion loss that promotes large angular distances among different class prototypes, and a compactness loss that encourages samples to be close to their class prototypes.
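A minimal sketch of these two losses, assuming L2-normalized embeddings and class prototypes; the function name, temperature value, and dispersion formulation (mean pairwise prototype cosine similarity) are illustrative.

```python
import torch
import torch.nn.functional as F

def cider_style_losses(features, labels, prototypes, temperature=0.1):
    """Sketch of a compactness loss and a dispersion loss on the hypersphere.

    features:   (N, D) sample embeddings
    labels:     (N,)   class indices
    prototypes: (C, D) class prototypes
    """
    features = F.normalize(features, dim=1)
    prototypes = F.normalize(prototypes, dim=1)

    # compactness: pull each sample toward its own class prototype
    logits = features @ prototypes.t() / temperature       # (N, C) cosine sims
    loss_compact = F.cross_entropy(logits, labels)

    # dispersion: push different class prototypes apart (large angular distance)
    proto_sim = prototypes @ prototypes.t()                 # (C, C)
    mask = ~torch.eye(len(prototypes), dtype=torch.bool, device=prototypes.device)
    loss_disperse = proto_sim[mask].mean()

    return loss_compact, loss_disperse
```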
Concept-based Explanations for Out-Of-Distribution Detectors
TLDR
A framework for learning a set of concepts that satisfy the desired properties of detection completeness and concept separability is proposed, and its effectiveness in providing concept-based explanations for diverse OOD techniques is demonstrated.
Out-of-distribution Detection with Deep Nearest Neighbors
TLDR
This paper explores the efficacy of the non-parametric nearest-neighbor distance for OOD detection, which has been largely overlooked in the literature, demonstrates its effectiveness on several benchmarks, and establishes superior performance with the same model trained on ImageNet-1k.
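A minimal sketch of the nearest-neighbor score, assuming L2-normalized penultimate-layer features; the value of k is illustrative.

```python
import numpy as np

def knn_ood_score(test_feature, train_features, k=50):
    """Distance to the k-th nearest ID training feature; larger means more OOD.

    Both inputs are assumed to be L2-normalized penultimate-layer features:
    test_feature (D,), train_features (N, D).
    """
    dists = np.linalg.norm(train_features - test_feature, axis=1)
    return np.sort(dists)[k - 1]
```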
ReAct: Out-of-distribution Detection With Rectified Activations
TLDR
This work proposes ReAct, a simple and effective technique for reducing model overconfidence on OOD data, motivated by a novel analysis of the internal activations of neural networks, which exhibit highly distinctive signature patterns for OOD distributions.
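A hedged sketch of the activation rectification idea: penultimate activations are clamped at a cap (e.g., a high percentile of ID activations) before the final linear layer and an energy-style score; the pairing with the energy score and the threshold choice are assumptions here.

```python
import numpy as np

def react_style_score(penultimate, weight, bias, clip_threshold):
    """Rectify (clamp) penultimate activations, then score with logsumexp.

    penultimate:    (D,) penultimate-layer activations
    weight, bias:   final linear layer, shapes (C, D) and (C,)
    clip_threshold: rectification cap, e.g. a high percentile of ID activations
    Returns logsumexp of the logits (negative energy); higher suggests ID.
    """
    rectified = np.minimum(penultimate, clip_threshold)   # truncate abnormally high units
    logits = weight @ rectified + bias
    m = logits.max()
    return m + np.log(np.exp(logits - m).sum())           # numerically stable logsumexp
```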
Mixture Outlier Exposure for Out-of-Distribution Detection in Fine-Grained Settings
TLDR
This work proposes Mixture Outlier Exposure (MixOE), which effectively expands the covered OOD region by mixing ID data and training outliers, and regularizes the model behaviour by linearly decaying the prediction confidence as the input transitions from ID to OOD.
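A rough sketch of the mixing idea: an ID batch is mixed with an outlier batch, and the soft target decays linearly from the ID label toward a uniform distribution; the Beta-sampled mixing coefficient and other hyperparameters are illustrative.

```python
import torch
import torch.nn.functional as F

def mixoe_style_loss(model, x_id, y_id, x_outlier, num_classes, alpha=1.0):
    """Cross-entropy against a confidence target that decays from ID to OOD.

    x_id, x_outlier: batches of the same shape; y_id: (N,) ID labels.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mix = lam * x_id + (1 - lam) * x_outlier

    one_hot = F.one_hot(y_id, num_classes).float()
    uniform = torch.full_like(one_hot, 1.0 / num_classes)
    target = lam * one_hot + (1 - lam) * uniform       # linearly decayed confidence

    log_probs = F.log_softmax(model(x_mix), dim=1)
    return -(target * log_probs).sum(dim=1).mean()
```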
How Useful are Gradients for OOD Detection Really?
TLDR
A general, non-gradient based method of OOD detection which improves over previous baselines in both performance and computational efficiency is proposed.
RODD: A Self-Supervised Approach for Robust Out-of-Distribution Detection
TLDR
The proposed method, referred to as RODD, outperforms state-of-the-art detection performance on an extensive suite of OOD benchmark datasets, and it is shown empirically that a model pre-trained with self-supervised contrastive learning yields better uni-dimensional feature learning in the latent space.
On the Importance of Gradients for Detecting Distributional Shifts in the Wild
TLDR
GradNorm is presented, a simple and effective approach for detecting OOD inputs by utilizing information extracted from the gradient space, which employs the vector norm of gradients, backpropagated from the KL divergence between the softmax output and a uniform probability distribution.
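A minimal sketch of this gradient-norm score; pooling the L1 norm over all parameters (rather than only the final layer) is an illustrative simplification.

```python
import torch
import torch.nn.functional as F

def gradnorm_style_score(model, x, num_classes):
    """Norm of gradients backpropagated from the KL term between the
    softmax output and a uniform distribution.

    `model` is assumed to return logits for a single input `x` (no batch dim).
    Higher scores are expected for ID inputs.
    """
    model.zero_grad()
    log_probs = F.log_softmax(model(x.unsqueeze(0)), dim=1)
    uniform = torch.full_like(log_probs, 1.0 / num_classes)
    kl = F.kl_div(log_probs, uniform, reduction="batchmean")
    kl.backward()
    return sum(p.grad.abs().sum().item()
               for p in model.parameters() if p.grad is not None)
```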
On the Effectiveness of Sparsification for Detecting the Deep Unknowns
TLDR
The key idea is to rank weights based on a measure of contribution, and selectively use the most salient weights to derive the output for OOD detection, resulting in a sharper output distribution and stronger separability from ID data.
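A hedged sketch of contribution-based sparsification: contributions are estimated as the weight times the mean ID feature, and only the most salient fraction per output unit is used to derive the logits; the sparsity value and per-row masking granularity are illustrative.

```python
import numpy as np

def sparsified_logits(penultimate, weight, bias, mean_id_feature, sparsity=0.9):
    """Derive logits from only the most salient weights of the final layer.

    penultimate:     (D,) activations for the current input
    weight, bias:    final linear layer, shapes (C, D) and (C,)
    mean_id_feature: (D,) mean penultimate activation over ID training data
    """
    contribution = weight * mean_id_feature                  # (C, D) expected contributions
    k = max(1, int(weight.shape[1] * (1 - sparsity)))        # weights kept per output unit
    thresh = np.sort(contribution, axis=1)[:, -k][:, None]
    mask = contribution >= thresh                            # keep only the top-k per row
    logits = (weight * mask) @ penultimate + bias
    return logits   # feed into an energy or max-softmax score as usual
```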
Provable Guarantees for Understanding Out-of-distribution Detection
TLDR
This work develops an analytical framework that characterizes and unifies the theoretical understanding of OOD detection, and motivates a novel OOD detection method for neural networks, GEM, which demonstrates both theoretical and empirical superiority.
...

References

SHOWING 1-10 OF 54 REFERENCES
Discriminative out-of-distribution detection for semantic segmentation
TLDR
The proposed approach to discriminative detection of OOD pixels in input data succeeds in identifying out-of-distribution pixels while outperforming previous work by a wide margin.
MOOD: Multi-level Out-of-distribution Detection
TLDR
This paper proposes a novel framework, multi-level out-of-distribution detection (MOOD), which exploits intermediate classifier outputs for dynamic and efficient OOD inference, and extensively evaluates MOOD across 10 OOD datasets spanning a wide range of complexities.
Generalized ODIN: Detecting Out-of-Distribution Image Without Learning From Out-of-Distribution Data
TLDR
This work builds on the popular ODIN method, proposing two strategies that free it from the need to tune on OOD data while improving its OOD detection performance: a decomposed confidence score and a modified input pre-processing method.
Overcoming Classifier Imbalance for Long-Tail Object Detection With Balanced Group Softmax
  • Yu Li, Tao Wang, Jiashi Feng
  • Computer Science
    2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2020
TLDR
This work provides the first systematic analysis of the underperformance of state-of-the-art models under long-tailed distributions and proposes a novel balanced group softmax (BAGS) module for balancing the classifiers within detection frameworks through group-wise training.
Likelihood Ratios for Out-of-Distribution Detection
TLDR
This work investigates deep generative model-based approaches for OOD detection, observes that the likelihood score is heavily affected by population-level background statistics, and proposes a likelihood ratio method for deep generative models that effectively corrects for these confounding background statistics.
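A minimal sketch of the correction, assuming log-likelihoods under the full generative model and under a background model (e.g., one trained on perturbed inputs) are already available; the names here are illustrative.

```python
import numpy as np

def likelihood_ratio_score(log_p_model, log_p_background):
    """Subtracting the background log-likelihood cancels the confounding
    population-level background term; higher values suggest in-distribution
    semantic content."""
    return np.asarray(log_p_model) - np.asarray(log_p_background)
```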
Are Out-of-Distribution Detection Methods Effective on Large-Scale Datasets?
TLDR
It is found that input perturbation and temperature scaling yield the best performance on large-scale datasets regardless of the feature-space regularization strategy.
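For reference, a sketch of the temperature-scaling-plus-input-perturbation recipe (ODIN-style); the temperature and perturbation magnitude are illustrative.

```python
import torch
import torch.nn.functional as F

def perturbed_temperature_score(model, x, temperature=1000.0, epsilon=0.0014):
    """Max softmax probability after temperature scaling and a small input
    perturbation in the direction that increases that probability.
    Higher scores are expected for ID inputs; `x` is a single unbatched input."""
    x = x.clone().unsqueeze(0).requires_grad_(True)
    log_probs = F.log_softmax(model(x) / temperature, dim=1)
    log_probs.max(dim=1).values.sum().backward()
    x_perturbed = x + epsilon * x.grad.sign()
    with torch.no_grad():
        probs = F.softmax(model(x_perturbed) / temperature, dim=1)
    return probs.max().item()
```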
Label Embedding Trees for Large Multi-Class Tasks
TLDR
An algorithm for learning a tree structure of classifiers which, by optimizing the overall tree loss, provides superior accuracy to existing tree-labeling methods is proposed, along with a method that learns to embed labels in a low-dimensional space that is faster than non-embedding approaches and has superior accuracy to existing embedding approaches.
Robust Out-of-distribution Detection via Informative Outlier Mining
TLDR
This paper proposes a simple and effective method, Adversarial Training with informative Outlier Mining (ATOM), to robustify OOD detection and shows that, by carefully choosing which outliers to train on, one can significantly improve the robustness of the OOD detector.
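A rough sketch of the mining step, assuming auxiliary outliers are ranked by their current OOD scores and a slice starting at a quantile q is selected for training; q and the selection size are illustrative.

```python
import numpy as np

def mine_informative_outliers(outlier_scores, n_select, q=0.125):
    """Select informative auxiliary outliers for training the detector.

    outlier_scores: OOD scores of a large auxiliary outlier pool under the
    current model (higher = more obviously OOD).  Instead of random sampling,
    take the outliers just past the q-th quantile, i.e. those the model is
    least sure about, which are the most informative to train on.
    """
    order = np.argsort(outlier_scores)        # ascending: hardest outliers first
    start = int(q * len(outlier_scores))
    return order[start:start + n_select]      # indices of mined outliers
```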
ATOM: Robustifying Out-of-Distribution Detection Using Outlier Mining
TLDR
It is shown that, by mining informative auxiliary OOD data, one can significantly improve OOD detection performance, and somewhat surprisingly, generalize to unseen adversarial attacks.
Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples
TLDR
A novel training method for classifiers is proposed so that such inference algorithms can work better, and its effectiveness is demonstrated using deep convolutional neural networks on various popular image datasets.
...