• Corpus ID: 244709089

ReAct: Out-of-distribution Detection With Rectified Activations

@inproceedings{Sun2021ReActOD,
  title={ReAct: Out-of-distribution Detection With Rectified Activations},
  author={Yiyou Sun and Chuan Guo and Yixuan Li},
  booktitle={NeurIPS},
  year={2021}
}
Out-of-distribution (OOD) detection has received much attention lately due to its practical importance in enhancing the safe deployment of neural networks. One of the primary challenges is that models often produce highly confident predictions on OOD data, which undermines the driving principle in OOD detection that the model should only be confident about in-distribution samples. In this work, we propose ReAct—a simple and effective technique for reducing model overconfidence on OOD data. Our… 
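The truncated abstract describes the core idea: rectify (clip) penultimate-layer activations at an upper bound before computing the OOD score. A minimal numpy sketch of that idea follows; the function name `react_energy_score`, the pairing with an energy score, and the percentile remark are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def react_energy_score(features, weights, bias, clip_threshold):
    """OOD score with ReAct-style rectification (illustrative sketch).

    features:       (n, d) penultimate-layer activations
    weights, bias:  (d, c) and (c,) final linear classifier parameters
    clip_threshold: upper bound c; the paper chooses a high percentile
                    (e.g. the 90th) of in-distribution activations.
    """
    rectified = np.minimum(features, clip_threshold)  # ReAct: x <- min(x, c)
    logits = rectified @ weights + bias
    # Energy score on the rectified logits: higher => more ID-like.
    m = logits.max(axis=1, keepdims=True)             # stable logsumexp
    return (m + np.log(np.exp(logits - m).sum(axis=1, keepdims=True))).ravel()
```

Because overconfident OOD inputs tend to produce abnormally large activations, clipping them lowers their score more than it lowers scores of in-distribution inputs.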
Out-of-distribution Detection with Deep Nearest Neighbors
TLDR
This paper explores the efficacy of non-parametric nearest-neighbor distance for OOD detection, which has been largely overlooked in the literature and shows effectiveness on several benchmarks and establishes superior performance under the same model trained on ImageNet-1k.
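The non-parametric idea summarized above is simple enough to sketch: score a test sample by its distance to the k-th nearest in-distribution training feature. The function name and the L2-normalization convention here are assumptions based on the summary, not the paper's exact code.

```python
import numpy as np

def knn_ood_score(test_feats, train_feats, k=5):
    """Distance to the k-th nearest in-distribution training feature
    (larger => more OOD-like). Features are L2-normalized first."""
    train = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    test = test_feats / np.linalg.norm(test_feats, axis=1, keepdims=True)
    # Pairwise Euclidean distances between test and training features.
    dists = np.linalg.norm(test[:, None, :] - train[None, :, :], axis=-1)
    return np.sort(dists, axis=1)[:, k - 1]
```

The brute-force distance matrix is O(n·m); a real deployment would use an approximate nearest-neighbor index instead.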
Provable Guarantees for Understanding Out-of-distribution Detection
TLDR
This work develops an analytical framework that characterizes and unifies the theoretical understanding of OOD detection, and motivates a novel OOD detection method for neural networks, GEM, which demonstrates both theoretical and empirical superiority.
CIDER: Exploiting Hyperspherical Embeddings for Out-of-Distribution Detection
TLDR
CIDER jointly optimizes two losses to promote strong ID-OOD separability: a dispersion loss that promotes large angular distances among different class prototypes, and a compactness loss that encourages samples to be close to their class prototypes.
Generalized Out-of-Distribution Detection: A Survey
TLDR
This survey presents a generic framework called generalized OOD detection, which encompasses the five aforementioned problems, i.e., AD, ND, OSR, OOD Detection, and OD, and conducts a thorough review of each of the five areas by summarizing their recent technical developments.
On the Importance of Gradients for Detecting Distributional Shifts in the Wild
TLDR
GradNorm is presented, a simple and effective approach for detecting OOD inputs by utilizing information extracted from the gradient space, which employs the vector norm of gradients, backpropagated from the KL divergence between the softmax output and a uniform probability distribution.
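For a linear last layer, the gradient of KL(uniform ∥ softmax) with respect to the weights has a closed form, so the idea summarized above can be sketched without autodiff. The function `gradnorm_score` and its single-sample interface are illustrative assumptions.

```python
import numpy as np

def gradnorm_score(feature, weights, bias):
    """Single-sample sketch: L1 norm of the gradient of
    KL(uniform || softmax) w.r.t. the last-layer weights.
    For a linear layer this gradient is outer(feature, p - u)."""
    logits = feature @ weights + bias
    z = logits - logits.max()
    p = np.exp(z) / np.exp(z).sum()          # softmax output
    u = np.full_like(p, 1.0 / p.size)        # uniform distribution
    grad = np.outer(feature, p - u)          # closed-form last-layer gradient
    return np.abs(grad).sum()                # larger => more ID-like
```

In-distribution inputs yield peaked softmax outputs far from uniform, hence larger gradient norms than OOD inputs.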
Unknown-Aware Object Detection: Learning What You Don't Know from Videos in the Wild
TLDR
A new unknown-aware object detection framework through Spatial-Temporal Unknown Distillation (STUD), which distills unknown objects from videos in the wild and meaningfully regularizes the model’s decision boundary.
ViM: Out-Of-Distribution with Virtual-logit Matching
TLDR
A novel OOD scoring method named Virtual-logit Matching (ViM), which combines the class-agnostic score from feature space and the In-Distribution (ID) class-dependent logits, which is 4% ahead of the best baseline.
The Familiarity Hypothesis: Explaining the Behavior of Deep Open Set Methods
TLDR
The Familiarity Hypothesis is proposed: methods based on the computed logits of visual object classifiers achieve state-of-the-art performance because they detect the absence of familiar learned features rather than the presence of novelty.
Computer Aided Diagnosis and Out-of-Distribution Detection in Glaucoma Screening Using Color Fundus Photography
TLDR
This report introduces an inference-time out-of-distribution (OOD) detection method to identify ungradable images and employs convolutional neural networks to classify input images as "referable glaucoma" or "no referable glaucoma".
RODD: A Self-Supervised Approach for Robust Out-of-Distribution Detection
TLDR
The proposed method, referred to as RODD, outperforms SOTA detection performance on an extensive suite of benchmark OOD detection datasets, and it is empirically shown that a model pre-trained with self-supervised contrastive learning yields a better model for uni-dimensional feature learning in the latent space.

References

SHOWING 1-10 OF 65 REFERENCES
Generalized ODIN: Detecting Out-of-Distribution Image Without Learning From Out-of-Distribution Data
TLDR
This work builds on the popular ODIN method, proposing two strategies that free it from the need to tune on OoD data while improving its OoD detection performance: a decomposed confidence score and a modified input pre-processing method.
Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks
TLDR
The proposed ODIN method is based on the observation that temperature scaling and small input perturbations can separate the softmax score distributions of in- and out-of-distribution images, enabling more effective detection; it consistently outperforms the baseline approach by a large margin.
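The temperature-scaling half of ODIN is easy to sketch; the input-perturbation half requires gradients with respect to the input and is omitted here. The function name and default temperature are illustrative assumptions.

```python
import numpy as np

def odin_score(logits, temperature=1000.0):
    """Temperature-scaled max-softmax score, the scoring half of ODIN.
    (The full method also perturbs the input along the gradient sign,
    which requires autodiff and is omitted from this sketch.)"""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)    # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return p.max(axis=-1)                    # higher => more ID-like
```

A large temperature flattens the softmax, which empirically widens the gap between in- and out-of-distribution score distributions.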
Likelihood Ratios for Out-of-Distribution Detection
TLDR
This work investigates deep generative model based approaches for OOD detection and observes that the likelihood score is heavily affected by population-level background statistics; it proposes a likelihood ratio method for deep generative models which effectively corrects for these confounding background statistics.
Why Normalizing Flows Fail to Detect Out-of-Distribution Data
TLDR
This work demonstrates that flows learn local pixel correlations and generic image-to-latent-space transformations that are not specific to the target image dataset, and shows that modifying the architecture of the flow coupling layers biases the flow towards learning the semantic structure of the target data, improving OOD detection.
MOS: Towards Scaling Out-of-distribution Detection for Large Semantic Space
  • Rui Huang, Yixuan Li
  • Computer Science
    2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2021
TLDR
This paper proposes a group-based OOD detection framework, along with a novel OOD scoring function termed MOS, to decompose the large semantic space into smaller groups of similar concepts, which simplifies the decision boundaries between in- vs. out-of-distribution data.
Generalized Out-of-Distribution Detection: A Survey
TLDR
This survey presents a generic framework called generalized OOD detection, which encompasses the five aforementioned problems, i.e., AD, ND, OSR, OOD Detection, and OD, and conducts a thorough review of each of the five areas by summarizing their recent technical developments.
On the Importance of Gradients for Detecting Distributional Shifts in the Wild
TLDR
GradNorm is presented, a simple and effective approach for detecting OOD inputs by utilizing information extracted from the gradient space, which employs the vector norm of gradients, backpropagated from the KL divergence between the softmax output and a uniform probability distribution.
A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks
TLDR
A simple baseline that utilizes probabilities from softmax distributions is presented; its effectiveness is demonstrated across computer vision, natural language processing, and automatic speech recognition tasks, and it is shown that the baseline can sometimes be surpassed.
Deep neural networks are easily fooled: High confidence predictions for unrecognizable images
TLDR
This work takes convolutional neural networks trained to perform well on the ImageNet or MNIST datasets and uses evolutionary algorithms or gradient ascent to produce unrecognizable "fooling images" that the DNNs label with high confidence as belonging to a dataset class, raising questions about the generality of DNN computer vision.
LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop
TLDR
This work proposes to amplify human effort through a partially automated labeling scheme, leveraging deep learning with humans in the loop, and constructs a new image dataset, LSUN, which contains around one million labeled images for each of 10 scene categories and 20 object categories.