Corpus ID: 235435763

Robust Out-of-Distribution Detection on Deep Probabilistic Generative Models

@article{Choi2021RobustOD,
  title={Robust Out-of-Distribution Detection on Deep Probabilistic Generative Models},
  author={Jaemoo Choi and Changyeon Yoon and Jeongwoo Bae and Myung-joo Kang},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.07903}
}
Out-of-distribution (OOD) detection is an important task for ensuring the reliability and safety of machine learning systems. Deep probabilistic generative models facilitate OOD detection by estimating the likelihood of a data sample. However, such models frequently assign suspiciously high likelihoods to certain outliers. Several recent works have addressed this issue by training a neural network with auxiliary outliers generated by perturbing the input data. In this paper, we…
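The perturbation-based auxiliary-outlier approach that the abstract refers to can be sketched in a few lines. This is a minimal, hypothetical example assuming image inputs in [0, 1]; the specific perturbation used by each prior work varies, and Gaussian noise is only one illustrative choice:

```python
import torch

def auxiliary_outliers(x, noise_std=0.3):
    """Generate auxiliary outliers by perturbing in-distribution inputs,
    as in the prior works the abstract mentions. Gaussian noise is one
    illustrative choice; the actual perturbation is method-specific."""
    return torch.clamp(x + noise_std * torch.randn_like(x), 0.0, 1.0)
```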

Sum-Product-Attention Networks: Leveraging Self-Attention in Energy-Based Probabilistic Circuits

Sum-Product-Attention Networks (SPAN), a novel energy-based generative model that integrates probabilistic circuits with the self-attention mechanism of Transformers, is introduced to demonstrate the modeling capability of EBMs.

Model-agnostic out-of-distribution detection using combined statistical tests

These techniques, based on classical statistical tests, are model-agnostic in the sense that they can be applied to any differentiable generative model, and they are competitive with model-specific out-of-distribution detection algorithms without any assumptions about the out-distribution.
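A central ingredient of such combined tests is merging p-values from several model-agnostic statistics (e.g., a typicality test and a score/gradient test). A minimal sketch using Fisher's method, assuming scipy is available; whether this is exactly the combination rule used in the paper is not confirmed by the summary above:

```python
import numpy as np
from scipy import stats

def fisher_combined_pvalue(pvals):
    """Fisher's method: under H0 (in-distribution), -2 * sum(log p_i)
    follows a chi-squared distribution with 2k degrees of freedom."""
    pvals = np.asarray(pvals, dtype=float)
    statistic = -2.0 * np.sum(np.log(pvals))
    return stats.chi2.sf(statistic, df=2 * len(pvals))
```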

References


Detecting Out-of-Distribution Inputs to Deep Generative Models Using Typicality

This work proposes a statistically principled, easy-to-implement test using the empirical distribution of model likelihoods to determine whether or not inputs reside in the typical set, only requiring that the likelihood can be computed or closely approximated.
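The single-sample version of this test is easy to implement once log-likelihoods are available. A minimal sketch with illustrative function names, calibrating the threshold on held-out in-distribution data (the paper's test also covers batches of inputs):

```python
import numpy as np

def fit_typicality_test(train_loglik, val_loglik, quantile=0.99):
    """Estimate the typical-set center H_hat = E[-log p(x)] from training
    log-likelihoods and choose epsilon as a high quantile of the deviations
    observed on held-out in-distribution data."""
    entropy_hat = -np.mean(train_loglik)
    epsilon = np.quantile(np.abs(-val_loglik - entropy_hat), quantile)
    return entropy_hat, epsilon

def is_ood(test_loglik, entropy_hat, epsilon):
    """Flag inputs whose negative log-likelihood lies outside the
    epsilon-ball around the estimated entropy, i.e., outside the typical set."""
    return np.abs(-test_loglik - entropy_hat) > epsilon
```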

Likelihood Ratios for Out-of-Distribution Detection

This work investigates deep generative model based approaches for OOD detection, observes that the likelihood score is heavily affected by population-level background statistics, and proposes a likelihood-ratio method for deep generative models that effectively corrects for these confounding background statistics.
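The resulting score is a difference of log-likelihoods between the full model and a background model trained on perturbed inputs. A sketch assuming both models expose a `log_prob` method:

```python
import torch

def likelihood_ratio_score(x, model, background_model):
    """LLR(x) = log p_theta(x) - log p_theta0(x). The background model
    theta0 is trained on perturbed inputs so that it captures population-
    level background statistics; higher scores suggest in-distribution."""
    with torch.no_grad():
        return model.log_prob(x) - background_model.log_prob(x)
```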

Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples

A novel training method for classifiers is proposed so that confidence-based out-of-distribution detection algorithms can work better, and its effectiveness is demonstrated using deep convolutional neural networks on various popular image datasets.

Input complexity and out-of-distribution detection with likelihood-based generative models

This paper uses an estimate of input complexity to derive an efficient and parameter-free OOD score, which can be seen as a likelihood-ratio, akin to Bayesian model comparison, and finds such score to perform comparably to, or even better than, existing OOD detection approaches under a wide range of data sets, models, model sizes, and complexity estimates.
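Concretely, the score subtracts a generic compressor's code length from the model's negative log-likelihood in bits. A sketch using zlib as the compressor (the paper also evaluates formats such as PNG, JPEG2000, and FLIF as complexity estimates):

```python
import zlib
import numpy as np

def complexity_bits(image_uint8):
    """Proxy for input complexity L(x): bits used by a generic lossless
    compressor on the raw pixel buffer."""
    raw = np.ascontiguousarray(image_uint8, dtype=np.uint8).tobytes()
    return 8 * len(zlib.compress(raw, 9))

def input_complexity_score(nll_bits, image_uint8):
    """S(x) = -log2 p(x) - L(x); larger values indicate OOD. Inputs the
    model finds unlikely *relative to their complexity* are flagged."""
    return nll_bits - complexity_bits(image_uint8)
```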

Learning Confidence for Out-of-Distribution Detection in Neural Networks

This work proposes a method of learning confidence estimates for neural networks that is simple to implement and produces intuitively interpretable outputs, and addresses the problem of calibrating out-of-distribution detectors.
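A sketch of the training objective: the network predicts a confidence c alongside its class probabilities, the prediction is blended toward the one-hot target in proportion to 1 - c, and low confidence is penalized so the network does not always ask for hints. Names and the penalty weight are illustrative:

```python
import torch
import torch.nn.functional as F

def confidence_loss(logits, conf_logit, target, lam=0.1):
    """Blend predictions toward the ground truth by the predicted
    confidence c in (0, 1); the -log c penalty stops the network from
    always requesting hints. At test time, low c signals OOD."""
    probs = F.softmax(logits, dim=1)
    c = torch.sigmoid(conf_logit).view(-1, 1)
    onehot = F.one_hot(target, num_classes=probs.size(1)).float()
    mixed = c * probs + (1.0 - c) * onehot
    nll = F.nll_loss(torch.log(mixed + 1e-12), target)
    return nll + lam * (-torch.log(c + 1e-12)).mean()
```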

A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks

This paper proposes a simple yet effective method for detecting any abnormal samples, which is applicable to any pre-trained softmax neural classifier, and obtains the class conditional Gaussian distributions with respect to (low- and upper-level) features of the deep models under Gaussian discriminant analysis.
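For a single feature layer, the detector reduces to fitting class means with a tied covariance and scoring by the closest Mahalanobis distance. A compact numpy sketch (the full method additionally ensembles multiple layers and applies input pre-processing):

```python
import numpy as np

def fit_class_gaussians(features, labels, num_classes):
    """Class-conditional Gaussian fit with a shared (tied) covariance,
    computed on features from a pre-trained softmax classifier."""
    means = np.stack([features[labels == c].mean(axis=0)
                      for c in range(num_classes)])
    centered = features - means[labels]
    cov = centered.T @ centered / len(features)
    precision = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
    return means, precision

def mahalanobis_confidence(f, means, precision):
    """Confidence score of a test feature f: negative distance to the
    closest class-conditional Gaussian; low values suggest abnormal inputs."""
    d = f - means  # (num_classes, dim)
    return -np.min(np.einsum('cd,de,ce->c', d, precision, d))
```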

Why Normalizing Flows Fail to Detect Out-of-Distribution Data

This work demonstrates that flows learn local pixel correlations and generic image-to-latent-space transformations which are not specific to the target image dataset, and shows that by modifying the architecture of flow coupling layers the authors can bias the flow towards learning the semantic structure of the target data, improving OOD detection.

Hierarchical VAEs Know What They Don't Know

This work develops a fast, scalable, and fully unsupervised likelihood-ratio score for OOD detection that requires data to be in-distribution across all feature levels, benchmarks the method on a vast set of data and model combinations, and achieves state-of-the-art results.

SSD: A Unified Framework for Self-Supervised Outlier Detection

SSD, an outlier detector based only on unlabeled training data, is proposed; it uses self-supervised representation learning followed by Mahalanobis distance based detection in the feature space and outperforms existing detectors based on unlabeled data by a large margin.
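SSD's scoring stage can reuse the Mahalanobis machinery sketched above, with k-means clusters of self-supervised features standing in for class labels. A hypothetical sketch assuming scikit-learn and the two helper functions defined in the earlier Mahalanobis example:

```python
from sklearn.cluster import KMeans
import numpy as np

def ssd_scores(train_features, test_features, k=5):
    """Cluster unlabeled (e.g., contrastively learned) features, then score
    test points by Mahalanobis distance to the nearest cluster. Uses
    fit_class_gaussians / mahalanobis_confidence from the earlier sketch."""
    labels = KMeans(n_clusters=k, n_init=10).fit(train_features).labels_
    means, precision = fit_class_gaussians(train_features, labels, k)
    return np.array([mahalanobis_confidence(f, means, precision)
                     for f in test_features])
```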