Corpus ID: 237532775

Towards Out-of-Distribution Detection with Divergence Guarantee in Deep Generative Models

@inproceedings{Zhang2020TowardsOD,
  title={Towards Out-of-Distribution Detection with Divergence Guarantee in Deep Generative Models},
  author={Yufeng Zhang and Wanwei Liu and Zhenbang Chen and Ji Wang and Zhiming Liu and Kenli Li and Hongmei Wei},
  year={2020}
}
  • Published 9 February 2020
  • Computer Science, Mathematics
Recent research has revealed that deep generative models, including flow-based models and variational autoencoders (VAEs), may assign higher likelihoods to out-of-distribution (OOD) data than to in-distribution (ID) data. At the same time, OOD data are essentially never produced when sampling from these models. This counterintuitive phenomenon has not been satisfactorily explained. In this paper, we prove theorems to investigate the divergences in flow-based models and give two explanations for the above phenomenon from the perspectives of divergence and…
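For background (a standard identity stated as context, not one of the paper's own theorems): a flow-based model with invertible map f and base density p_Z assigns likelihoods via the change-of-variables formula, and the KL divergence between the data distribution and the model is preserved under the bijection:

```latex
% Change-of-variables likelihood of an invertible flow f with base
% density p_Z (standard background, not one of the paper's theorems):
\log p_X(x) = \log p_Z\bigl(f(x)\bigr)
            + \log\left|\det \frac{\partial f(x)}{\partial x}\right|
% Since f is a bijection, KL divergence is invariant between data
% space and latent space:
\mathrm{KL}\bigl(p^*_X \,\big\|\, p_X\bigr)
  = \mathrm{KL}\bigl(f_{\#}\,p^*_X \,\big\|\, p_Z\bigr)
```

Here f_# p*_X denotes the pushforward of the true data distribution p*_X through the flow; this divergence relationship in latent space is the kind of quantity the abstract refers to.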

References

Showing 1-10 of 90 references
Detecting Out-of-Distribution Inputs to Deep Generative Models Using Typicality
Recent work has shown that deep generative models can assign higher likelihood to out-of-distribution data sets than to their training data [37, 9]. We posit that this phenomenon is caused by a…
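As a hedged illustration of the typicality idea named in the title (a minimal sketch; the function names, the batch-level interface, and the calibration of epsilon are assumptions, not the paper's exact procedure):

```python
import numpy as np

def typicality_test(log_prob, batch, train_entropy_estimate, epsilon):
    """Batch-level typicality check (sketch of the idea in the paper
    above; names and interface are illustrative).

    log_prob: callable mapping an input to the model's log-likelihood.
    batch: an iterable of M inputs tested jointly.
    train_entropy_estimate: -E[log p(x)] estimated on held-out training data.
    epsilon: threshold, e.g. calibrated on a validation set.
    Returns True if the batch is flagged as out-of-distribution.
    """
    avg_neg_loglik = -np.mean([log_prob(x) for x in batch])
    # A batch from the typical set has average negative log-likelihood
    # close to the model's entropy; a large gap in either direction
    # flags the batch as OOD.
    return abs(avg_neg_loglik - train_entropy_estimate) > epsilon
```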
Why Normalizing Flows Fail to Detect Out-of-Distribution Data
TLDR
This work demonstrates that flows learn local pixel correlations and generic image-to-latent-space transformations which are not specific to the target image dataset, and shows that by modifying the architecture of the flow coupling layers, the flow can be biased towards learning the semantic structure of the target data, improving OOD detection.
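For readers unfamiliar with coupling layers, the generic RealNVP-style affine coupling layer that this analysis concerns looks roughly as follows (a background sketch; scale_net and shift_net stand in for arbitrary conditioning networks and are not the paper's proposed modification):

```python
import numpy as np

def affine_coupling_forward(x, scale_net, shift_net):
    """Generic affine coupling layer (background sketch; the paper above
    studies how the conditioning in this layer biases what the flow
    learns -- scale_net/shift_net are placeholder callables)."""
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    # x1 passes through unchanged and conditions the transform of x2.
    log_s = scale_net(x1)
    t = shift_net(x1)
    y2 = x2 * np.exp(log_s) + t
    y = np.concatenate([x1, y2], axis=-1)
    # The log|det Jacobian| of this triangular transform is sum(log_s),
    # which is what makes the likelihood cheap to evaluate.
    log_det_jacobian = np.sum(log_s, axis=-1)
    return y, log_det_jacobian
```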
Likelihood Ratios for Out-of-Distribution Detection
TLDR
This work investigates deep generative model-based approaches for OOD detection, observes that the likelihood score is heavily affected by population-level background statistics, and proposes a likelihood-ratio method for deep generative models which effectively corrects for these confounding background statistics.
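The core score is simple enough to state in a few lines (a sketch under the assumption that two trained models exposing log-likelihoods are available; training the background model on randomly perturbed inputs, as the paper does, is not shown):

```python
def likelihood_ratio_score(log_prob_model, log_prob_background, x):
    """Likelihood-ratio OOD score in the spirit of the paper above.
    log_prob_model is trained on in-distribution data; the background
    model is meant to capture only population-level background
    statistics. Higher scores indicate in-distribution inputs.
    """
    # Subtracting the background log-likelihood cancels the shared
    # background statistics that confound the raw likelihood.
    return log_prob_model(x) - log_prob_background(x)
```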
Are generative deep models for novelty detection truly better?
TLDR
A comparison of selected deep generative models and classical anomaly detection methods on an extensive set of non-image benchmark datasets concludes that the performance of the generative models is largely determined by how their hyperparameters are selected.
Practical and Consistent Estimation of f-Divergences
TLDR
This work proposes and studies an estimator that can be easily implemented, works well in high dimensions, and enjoys faster rates of convergence, and discusses its direct implications for total correlation, entropy, and mutual information estimation.
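For reference, the quantity being estimated is the textbook f-divergence (definition only; the paper's estimator itself is not reproduced here):

```latex
% f-divergence between P and Q for a convex f with f(1) = 0
% (background definition; the estimator studied in the paper is not shown):
D_f(P \,\|\, Q) = \int q(x)\, f\!\left(\frac{p(x)}{q(x)}\right) dx
% f(t) = t \log t recovers KL(P || Q); f(t) = -\log t recovers KL(Q || P).
```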
A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks
TLDR
This paper proposes a simple yet effective method for detecting any abnormal samples, which is applicable to any pre-trained softmax neural classifier, and obtains class-conditional Gaussian distributions over (low- and upper-level) features of the deep models under Gaussian discriminant analysis.
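A minimal sketch of the Gaussian discriminant analysis step and the resulting confidence score (assuming features have already been extracted from a pre-trained network; the paper's multi-layer feature ensembling and input pre-processing are omitted):

```python
import numpy as np

def fit_gaussian_stats(features, labels, num_classes):
    """Fit class-conditional Gaussians with a shared ("tied") covariance,
    as in Gaussian discriminant analysis. features: (N, D), labels: (N,)."""
    means = [features[labels == c].mean(axis=0) for c in range(num_classes)]
    centered = np.concatenate(
        [features[labels == c] - means[c] for c in range(num_classes)]
    )
    cov = centered.T @ centered / len(features)
    return np.stack(means), np.linalg.inv(cov)

def mahalanobis_confidence(f, means, precision):
    """Confidence of a test feature f (shape (D,)): negative Mahalanobis
    distance to the closest class mean; higher = more in-distribution."""
    diffs = means - f
    dists = np.einsum("ci,ij,cj->c", diffs, precision, diffs)
    return -dists.min()
```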
Input complexity and out-of-distribution detection with likelihood-based generative models
TLDR
This paper uses an estimate of input complexity to derive an efficient and parameter-free OOD score, which can be seen as a likelihood ratio, akin to Bayesian model comparison, and finds this score to perform comparably to, or even better than, existing OOD detection approaches across a wide range of data sets, models, model sizes, and complexity estimates.
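The score itself reduces to a two-term difference (a sketch using zlib as the compressor; the paper uses image compressors such as PNG, and passing the model's negative log-likelihood in as an argument is an interface assumption):

```python
import zlib

def complexity_ood_score(nll_bits, x_bytes):
    """S(x) = -log2 p(x) - L(x): the model's negative log-likelihood in
    bits minus the compressed length of x in bits.

    S can be read as a log likelihood-ratio between a generic compressor
    and the model; higher S suggests x is out-of-distribution.
    """
    # L(x): compressed length in bits, a crude upper bound on complexity.
    complexity_bits = 8 * len(zlib.compress(x_bytes, level=9))
    return nll_bits - complexity_bits
```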
Flow++: Improving Flow-Based Generative Models with Variational Dequantization and Architecture Design
TLDR
Flow++ is proposed, a new flow-based model that is the state-of-the-art non-autoregressive model for unconditional density estimation on standard image benchmarks and has begun to close the significant performance gap that previously existed between autoregressive and flow-based models.
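Variational dequantization, one of the ingredients named in the title, replaces uniform dequantization noise with a learned noise distribution; the resulting lower bound on the discrete-data likelihood is (standard form, stated as background rather than copied from the paper):

```latex
% For discrete data x and learned noise q(u | x) supported on [0,1)^D:
\log P(x) \;\ge\; \mathbb{E}_{u \sim q(\cdot \mid x)}
  \bigl[ \log p_{\text{model}}(x + u) - \log q(u \mid x) \bigr]
% Uniform dequantization is the special case q(u | x) = \mathrm{Uniform}[0,1)^D.
```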
Conditional Generative Models are not Robust
TLDR
The theoretical result reveals that it is impossible to guarantee detectability of adversarial examples even for near-optimal generative classifiers, and the empirical results indicate that likelihood may be fundamentally at odds with robust classification on challenging problems.
WAIC, but Why? Generative Ensembles for Robust Anomaly Detection
TLDR
Generative Ensembles are proposed, which robustify density-based OOD detection by estimating the epistemic uncertainty of the likelihood model, and which perform surprisingly well in practice.
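The WAIC score over a generative ensemble is essentially a one-liner (a sketch; how the ensemble is obtained, e.g. independently trained models or different seeds, is left out):

```python
import numpy as np

def waic_score(log_probs):
    """WAIC-style OOD score over an ensemble, as in the paper above:
    mean log-likelihood across ensemble members minus the variance
    across members (the variance term penalizes points on which the
    ensemble disagrees, i.e. high epistemic uncertainty).

    log_probs: array of shape (num_models,) with each model's log p(x).
    Higher scores indicate in-distribution inputs.
    """
    log_probs = np.asarray(log_probs)
    return log_probs.mean() - log_probs.var()
```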