Robust outlier detection by de-biasing VAE likelihoods

Kushal Chauhan, Pradeep Shenoy, Manish Gupta, D. Sridharan. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Deep networks often make confident yet incorrect predictions when tested on outlier data far removed from their training distribution. Likelihoods computed by deep generative models (DGMs) are a candidate metric for outlier detection with unlabeled data. Yet previous studies have shown that DGM likelihoods are unreliable and can be easily biased by simple transformations of the input data. Here, we examine outlier detection with variational autoencoders (VAEs), among the simplest of…

Shaken, and Stirred: Long-Range Dependencies Enable Robust Outlier Detection with PixelCNN++

It is shown that biases in PixelCNN++ likelihoods arise primarily from predictions based on local dependencies, and two families of bijective transformations are proposed that are computationally inexpensive and readily applied at evaluation time to achieve robust outlier detection on images with deep generative models.

Likelihood Regret: An Out-of-Distribution Detection Score For Variational Auto-encoder

This paper proposes Likelihood Regret, an efficient OOD score for VAEs, and benchmarks it against existing approaches; empirical results suggest that it achieves the best overall OOD detection performance among OOD methods applied to VAEs.

Likelihood Ratios for Out-of-Distribution Detection

This work investigates deep generative model-based approaches to OOD detection, observes that the likelihood score is heavily affected by population-level background statistics, and proposes a likelihood-ratio method for deep generative models that effectively corrects for these confounding background statistics.
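The score itself is a simple difference of log-likelihoods between the full ("semantic") model and a background model trained on perturbed inputs. A minimal sketch, assuming per-input log-likelihoods from both models are already available (function names are illustrative, and the two models must be trained separately):

```python
import numpy as np

def likelihood_ratio_score(loglik_semantic, loglik_background):
    """LLR(x) = log p_theta(x) - log p_theta0(x). The background model
    absorbs population-level statistics shared across datasets, leaving
    a semantics-focused score; higher values suggest in-distribution."""
    return np.asarray(loglik_semantic) - np.asarray(loglik_background)

def perturb_inputs(x, mu=0.1, rng=None):
    """Generate background-model training data: each entry of x (assumed
    scaled to [0, 1]) is independently replaced by a uniform random value
    with probability mu. This mirrors the paper's perturbation idea,
    though the exact scheme there operates on discrete pixel values."""
    rng = np.random.default_rng(rng)
    mask = rng.random(x.shape) < mu
    return np.where(mask, rng.random(x.shape), x)
```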

Learning Multiple Layers of Features from Tiny Images

It is shown how to train a multi-layer generative model that learns to extract meaningful features which resemble those found in the human visual cortex, using a novel parallelization algorithm to distribute the work among multiple machines connected on a network.

WAIC, but Why? Generative Ensembles for Robust Anomaly Detection

Generative Ensembles are proposed, which robustify density-based OoD detection by estimating the epistemic uncertainty of the likelihood model, and perform surprisingly well in practice.
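The WAIC score underlying Generative Ensembles penalizes inputs on which ensemble members disagree: WAIC(x) = E_θ[log p_θ(x)] − Var_θ[log p_θ(x)]. A minimal numpy sketch, assuming per-model log-likelihoods have already been computed:

```python
import numpy as np

def waic_score(ensemble_logliks):
    """WAIC per input from an ensemble of model log-likelihoods,
    shape [n_models, n_inputs]: mean across models minus variance
    across models. Disagreement (epistemic uncertainty) lowers the
    score, so lower values suggest OOD inputs."""
    ll = np.asarray(ensemble_logliks, dtype=float)
    return ll.mean(axis=0) - ll.var(axis=0)
```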

Do Deep Generative Models Know What They Don't Know?

The density learned by flow-based models, VAEs, and PixelCNNs cannot distinguish images of common objects such as dogs, trucks, and horses from those of house numbers, and such behavior persists even when the flows are restricted to constant-volume transformations.

Bayesian Autoencoders: Analysing and Fixing the Bernoulli likelihood for Out-of-Distribution Detection

This paper shows that the Bernoulli likelihood can fail for dataset pairs such as FashionMNIST vs. MNIST, and proposes two fixes: computing the uncertainty of the likelihood estimate with a Bayesian version of the AE, and using alternative distributions to model the likelihood.

Why Normalizing Flows Fail to Detect Out-of-Distribution Data

This work demonstrates that flows learn local pixel correlations and generic image-to-latent-space transformations which are not specific to the target image dataset, and shows that by modifying the architecture of flow coupling layers the authors can bias the flow towards learning the semantic structure of the target data, improving OOD detection.

Detecting Out-of-Distribution Inputs to Deep Generative Models Using Typicality

This work proposes a statistically principled, easy-to-implement test using the empirical distribution of model likelihoods to determine whether or not inputs reside in the typical set, only requiring that the likelihood can be computed or closely approximated.
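One simple instantiation of such a typicality test compares a test batch's average log-likelihood against the model's empirical entropy estimate (mean training log-likelihood), with the decision threshold bootstrapped from held-out in-distribution data. A sketch under those assumptions (function names are illustrative, not the paper's code):

```python
import numpy as np

def typicality_score(batch_loglik, train_loglik):
    """Distance between the batch's mean log-likelihood and the mean
    training log-likelihood (an empirical entropy estimate). Large
    values flag batches outside the model's typical set."""
    return abs(np.mean(batch_loglik) - np.mean(train_loglik))

def is_ood(batch_loglik, train_loglik, val_loglik, batch_size,
           alpha=0.99, n_boot=1000, rng=None):
    """Bootstrap the score's null distribution from held-out
    in-distribution log-likelihoods, then flag the batch if its
    score exceeds the alpha-quantile threshold."""
    rng = np.random.default_rng(rng)
    stats = np.array([
        typicality_score(
            rng.choice(val_loglik, size=batch_size, replace=True),
            train_loglik)
        for _ in range(n_boot)
    ])
    threshold = np.quantile(stats, alpha)
    return typicality_score(batch_loglik, train_loglik) > threshold
```

Note that the test operates on batches: a single likelihood value cannot distinguish "atypically high" from "typical", which is exactly the failure mode of thresholding raw likelihoods.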

Input complexity and out-of-distribution detection with likelihood-based generative models

This paper uses an estimate of input complexity to derive an efficient, parameter-free OOD score, which can be interpreted as a likelihood ratio akin to Bayesian model comparison, and finds that this score performs comparably to, or better than, existing OOD detection approaches across a wide range of datasets, models, model sizes, and complexity estimates.

The continuous Bernoulli: fixing a pervasive error in variational autoencoders

A new [0,1]-supported, single-parameter distribution is introduced: the continuous Bernoulli, which patches this pervasive error in VAEs and suggests a broader class of performant VAEs.
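The continuous Bernoulli keeps the Bernoulli's functional form but adds the normalizing constant that the standard VAE reconstruction loss silently drops: p(x | λ) = C(λ) λ^x (1−λ)^(1−x) on x ∈ [0, 1], with C(λ) = 2 tanh⁻¹(1−2λ)/(1−2λ) for λ ≠ 1/2 and C(1/2) = 2. A minimal numpy sketch of the log-density:

```python
import numpy as np

def log_norm_const(lam, eps=1e-6):
    """log C(lambda) for the continuous Bernoulli:
    C = 2*arctanh(1-2*lam)/(1-2*lam), with the removable singularity
    at lam = 0.5 (where C = 2) handled by an explicit branch."""
    lam = np.asarray(lam, dtype=float)
    near_half = np.abs(lam - 0.5) < eps
    safe = np.where(near_half, 0.4, lam)  # dummy value to avoid 0/0
    c = 2.0 * np.arctanh(1.0 - 2.0 * safe) / (1.0 - 2.0 * safe)
    return np.where(near_half, np.log(2.0), np.log(c))

def cb_log_pdf(x, lam):
    """log p(x | lam) = log C(lam) + x*log(lam) + (1-x)*log(1-lam),
    i.e. the usual Bernoulli cross-entropy term plus the correction
    that makes it a proper density on [0, 1]."""
    return log_norm_const(lam) + x * np.log(lam) + (1 - x) * np.log1p(-lam)
```

At λ = 1/2 the density is flat (identically 1), so the log-density is 0 for every x, which makes for a quick sanity check.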