Exposing Outlier Exposure: What Can Be Learned From Few, One, and Zero Outlier Images

@article{Liznerski2022ExposingOE,
  title={Exposing Outlier Exposure: What Can Be Learned From Few, One, and Zero Outlier Images},
  author={Philipp Liznerski and Lukas Ruff and Robert A. Vandermeulen and Billy Joe Franks and Klaus-Robert M{\"u}ller and Marius Kloft},
  journal={ArXiv},
  year={2022},
  volume={abs/2205.11474}
}
Traditionally, anomaly detection (AD) is treated as an unsupervised problem that utilizes only normal samples, owing to the intractability of characterizing everything that looks unlike the normal data. However, it has recently been found that unsupervised image anomaly detection can be drastically improved by utilizing huge corpora of random images to represent anomalousness, a technique known as Outlier Exposure (OE). In this paper we show that specialized AD learning methods seem…
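
To make the OE idea concrete, here is a minimal sketch of outlier exposure framed as binary classification: normal training images receive label 0 and images drawn from a large auxiliary corpus receive label 1. The tiny convolutional backbone and all hyperparameters below are illustrative placeholders, not the authors' configuration.

```python
import torch
import torch.nn as nn

# Hypothetical backbone; any image network with a single logit output works.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def training_step(normal_batch, outlier_batch):
    """One step of outlier exposure as binary classification:
    normal images get label 0, auxiliary random images get label 1."""
    x = torch.cat([normal_batch, outlier_batch])
    y = torch.cat([torch.zeros(len(normal_batch)), torch.ones(len(outlier_batch))])
    loss = bce(model(x).squeeze(1), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# At test time the sigmoid of the logit serves as the anomaly score.
score = lambda x: torch.sigmoid(model(x).squeeze(1))
```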

References

Showing 1-10 of 63 references
Deep Anomaly Detection with Outlier Exposure
TLDR
In extensive experiments on natural language processing and on small- and large-scale vision tasks, it is found that Outlier Exposure significantly improves detection performance; the paper also shows that cutting-edge generative models trained on CIFAR-10 may assign higher likelihoods to SVHN images than to CIFAR-10 images, and that OE mitigates this issue.
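
A sketch of the OE objective under these assumptions: a standard K-way classifier keeps its usual cross-entropy loss on in-distribution data, plus a term that pulls its posterior on auxiliary outliers toward the uniform distribution (the weight `lam` is a typical choice, not a universal constant). At test time, a low maximum softmax probability then flags an input as out-of-distribution.

```python
import torch.nn.functional as F

def oe_loss(logits_in, targets_in, logits_out, lam=0.5):
    """Outlier Exposure objective: cross-entropy on labeled in-distribution
    data plus a term that is minimized when the softmax over the auxiliary
    outlier batch is uniform."""
    ce_in = F.cross_entropy(logits_in, targets_in)
    # Cross-entropy between the uniform distribution and the model's softmax;
    # averaging -log_softmax over batch and classes computes exactly that.
    ce_uniform = -F.log_softmax(logits_out, dim=1).mean()
    return ce_in + lam * ce_uniform
```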
Rethinking Assumptions in Deep Anomaly Detection
TLDR
It is found that the multiscale structure of image data makes example anomalies exceptionally informative, and that classifiers trained to discriminate between normal samples and just a few random natural images can outperform the current state of the art in deep AD.
Anomaly Detection With Multiple-Hypotheses Predictions
TLDR
The multiple-hypotheses-based anomaly detection framework allows the reliable identification of out-of-distribution samples; the hypotheses are critiqued by a discriminator, which prevents artificial data modes not supported by the data and enforces diversity across hypotheses.
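
As a sketch of the hypothesis branch only (the discriminator that critiques the hypotheses is omitted), a winner-takes-all loss lets K prediction heads specialize on different data modes; the tensor shapes below are assumptions for illustration.

```python
import torch

def wta_loss(hypotheses, target):
    """Winner-takes-all reconstruction loss for multiple-hypotheses
    prediction: only the hypothesis closest to the target receives the
    gradient, so the heads specialize on different modes of the data.

    hypotheses: (B, K, D) - K reconstructions per sample
    target:     (B, D)
    """
    errors = ((hypotheses - target.unsqueeze(1)) ** 2).mean(dim=2)  # (B, K)
    best = errors.min(dim=1).values  # error of the winning head per sample
    return best.mean()
```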
GANomaly: Semi-Supervised Anomaly Detection via Adversarial Training
TLDR
This work introduces a novel anomaly detection model using a conditional generative adversarial network that jointly learns generation of the high-dimensional image space and inference of the latent space, and demonstrates the model's efficacy and superiority over previous state-of-the-art approaches.
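
A sketch of the resulting anomaly score, with toy fully connected stand-ins (illustrative sizes, not the paper's architecture) for GANomaly's convolutional encoder-decoder-encoder pipeline:

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for GANomaly's encode -> decode -> re-encode pipeline.
latent_dim = 100
encoder1 = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 3 * 32 * 32), nn.Unflatten(1, (3, 32, 32)))
encoder2 = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, latent_dim))

def anomaly_score(x):
    """GANomaly-style score: distance between the latent code of the input
    and the latent code of its reconstruction. Anomalies, which the
    generator never learned to reconstruct, yield large distances."""
    z = encoder1(x)
    x_hat = decoder(z)
    z_hat = encoder2(x_hat)
    return (z - z_hat).abs().mean(dim=1)  # one score per image
```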
Deep Semi-Supervised Anomaly Detection
TLDR
This work presents Deep SAD, an end-to-end deep methodology for general semi-supervised anomaly detection, and introduces an information-theoretic framework for deep anomaly detection based on the idea that the entropy of the latent distribution for normal data should be lower than the entropy of the anomalous distribution, which can serve as a theoretical interpretation of the method.
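
The loss itself is compact. Below is a sketch assuming latent codes `z`, a fixed center `c`, and binary labels (0 = unlabeled or normal, 1 = labeled anomaly); the paper's full formulation also handles labeled normal samples, which this sketch omits.

```python
import torch

def deep_sad_loss(z, y, c, eta=1.0, eps=1e-6):
    """Deep SAD objective on a batch of latent codes z (B, D).
    Samples with y == 0 are pulled toward the center c; for labeled
    anomalies (y == 1) the squared distance is penalized inversely,
    pushing them away from c."""
    dist2 = ((z - c) ** 2).sum(dim=1)
    normal_term = dist2[y == 0]
    anomaly_term = eta / (dist2[y == 1] + eps)
    return torch.cat([normal_term, anomaly_term]).mean()
```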
Image Anomaly Detection with Generative Adversarial Networks
TLDR
This work proposes a novel approach to anomaly detection using generative adversarial networks, based on searching for a good representation of a given sample in the latent space of the generator; if no such representation is found, the sample is deemed anomalous.
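
A sketch of that search as plain gradient descent over the latent code; the paper's full procedure includes refinements omitted here, and the step count and learning rate are placeholders.

```python
import torch

def latent_search_score(x, generator, latent_dim=100, steps=500, lr=0.05):
    """Search the generator's latent space for a code that reproduces x.
    If no latent code yields a faithful reconstruction, x is deemed
    anomalous; the residual reconstruction error is the anomaly score."""
    z = torch.randn(x.shape[0], latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        loss = ((generator(z) - x) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return ((generator(z) - x) ** 2).flatten(1).mean(dim=1)  # per sample
```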
Toward Supervised Anomaly Detection
TLDR
It is argued that semi-supervised anomaly detection should be grounded in the unsupervised learning paradigm; a novel algorithm meeting this requirement is devised, and the resulting optimization problem is shown to have a convex equivalent under relatively mild assumptions.
Latent Space Autoregression for Novelty Detection
TLDR
This work proposes a general unsupervised framework in which a deep autoencoder is equipped with a parametric density estimator that learns, through an autoregressive procedure, the probability distribution underlying the latent representations, and shows that a maximum likelihood objective effectively acts as a regularizer for the task at hand.
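
As a toy instance of latent autoregression (not the paper's estimator), the sketch below models each latent dimension with a conditional Gaussian whose parameters depend only on the preceding dimensions via a strictly lower-triangular mask; trained jointly with an autoencoder, maximizing `log_prob` on normal data plays the regularizing role described above.

```python
import math

import torch
import torch.nn as nn
import torch.nn.functional as F

class AutoregressiveLatentDensity(nn.Module):
    """Toy autoregressive density over a D-dimensional latent code: each
    conditional p(z_i | z_<i) is a Gaussian whose mean and log-variance
    are computed from the preceding dimensions through a strictly
    lower-triangular (autoregressive) linear map."""

    def __init__(self, d):
        super().__init__()
        self.mu = nn.Linear(d, d)
        self.logvar = nn.Linear(d, d)
        # Strictly lower-triangular mask: output i only sees inputs < i.
        self.register_buffer("mask", torch.tril(torch.ones(d, d), diagonal=-1))

    def log_prob(self, z):
        mu = F.linear(z, self.mu.weight * self.mask, self.mu.bias)
        logvar = F.linear(z, self.logvar.weight * self.mask, self.logvar.bias)
        ll = -0.5 * (math.log(2 * math.pi) + logvar + (z - mu) ** 2 / logvar.exp())
        return ll.sum(dim=1)  # log p(z), one value per sample
```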
Deep Anomaly Detection Using Geometric Transformations
TLDR
The main idea behind the scheme is to train a multi-class model to discriminate between dozens of geometric transformations applied to all the given images; this training yields feature detectors that effectively identify anomalous images at test time based on the softmax activation statistics of the model when applied to transformed images.
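
At test time the scheme reduces to checking how confidently the trained classifier recognizes each transformation. The sketch below uses only four rotations for brevity, whereas the paper employs a much larger transformation set and a refined score based on the softmax statistics.

```python
import torch

def geometric_score(model, x):
    """Score images with a model trained to classify which of K geometric
    transformations was applied. On normal images the model recognizes each
    transformation confidently, so the summed softmax mass on the correct
    classes is high; anomalies yield low mass."""
    transforms = [
        lambda t: t,
        lambda t: torch.rot90(t, 1, dims=(2, 3)),
        lambda t: torch.rot90(t, 2, dims=(2, 3)),
        lambda t: torch.rot90(t, 3, dims=(2, 3)),
    ]
    score = torch.zeros(x.shape[0])
    with torch.no_grad():
        for k, tf in enumerate(transforms):
            probs = model(tf(x)).softmax(dim=1)  # (B, K) over the K transforms
            score += probs[:, k]  # probability assigned to the true transform
    return score  # higher = more normal; negate for an anomaly score
```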
Explainable Deep One-Class Classification
TLDR
An explainable deep one-class classification method, Fully Convolutional Data Description (FCDD), in which the mapped samples themselves serve as an explanation heatmap; it yields competitive detection performance and provides reasonable explanations on common anomaly detection benchmarks with CIFAR-10 and ImageNet.
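
Sketching the FCDD objective from this description: a pseudo-Huber transform of the fully convolutional output map serves as the explanation heatmap, and its spatial mean is driven down for normal samples and up for (outlier-exposure) anomalies.

```python
import torch

def fcdd_loss(fcn_out, y):
    """FCDD objective on the output map of a fully convolutional net.
    fcn_out: (B, 1, U, V) spatial map; y: (B,) floats, 1.0 for anomalies.
    The pseudo-Huber transform of the map is the explanation heatmap; its
    mean is minimized for normal samples and, via the log term, maximized
    for anomalies."""
    heatmap = torch.sqrt(fcn_out ** 2 + 1) - 1           # pseudo-Huber map
    m = heatmap.flatten(1).mean(dim=1)                   # mean score per sample
    normal = (1 - y) * m
    anomalous = -y * torch.log(-torch.expm1(-m) + 1e-9)  # -log(1 - exp(-m))
    return (normal + anomalous).mean()
```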