Corpus ID: 54558282

Deep Anomaly Detection with Outlier Exposure

@article{Hendrycks2019DeepAD,
  title={Deep Anomaly Detection with Outlier Exposure},
  author={Dan Hendrycks and Mantas Mazeika and Thomas G. Dietterich},
  journal={ArXiv},
  year={2019},
  volume={abs/1812.04606}
}
It is important to detect anomalous inputs when deploying machine learning systems. […] Key Method: This enables anomaly detectors to generalize and detect unseen anomalies. In extensive experiments on natural language processing and small- and large-scale vision tasks, we find that Outlier Exposure significantly improves detection performance. We also observe that cutting-edge generative models trained on CIFAR-10 may assign higher likelihoods to SVHN images than to CIFAR-10 images; we use OE to mitigate this…
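Outlier Exposure (OE) trains an anomaly detector against an auxiliary dataset of outlier examples. As a minimal sketch of that idea (not the authors' reference implementation), the PyTorch-style snippet below adds a term that pushes the classifier's predictions on auxiliary outliers toward the uniform distribution; the function name `oe_loss` and the weight `lambda_oe` are illustrative assumptions.

```python
# Minimal sketch of an Outlier Exposure-style objective (PyTorch assumed).
# `model`, `x_in`, `y_in`, `x_out`, and `lambda_oe` are illustrative names.
import torch.nn.functional as F

def oe_loss(model, x_in, y_in, x_out, lambda_oe=0.5):
    logits_in = model(x_in)    # labeled in-distribution batch
    logits_out = model(x_out)  # unlabeled auxiliary outlier batch

    # Standard classification loss on in-distribution data.
    loss_in = F.cross_entropy(logits_in, y_in)

    # OE term: cross-entropy between the softmax prediction on outliers and
    # the uniform distribution, i.e. the mean over classes of -log p.
    loss_out = -F.log_softmax(logits_out, dim=1).mean()

    return loss_in + lambda_oe * loss_out
```

At test time the usual maximum-softmax-probability score can then be used to flag anomalies, since training with the OE term lowers that score on unseen outliers.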
Citations

Latent Outlier Exposure for Anomaly Detection with Contaminated Data
TLDR
This work proposes a strategy for training an anomaly detector in the presence of unlabeled anomalies that is compatible with a broad class of models and uses a combination of two losses that share parameters: one for the normal data and one for the anomalous data.
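One way to read that TLDR is as a joint assignment-and-training scheme: score the unlabeled data, treat the highest-scoring fraction (set by an assumed contamination ratio) as latent anomalies, and apply the anomalous loss to them and the normal loss to the rest. The sketch below reflects only that reading; `score_fn`, `loss_normal`, `loss_anom`, and `contamination` are placeholders, not the paper's API.

```python
# Hedged sketch of a latent-outlier-exposure-style training step on a
# contaminated (unlabeled) batch; assumes contamination * batch_size >= 1.
import torch

def loe_step(score_fn, loss_normal, loss_anom, x, contamination=0.1):
    scores = score_fn(x)                          # higher score = more anomalous
    k = max(1, int(contamination * x.shape[0]))   # expected number of anomalies

    # Latent assignment: the top-k scoring samples get the anomalous loss.
    mask = torch.zeros_like(scores, dtype=torch.bool)
    mask[torch.topk(scores, k).indices] = True

    # Two losses that share the same model parameters.
    return loss_normal(x[~mask]).mean() + loss_anom(x[mask]).mean()
```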
Dense anomaly detection by robust learning on synthetic negative data
TLDR
Synthetic negative patches that simultaneously achieve high inlier likelihood and uniform discriminative prediction are generated, and anomalies are detected according to a principled information-theoretic criterion that can be consistently applied through training and inference.
Deep Anomaly Detection by Residual Adaptation
TLDR
This paper proposes a novel approach to deep anomaly detection that augments large pretrained networks with residual corrections adapting them to the anomaly detection task; this gives rise to a highly parameter-efficient learning mechanism, enhances disentanglement of representations in the pretrained model, and outperforms all existing anomaly detection methods, including other baselines utilizing pretrained networks.
PANDA - Adapting Pretrained Features for Anomaly Detection
TLDR
This work proposes two methods for combating collapse: a variant of early stopping that dynamically learns the stopping iteration, and elastic regularization inspired by continual learning; the resulting approach outperforms the state-of-the-art in both the one-class and outlier exposure settings.
Are all outliers alike? On Understanding the Diversity of Outliers for Detecting OODs
TLDR
A taxonomy of OOD outlier inputs based on their source and nature of uncertainty is presented and a novel integrated detection approach that uses multiple attributes corresponding to different types of outliers is developed.
AutoOD: Neural Architecture Search for Outlier Detection
TLDR
This paper proposes AutoOD, an automated outlier detection framework, which aims to search for an optimal neural network model within a predefined search space, and introduces an experience replay mechanism based on self-imitation learning to improve the sample efficiency.
CutPaste: Self-Supervised Learning for Anomaly Detection and Localization
TLDR
This work proposes a two-stage framework for building anomaly detectors using normal training data only, which first learns self-supervised deep representations and then builds a generative one-class classifier on learned representations.
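The two stages summarized above can be pictured concretely: an augmentation that cuts a patch from an image and pastes it elsewhere in the same image supplies the "anomalous" class for self-supervised training, after which a one-class model is fit on the learned representations. The snippet below sketches only the augmentation step; the patch-size bounds and the function name `cutpaste` are illustrative assumptions.

```python
# Hedged sketch of a CutPaste-style augmentation.
# `img` is assumed to be a PyTorch image tensor of shape (C, H, W).
import random

def cutpaste(img, min_frac=0.05, max_frac=0.15):
    _, h, w = img.shape
    ph = random.randint(int(min_frac * h), int(max_frac * h))  # patch height
    pw = random.randint(int(min_frac * w), int(max_frac * w))  # patch width

    # Random source and destination top-left corners.
    sy, sx = random.randint(0, h - ph), random.randint(0, w - pw)
    dy, dx = random.randint(0, h - ph), random.randint(0, w - pw)

    # Copy the source patch over the destination region.
    out = img.clone()
    out[:, dy:dy + ph, dx:dx + pw] = img[:, sy:sy + ph, sx:sx + pw]
    return out
```

A binary classifier trained to tell `img` from `cutpaste(img)` then provides the representations on which the generative one-class classifier is built.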
Unsupervised Learning of Multi-level Structures for Anomaly Detection
TLDR
A novel method generates anomalous data by breaking up global structures while preserving the local structures of normal data at multiple levels, and aggregates the outputs of all level-specific detectors to obtain a model that can detect all potential anomalies.
Deep Semi-Supervised Anomaly Detection
TLDR
This work presents Deep SAD, an end-to-end deep methodology for general semi-supervised anomaly detection, and introduces an information-theoretic framework for deep anomaly detection based on the idea that the entropy of the latent distribution for normal data should be lower than the entropy of the anomalous distribution, which can serve as a theoretical interpretation of the method.
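The objective can be sketched as mapping unlabeled (assumed mostly normal) points close to a fixed center in feature space while pushing labeled anomalies away via an inverse-distance term; the low-entropy-for-normal-data idea in the TLDR corresponds to concentrating normal latents near that center. The snippet below is a hedged sketch rather than the authors' implementation; `phi`, `eta`, and `eps` are illustrative names, and inputs are assumed to be PyTorch tensors.

```python
# Hedged sketch of a Deep-SAD-style objective: unlabeled points are pulled
# toward a center c; labeled points use an exponent of +1 (normal) or -1
# (anomalous), so anomalies incur an inverse-distance penalty.

def deep_sad_loss(phi, c, x_unlabeled, x_labeled, y_labeled, eta=1.0, eps=1e-6):
    d_unl = ((phi(x_unlabeled) - c) ** 2).sum(dim=1)   # squared distances to c
    d_lab = ((phi(x_labeled) - c) ** 2).sum(dim=1)

    # y_labeled is +1 for labeled normals and -1 for labeled anomalies.
    labeled_term = (d_lab + eps) ** y_labeled.float()

    return d_unl.mean() + eta * labeled_term.mean()
```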

References

Showing 1-10 of 52 references
Deep Autoencoding Gaussian Mixture Model for Unsupervised Anomaly Detection
TLDR
A Deep Autoencoding Gaussian Mixture Model (DAGMM) for unsupervised anomaly detection, which significantly outperforms state-of-the-art anomaly detection techniques, and achieves up to 14% improvement based on the standard F1 score.
Learning Confidence for Out-of-Distribution Detection in Neural Networks
TLDR
This work proposes a method of learning confidence estimates for neural networks that is simple to implement and produces intuitively interpretable outputs, and addresses the problem of calibrating out-of-distribution detectors.
Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples
TLDR
A novel training method for classifiers so that such inference algorithms can work better, and it is demonstrated its effectiveness using deep convolutional neural networks on various popular image datasets.
Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks
TLDR
The proposed ODIN method is based on the observation that temperature scaling and adding small perturbations to the input can separate the softmax score distributions of in- and out-of-distribution images, allowing for more effective detection; it consistently outperforms the baseline approach by a large margin.
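Those two ingredients, temperature scaling and an input perturbation that raises the predicted class's probability, can be sketched as below; the snippet assumes PyTorch, and the `temperature` and `epsilon` values are illustrative rather than the paper's recommended settings.

```python
# Hedged sketch of an ODIN-style in-distribution score.
import torch
import torch.nn.functional as F

def odin_score(model, x, temperature=1000.0, epsilon=0.0014):
    x = x.clone().requires_grad_(True)
    logits = model(x) / temperature
    # Negative log-probability of the predicted class under temperature scaling.
    nll = F.cross_entropy(logits, logits.argmax(dim=1))
    nll.backward()

    # Step the input in the direction that increases the predicted class's probability.
    x_perturbed = x - epsilon * x.grad.sign()
    with torch.no_grad():
        probs = F.softmax(model(x_perturbed) / temperature, dim=1)
    return probs.max(dim=1).values   # higher = more likely in-distribution
```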
Systematic construction of anomaly detection benchmarks from real data
TLDR
A methodology for transforming existing classification data sets into ground-truthed benchmark data sets for anomaly detection, which produces data sets that vary along three important dimensions: point difficulty, relative frequency of anomalies, and clusteredness.
LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop
TLDR
This work proposes to amplify human effort through a partially automated labeling scheme, leveraging deep learning with humans in the loop, and constructs a new image dataset, LSUN, which contains around one million labeled images for each of 10 scene categories and 20 object categories.
On Calibration of Modern Neural Networks
TLDR
It is discovered that modern neural networks, unlike those from a decade ago, are poorly calibrated, and on most datasets, temperature scaling -- a single-parameter variant of Platt Scaling -- is surprisingly effective at calibrating predictions.
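Since temperature scaling recurs in this reference list (ODIN above applies it at inference; this paper fits it post hoc for calibration), a short sketch may help; the function name `fit_temperature` and the optimizer settings are assumptions, not the paper's reference code.

```python
# Hedged sketch of post-hoc temperature scaling: fit a single scalar T on
# held-out validation logits by minimizing the NLL of (logits / T). Accuracy
# is unchanged because argmax is invariant to a positive temperature.
import torch
import torch.nn.functional as F

def fit_temperature(val_logits, val_labels, max_iter=200, lr=0.01):
    log_t = torch.zeros(1, requires_grad=True)   # optimize log T so T stays positive
    opt = torch.optim.LBFGS([log_t], lr=lr, max_iter=max_iter)

    def closure():
        opt.zero_grad()
        loss = F.cross_entropy(val_logits / log_t.exp(), val_labels)
        loss.backward()
        return loss

    opt.step(closure)
    return log_t.exp().item()                    # fitted temperature T
```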
Benchmarking Neural Network Robustness to Common Corruptions and Surface Variations
TLDR
The first benchmark, ImageNet-C, standardizes and expands the corruption robustness topic, while showing which classifiers are preferable in safety-critical applications, and proposes a new dataset called Icons-50 which opens research on a new kind of robustness, surface variation robustness.
Visualizing and Understanding Convolutional Networks
TLDR
A novel visualization technique is introduced that gives insight into the function of intermediate feature layers and the operation of the classifier in large convolutional network models; used in a diagnostic role, it finds model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark.
Exploring the Limits of Weakly Supervised Pretraining
TLDR
This paper presents a unique study of transfer learning with large convolutional networks trained to predict hashtags on billions of social media images and shows improvements on several image classification and object detection tasks, and reports the highest ImageNet-1k single-crop, top-1 accuracy to date.