Corpus ID: 232417176

Neural Transformation Learning for Deep Anomaly Detection Beyond Images

@inproceedings{Qiu2021NeuralTL,
  title={Neural Transformation Learning for Deep Anomaly Detection Beyond Images},
  author={Chen Qiu and Timo Pfrommer and Marius Kloft and Stephan Mandt and Maja R. Rudolph},
  booktitle={ICML},
  year={2021}
}
Data transformations (e.g. rotations, reflections, and cropping) play an important role in self-supervised learning. Typically, images are transformed into different views, and neural networks trained on tasks involving these views produce useful feature representations for downstream tasks, including anomaly detection. However, for anomaly detection beyond image data, it is often unclear which transformations to use. Here we present a simple end-to-end procedure for anomaly detection with… 
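The procedure summarized in the abstract (NeuTraL AD) learns the transformations themselves and scores a sample with a deterministic contrastive loss: each transformed view should be similar to the original sample but dissimilar to the other views. A minimal sketch of that score, assuming toy fixed transformations and plain cosine similarity in place of the paper's learned neural transformations and encoder:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors (lists of floats)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb + 1e-9)

def dcl_score(x, transforms, temperature=0.1):
    """Deterministic contrastive loss used directly as an anomaly score.

    Each transformed view t(x) should be similar to x (positive pair) but
    dissimilar to every other view (negative pairs). Higher => more anomalous.
    """
    views = [t(x) for t in transforms]
    score = 0.0
    for k, v in enumerate(views):
        pos = math.exp(cosine(v, x) / temperature)
        neg = sum(math.exp(cosine(v, w) / temperature)
                  for l, w in enumerate(views) if l != k)
        score += -math.log(pos / (pos + neg))
    return score

# Hypothetical stand-ins for learned transformations:
transforms = [
    lambda v: [2.0 * e for e in v],
    lambda v: list(reversed(v)),
    lambda v: [e + 0.5 for e in v],
    lambda v: [-e for e in v],
]
print(dcl_score([0.5, -1.2, 0.3, 0.8], transforms) > 0)  # scores are positive log-losses
```

In the actual method both the transformations and the encoder are neural networks trained end-to-end to minimize this same loss over normal samples.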
Self-Supervised Anomaly Detection via Neural Autoregressive Flows with Active Learning
TLDR
This work proposes a novel active learning (AL) scheme that relies on neural autoregressive flows (NAF) for self-supervised anomaly detection, specifically on small-scale data; it outperforms existing baselines on multiple time-series and tabular datasets and on a real-world application in advanced manufacturing.
Detecting Anomalies within Time Series using Local Neural Transformations
TLDR
Local Neural Transformations (LNT) is a method that learns local transformations of time series from data; it produces an anomaly score for each time step and can therefore be used to detect anomalies within time series.
Contrastive Predictive Coding for Anomaly Detection and Segmentation
TLDR
This paper shows that the patch-wise contrastive loss of Contrastive Predictive Coding can be interpreted directly as an anomaly score and used to create anomaly segmentation masks, achieving promising results for both anomaly detection and segmentation on the challenging MVTec-AD dataset.
TracInAD: Measuring Influence for Anomaly Detection
TLDR
The proposed methodology detects anomalies with TracIn, an influence measure originally introduced for explainability, and achieves comparable or better detection accuracy on medical and cyber-security tabular benchmark data.
Latent Outlier Exposure for Anomaly Detection with Contaminated Data
TLDR
This work proposes a strategy for training an anomaly detector in the presence of unlabeled anomalies that is compatible with a broad class of models and uses a combination of two losses that share parameters: one for the normal and one for the anomalous data.
Data-Efficient and Interpretable Tabular Anomaly Detection
TLDR
A novel AD framework is proposed that adapts a white-box model class, Generalized Additive Models, to detect anomalies using a partial-identification objective that naturally handles noisy or heterogeneous features; it can also incorporate a small amount of labeled data to further boost anomaly detection performance in semi-supervised settings.
ADBench: Anomaly Detection Benchmark
TLDR
This work conducts the most comprehensive anomaly detection benchmark with 30 algorithms on 55 benchmark datasets, named ADBench, to identify meaningful insights into the role of supervision and anomaly types, and unlock future directions for researchers in algorithm selection and design.
Perturbation Learning Based Anomaly Detection
TLDR
Compared with state-of-the-art anomaly detection methods, this method does not require any assumption about the shape of the decision boundary and has fewer hyperparameters to determine.
Raising the Bar in Graph-level Anomaly Detection
TLDR
This paper presents a new deep learning approach that significantly improves existing deep one-class approaches for graphs by fixing some of their known problems, including hypersphere collapse and performance flip.
Hyperparameter Sensitivity in Deep Outlier Detection: Analysis and a Scalable Hyper-Ensemble Solution
TLDR
This paper conducts the first large-scale analysis of the hyperparameter (HP) sensitivity of deep OD methods and, through more than 35,000 trained models, quantitatively demonstrates that model selection is inevitable; it then designs ROBOD, an HP-robust and scalable deep hyper-ensemble that assembles models with varying HP configurations, bypassing choice paralysis.
...

References

SHOWING 1-10 OF 69 REFERENCES
Deep Anomaly Detection with Outlier Exposure
TLDR
In extensive experiments on natural language processing and small- and large-scale vision tasks, it is found that Outlier Exposure (OE) significantly improves detection performance, and that cutting-edge generative models trained on CIFAR-10 may assign higher likelihoods to SVHN images than to CIFAR-10 images; OE is used to mitigate this issue.
Deep Anomaly Detection Using Geometric Transformations
TLDR
The main idea behind the scheme is to train a multi-class model to discriminate between dozens of geometric transformations applied to all the given images, which generates feature detectors that effectively identify, at test time, anomalous images based on the softmax activation statistics of the model when applied to transformed images.
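The softmax-statistics score sketched above can be illustrated in a few lines; a minimal sketch assuming the transformation classifier has already been trained, so we work directly with hypothetical softmax outputs (the actual method uses dozens of geometric transforms and a deep CNN):

```python
def geo_score(probs):
    """Anomaly score from transformation-classification softmax outputs.

    probs[k] is the (hypothetical) softmax vector the classifier produces
    for the k-th transformed view; probs[k][k] is the probability assigned
    to the correct transformation. Normal samples yield confident, correct
    predictions, so a low mean diagonal probability signals an anomaly.
    """
    k = len(probs)
    return 1.0 - sum(probs[i][i] for i in range(k)) / k

# A confident classifier on a normal image vs. a confused one on an anomaly:
normal = [[0.85, 0.05, 0.05, 0.05],
          [0.05, 0.85, 0.05, 0.05],
          [0.05, 0.05, 0.85, 0.05],
          [0.05, 0.05, 0.05, 0.85]]
anomalous = [[0.25] * 4 for _ in range(4)]
print(geo_score(normal) < geo_score(anomalous))  # True
```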
GANomaly: Semi-Supervised Anomaly Detection via Adversarial Training
TLDR
This work introduces a novel anomaly detection model that uses a conditional generative adversarial network to jointly learn the generation of the high-dimensional image space and the inference of the latent space, and demonstrates the model's efficacy and superiority over previous state-of-the-art approaches.
Image Anomaly Detection with Generative Adversarial Networks
TLDR
This work proposes a novel approach to anomaly detection using generative adversarial networks, based on searching for a good representation of a given sample in the latent space of the generator; if such a representation is not found, the sample is deemed anomalous.
Learning to Compose Domain-Specific Transformations for Data Augmentation
TLDR
The proposed method can make use of arbitrary, non-deterministic transformation functions, is robust to misspecified user input, and is trained on unlabeled data, which can be used to perform data augmentation for any end discriminative model.
Deep One-Class Classification
TLDR
This paper introduces a new anomaly detection method, Deep Support Vector Data Description, which is trained on an anomaly-detection-based objective, and shows the effectiveness of the method on the MNIST and CIFAR-10 image benchmark datasets as well as on the detection of adversarial examples of GTSRB stop signs.
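At test time the Deep SVDD score is simply the squared distance of an embedding to the learned hypersphere center; a minimal sketch, assuming the embeddings come from an already-trained network and using a hypothetical fixed center:

```python
def svdd_score(z, center):
    """Deep SVDD anomaly score: squared Euclidean distance of the
    embedding z (the network's output for a sample) to the hypersphere
    center c fixed during training. Larger distance => more anomalous.
    """
    return sum((zi - ci) ** 2 for zi, ci in zip(z, center))

center = [0.2, -0.1, 0.4]  # hypothetical center, fixed after training
print(svdd_score([0.2, -0.1, 0.4], center))  # 0.0: an embedding at the center
```

The training objective pulls embeddings of normal data toward this center, so only anomalies end up far from it.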
Deep Semi-Supervised Anomaly Detection
TLDR
This work presents Deep SAD, an end-to-end deep methodology for general semi-supervised anomaly detection, and introduces an information-theoretic framework for deep anomaly detection based on the idea that the entropy of the latent distribution for normal data should be lower than the entropy of the anomalous distribution, which serves as a theoretical interpretation of the method.
LSTM-based Encoder-Decoder for Multi-sensor Anomaly Detection
TLDR
This work proposes a Long Short-Term Memory (LSTM) based Encoder-Decoder scheme for Anomaly Detection (EncDec-AD) that learns to reconstruct 'normal' time-series behavior and thereafter uses reconstruction error to detect anomalies.
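The reconstruction-error scoring in EncDec-AD fits a Gaussian to the reconstruction errors of held-out normal data and scores new points by their Mahalanobis distance. A minimal sketch with a diagonal-covariance simplification (the paper uses a full multivariate Gaussian, and the errors would come from a trained LSTM encoder-decoder rather than being given directly):

```python
def fit_error_model(errors):
    """Fit per-dimension mean and variance of reconstruction errors
    measured on held-out normal sequences (list of error vectors)."""
    n, d = len(errors), len(errors[0])
    mu = [sum(e[j] for e in errors) / n for j in range(d)]
    var = [sum((e[j] - mu[j]) ** 2 for e in errors) / (n - 1) + 1e-6
           for j in range(d)]  # small jitter avoids division by zero
    return mu, var

def encdec_score(error, mu, var):
    """Anomaly score: (diagonal) Mahalanobis distance of a new
    reconstruction error from the normal-error distribution."""
    return sum((error[j] - mu[j]) ** 2 / var[j] for j in range(len(mu)))

# Errors of normal windows cluster near zero; a large error scores high.
mu, var = fit_error_model([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
print(encdec_score([0.1, 0.1], mu, var) < encdec_score([3.0, 0.0], mu, var))  # True
```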
Unsupervised Anomaly Detection with Generative Adversarial Networks to Guide Marker Discovery
TLDR
AnoGAN, a deep convolutional generative adversarial network, is proposed to learn a manifold of normal anatomical variability, accompanied by a novel anomaly scoring scheme based on the mapping from image space to a latent space.
Deep Autoencoding Gaussian Mixture Model for Unsupervised Anomaly Detection
TLDR
A Deep Autoencoding Gaussian Mixture Model (DAGMM) for unsupervised anomaly detection, which significantly outperforms state-of-the-art anomaly detection techniques, and achieves up to 14% improvement based on the standard F1 score.
...