Corpus ID: 235363987

Mean-Shifted Contrastive Loss for Anomaly Detection

@article{Reiss2021MeanShiftedCL,
  title={Mean-Shifted Contrastive Loss for Anomaly Detection},
  author={Tal Reiss and Yedid Hoshen},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.03844}
}
Deep anomaly detection methods learn representations that separate normal from anomalous samples. Very effective representations are obtained when powerful externally trained feature extractors (e.g. ResNets pre-trained on ImageNet) are fine-tuned on the training data, which consists of normal samples and no anomalies. However, this is a difficult task that can suffer from catastrophic collapse, i.e. it is prone to learning trivial and non-specific features. In this paper, we propose a… 
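As an illustration of what a contrastive objective computed in a mean-shifted, pre-trained feature space can look like, here is a minimal sketch: features are L2-normalized, re-centered around the mean feature of the normal training set, re-normalized, and trained with an NT-Xent-style loss over two augmented views. The helper names, temperature value, and exact loss form are assumptions for illustration, not the paper's precise recipe.

```python
# Hypothetical sketch of a mean-shifted contrastive objective (illustrative, not the
# authors' exact code). `center` is assumed to be the mean of the L2-normalized
# features of the normal training set.
import torch
import torch.nn.functional as F

def mean_shift(features: torch.Tensor, center: torch.Tensor) -> torch.Tensor:
    """L2-normalize features, subtract the normal-set center, and re-normalize."""
    features = F.normalize(features, dim=-1)
    return F.normalize(features - center, dim=-1)

def mean_shifted_contrastive_loss(z1, z2, center, temperature=0.25):
    """NT-Xent-style loss computed in the mean-shifted space.
    z1, z2: backbone features of two augmented views of the same batch, shape (B, D)."""
    h1, h2 = mean_shift(z1, center), mean_shift(z2, center)
    z = torch.cat([h1, h2], dim=0)                       # (2B, D), unit norm
    sim = z @ z.t() / temperature                        # cosine similarities as logits
    mask = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))           # drop self-similarity
    B = z1.size(0)
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)]).to(z.device)
    return F.cross_entropy(sim, targets)                 # each view's positive is its pair

# Usage sketch: center = F.normalize(train_features, dim=-1).mean(dim=0)
# loss = mean_shifted_contrastive_loss(backbone(aug1(x)), backbone(aug2(x)), center)
```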

Citations

No Shifted Augmentations (NSA): compact distributions for robust self-supervised Anomaly Detection
TLDR
This work investigates how the geometric compactness of the in-distribution (ID) feature distribution makes isolating and detecting outliers easier, especially in the realistic situation where the ID training data is polluted, and proposes novel architectural modifications to the self-supervised feature learning step that enable such compact ID distributions to be learned.
Self-Supervised Anomaly Detection: A Survey and Outlook
TLDR
This paper reviews current approaches to self-supervised anomaly detection, presenting technical details of the common approaches, discussing their strengths and drawbacks, and comparing the performance of these models against each other and against other state-of-the-art anomaly detection models.
Efficient Anomaly Detection Using Self-Supervised Multi-Cue Tasks
TLDR
This work presents a new out-of-distribution detection function, highlights its improved stability compared to existing methods, and evaluates the method on an extensive protocol covering various anomaly types, from object anomalies and style anomalies with fine-grained classifiers to local anomalies with face anti-spoofing datasets.
Towards Anomaly Detection in Reinforcement Learning
TLDR
This work addresses the question of what anomaly detection (AD) means in the context of reinforcement learning (RL), links it to related fields such as lifelong RL and generalization, and identifies non-stationarity as one of the key drivers for future research on AD in RL.
Transformaly - Two (Feature Spaces) Are Better Than One
TLDR
Transformaly exploits a pre-trained Vision Transformer to extract two feature vectors: the pre-trained (agnostic) features and the teacher-student (fine-tuned) features, and reports state-of-the-art AUROC results.
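As a rough illustration of scoring with two feature spaces, assuming a kNN-distance anomaly score in each space and a simple sum for combining them (both assumptions; the paper's actual scoring may differ):

```python
# Illustrative two-feature-space anomaly score: one score in the pre-trained
# ("agnostic") space, one in the fine-tuned space, combined by summation.
import torch

def knn_score(test_feat: torch.Tensor, train_feats: torch.Tensor, k: int = 2) -> torch.Tensor:
    """Mean distance to the k nearest normal training features; higher = more anomalous."""
    dists = torch.cdist(test_feat.unsqueeze(0), train_feats).squeeze(0)
    return dists.topk(k, largest=False).values.mean()

def combined_score(f_agnostic, f_finetuned, train_agnostic, train_finetuned):
    return knn_score(f_agnostic, train_agnostic) + knn_score(f_finetuned, train_finetuned)
```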
Approaches Toward Physical and General Video Anomaly Detection
TLDR
The Physical Anomalous Trajectory or Motion dataset is introduced, containing six video classes that differ in the presented phenomena, the normal-class variability, and the kind of anomalies in the videos.
Out-of-Distribution Detection without Class Labels
TLDR
This work discovers that classifiers learned by self-supervised image clustering methods provide a strong baseline for anomaly detection on unlabeled multi-class datasets; it fine-tunes pretrained features on the task of classifying images by their cluster labels and uses the cluster labels as “pseudo supervision” for out-of-distribution methods.
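A minimal sketch of the cluster-labels-as-pseudo-supervision idea, assuming k-means clustering in a pretrained feature space and a softmax-based score downstream (the clustering method, cluster count, and OOD score are assumptions for illustration):

```python
# Illustrative pseudo-labeling step: cluster unlabeled training images in a
# pretrained feature space and use the cluster ids as classification targets.
import numpy as np
from sklearn.cluster import KMeans

def pseudo_labels(train_feats: np.ndarray, num_clusters: int = 10) -> np.ndarray:
    """train_feats: (N, D) pretrained features of the unlabeled training set."""
    return KMeans(n_clusters=num_clusters, n_init=10).fit_predict(train_feats)

# A classifier fine-tuned on (image, pseudo_label) pairs can then be paired with any
# label-based OOD score, e.g. maximum softmax probability over the cluster classes.
```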
Deep One-Class Classification via Interpolated Gaussian Descriptor
TLDR
The interpolated Gaussian descriptor (IGD) method is introduced: a novel one-class classification (OCC) model that learns a one-class Gaussian anomaly classifier trained with adversarially interpolated training samples; it achieves better detection accuracy than current state-of-the-art models and shows better robustness in problems with small or contaminated training sets.
Data Invariants to Understand Unsupervised Out-of-Distribution Detection
TLDR
A characterization of unsupervised out-of-distribution (U-OOD) detection is proposed based on the invariants of the training dataset, and it is shown how this characterization is unknowingly embodied in the top-scoring MahaAD method, thereby explaining its quality.
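For context, a minimal Mahalanobis-distance anomaly score over pretrained features, in the spirit of MahaAD (details such as multi-layer aggregation may differ; this single-Gaussian version is a simplification):

```python
# Illustrative Mahalanobis-distance anomaly scoring: fit a Gaussian to the pretrained
# features of the normal training set and score test samples by squared distance.
import numpy as np

def fit_gaussian(train_feats: np.ndarray, eps: float = 1e-6):
    """train_feats: (N, D). Returns the mean and the (regularized) inverse covariance."""
    mean = train_feats.mean(axis=0)
    cov = np.cov(train_feats, rowvar=False) + eps * np.eye(train_feats.shape[1])
    return mean, np.linalg.inv(cov)

def mahalanobis_score(x: np.ndarray, mean: np.ndarray, cov_inv: np.ndarray) -> np.ndarray:
    """x: (M, D) test features. Higher score = more anomalous."""
    d = x - mean
    return np.einsum("md,dk,mk->m", d, cov_inv, d)
```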
A Survey on Unsupervised Industrial Anomaly Detection Algorithms
TLDR
A thorough overview of recently proposed unsupervised algorithms for visual anomaly detection is provided, organized into categories whose innovation points and frameworks are described in detail; it is expected to assist both the research community and industry in developing a broader, cross-domain perspective.
...

References

SHOWING 1-10 OF 37 REFERENCES
PANDA - Adapting Pretrained Features for Anomaly Detection
TLDR
This work proposes two methods for combating collapse: a variant of early stopping that dynamically learns the stopping iteration, and elastic regularization inspired by continual learning; the resulting approach outperforms the state-of-the-art in the one-class and outlier-exposure settings.
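The “elastic regularization inspired by continual learning” is described here only at a high level; below is a generic EWC-style penalty that discourages fine-tuned weights from drifting away from their pretrained values, weighted by a per-parameter importance estimate. Treat it as an illustration of the general idea, not PANDA's exact formulation.

```python
# Generic EWC-style regularizer for fine-tuning: penalize movement away from the
# pretrained weights, weighted by a per-parameter importance estimate (taken as given).
import torch

def elastic_penalty(model, pretrained_params, importance, lam=1.0):
    """pretrained_params / importance: dicts keyed by parameter name."""
    penalty = 0.0
    for name, p in model.named_parameters():
        if name in pretrained_params:
            penalty = penalty + (importance[name] * (p - pretrained_params[name]) ** 2).sum()
    return lam * penalty

# total_loss = task_loss + elastic_penalty(model, theta_pretrained, importance_weights)
```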
MVTec AD — A Comprehensive Real-World Dataset for Unsupervised Anomaly Detection
TLDR
This work introduces the MVTec Anomaly Detection (MVTec AD) dataset, containing 5354 high-resolution color images of different object and texture categories, and conducts a thorough evaluation of current state-of-the-art unsupervised anomaly detection methods based on deep architectures such as convolutional autoencoders, generative adversarial networks, and feature descriptors using pre-trained convolutional neural networks.
Deep One-Class Classification
TLDR
This paper introduces a new anomaly detection method, Deep Support Vector Data Description (Deep SVDD), which is trained on an anomaly-detection-based objective, and shows the effectiveness of the method on the MNIST and CIFAR-10 image benchmark datasets as well as on the detection of adversarial examples of GTSRB stop signs.
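In its simplest one-class form, the Deep SVDD objective pulls mapped samples toward a fixed center and scores test points by their distance to it; a minimal sketch (the soft-boundary variant and the original's architectural constraints, such as bias-free layers, are omitted):

```python
# Minimal one-class Deep SVDD-style objective and score: features of normal training
# samples are pulled toward a fixed center c; distance to c serves as the anomaly score.
import torch

def deep_svdd_loss(features: torch.Tensor, center: torch.Tensor) -> torch.Tensor:
    """features: (B, D) network outputs for normal samples; center: (D,)."""
    return ((features - center) ** 2).sum(dim=1).mean()

def deep_svdd_score(features: torch.Tensor, center: torch.Tensor) -> torch.Tensor:
    """Higher = more anomalous."""
    return ((features - center) ** 2).sum(dim=1)
```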
Deep Anomaly Detection Using Geometric Transformations
TLDR
The main idea behind the scheme is to train a multi-class model to discriminate between dozens of geometric transformations applied to the given images; this generates feature detectors that effectively identify, at test time, anomalous images based on the softmax activation statistics of the model when applied to transformed images.
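A simplified sketch of transformation-classification scoring: apply each transformation, ask the classifier which one was applied, and average the probability it assigns to the correct answer (the original work uses a much larger transformation set and a more refined statistic than this plain average):

```python
# Simplified transformation-classification scoring: the softmax probability the model
# assigns to the correct transformation, averaged over all transformations.
import torch

def normality_score(model, image: torch.Tensor, transforms) -> torch.Tensor:
    """image: (C, H, W); transforms: list of callables; model outputs (1, len(transforms)).
    Higher score = more normal."""
    probs = []
    for k, t in enumerate(transforms):
        logits = model(t(image).unsqueeze(0))
        probs.append(torch.softmax(logits, dim=1)[0, k])
    return torch.stack(probs).mean()
```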
Classification-Based Anomaly Detection for General Data
TLDR
This work presents a unifying view and proposes an open-set method to relax current generalization assumptions, and extends the applicability of transformation-based methods to non-image data using random affine transformations.
CSI: Novelty Detection via Contrastive Learning on Distributionally Shifted Instances
TLDR
A simple yet effective method named contrasting shifted instances (CSI) is proposed, inspired by the recent success of contrastive learning of visual representations: in addition to contrasting a given sample with other instances, as in conventional contrastive learning methods, it contrasts the sample with distributionally-shifted augmentations of itself.
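A sketch of the shifted-instances idea, assuming rotations act as the distribution-shifting augmentations: shifted copies are added to the batch so that a standard contrastive loss treats them as extra negatives rather than positives (other components of the full CSI objective are omitted):

```python
# Illustrative batch construction for shifted-instance contrastive learning: rotated
# copies of each image enlarge the pool of negatives; positives remain the usual two
# augmented views of the unshifted image.
import torch

def add_shifted_instances(images: torch.Tensor) -> torch.Tensor:
    """images: (B, C, H, W) -> (4B, C, H, W); k = 0 keeps the original images."""
    return torch.cat([torch.rot90(images, k, dims=(2, 3)) for k in range(4)], dim=0)
```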
Learning Deep Features for One-Class Classification
TLDR
A novel deep-learning-based approach for one-class transfer learning is presented, in which labeled data from an unrelated task is used for feature learning in one-class classification; it achieves significant improvements over the state-of-the-art.
Unsupervised Representation Learning by Predicting Image Rotations
TLDR
This work proposes to learn image features by training ConvNets to recognize the 2D rotation applied to their input images, and demonstrates both qualitatively and quantitatively that this apparently simple task provides a very powerful supervisory signal for semantic feature learning.
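A minimal sketch of the rotation-prediction pretext task, assuming the usual four rotations {0°, 90°, 180°, 270°} and a network ending in a 4-way classification head:

```python
# Rotation-prediction pretext task: rotate each image by 0/90/180/270 degrees and
# train the network to predict which rotation was applied.
import torch
import torch.nn.functional as F

def rotation_batch(images: torch.Tensor):
    """images: (B, C, H, W) -> rotated images (4B, C, H, W) and rotation labels (4B,)."""
    rotated = torch.cat([torch.rot90(images, k, dims=(2, 3)) for k in range(4)], dim=0)
    labels = torch.arange(4).repeat_interleave(images.size(0))
    return rotated, labels

def rotation_loss(model, images: torch.Tensor) -> torch.Tensor:
    rotated, labels = rotation_batch(images)
    return F.cross_entropy(model(rotated), labels.to(rotated.device))
```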
Deep multi-scale video prediction beyond mean square error
TLDR
This work trains a convolutional network to generate future frames given an input sequence and proposes three different and complementary feature learning strategies: a multi-scale architecture, an adversarial training method, and an image gradient difference loss function.
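Of the three strategies, the image gradient difference loss is the easiest to state; a sketch of an L1 variant (the original parameterizes the loss with an exponent, taken here as 1):

```python
# Illustrative image gradient difference loss: penalize the difference between the
# absolute spatial gradients of the predicted and target frames.
import torch

def gradient_difference_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """pred, target: (B, C, H, W)."""
    def grads(x):
        dx = (x[..., :, 1:] - x[..., :, :-1]).abs()   # horizontal gradients
        dy = (x[..., 1:, :] - x[..., :-1, :]).abs()   # vertical gradients
        return dx, dy
    pdx, pdy = grads(pred)
    tdx, tdy = grads(target)
    return (pdx - tdx).abs().mean() + (pdy - tdy).abs().mean()
```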
Learning and Evaluating Representations for Deep One-class Classification
TLDR
A novel distribution-augmented contrastive learning method is proposed that extends training distributions via data augmentation to obstruct the uniformity of contrastive representations; the paper argues that classifiers inspired by the statistical perspective in generative or discriminative models are more effective than existing approaches.
...