Corpus ID: 235363987

Mean-Shifted Contrastive Loss for Anomaly Detection

@article{Reiss2021MeanShiftedCL,
  title={Mean-Shifted Contrastive Loss for Anomaly Detection},
  author={Tal Reiss and Yedid Hoshen},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.03844}
}
Deep anomaly detection methods learn representations that separate normal from anomalous samples. Very effective representations are obtained when powerful externally trained feature extractors (e.g. ResNets pre-trained on ImageNet) are fine-tuned on the training data, which consists of normal samples and no anomalies. However, this is a difficult task that can suffer from catastrophic collapse, i.e. it is prone to learning trivial and non-specific features. In this paper, we propose a…
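The mean-shifted idea in the title lends itself to a short sketch. Below is a minimal, hypothetical PyTorch rendering of a contrastive loss computed on features re-centered around the mean of the normal training features, as the abstract suggests; the function names, temperature value, and batching details are illustrative assumptions, not the authors' code.

import torch
import torch.nn.functional as F

def mean_shift(features, center):
    # Re-center L2-normalized features around the normal-set mean,
    # then project back onto the unit sphere.
    shifted = F.normalize(features, dim=-1) - center
    return F.normalize(shifted, dim=-1)

def msc_loss(z1, z2, center, temperature=0.25):
    # SimCLR-style contrastive loss on mean-shifted representations of
    # two augmented views z1, z2 of the same batch (each [B, d]).
    h1, h2 = mean_shift(z1, center), mean_shift(z2, center)
    reps = torch.cat([h1, h2], dim=0)            # [2B, d]
    logits = reps @ reps.t() / temperature       # cosine similarities
    logits.fill_diagonal_(float('-inf'))         # exclude self-pairs
    b = z1.shape[0]
    # Each view's positive is the other augmented view of the same image.
    targets = torch.cat([torch.arange(b) + b, torch.arange(b)]).to(z1.device)
    return F.cross_entropy(logits, targets)

Here center would be the mean of the L2-normalized pretrained features of the normal training set, computed once before fine-tuning.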

Citations

No Shifted Augmentations (NSA): compact distributions for robust self-supervised Anomaly Detection
TLDR
This work investigates how the geometrical compactness of the ID feature distribution makes isolating and detecting outliers easier, especially in the realistic situation where ID training data is polluted, and proposes novel architectural modifications to the self-supervised feature learning step that enable such compact distributions for ID data to be learned.
Transformaly - Two (Feature Spaces) Are Better Than One
TLDR
Transformaly exploits a pre-trained Vision Transformer to extract two feature vectors: the pre-trained (agnostic) features and the teacher-student (fine-tuned) features, and reports state-of-the-art AUROC results.
Approaches Toward Physical and General Video Anomaly Detection
TLDR
The Physical Anomalous Trajectory or Motion dataset is introduced, which contains six different video classes that differ in the presented phenomena, the normal class variability, and the kind of anomalies in the videos.
Out-of-Distribution Detection without Class Labels
TLDR
This work discovers that classifiers learned by self-supervised image clustering methods provide a strong baseline for anomaly detection on unlabeled multi-class datasets; it finetunes pretrained features on the task of classifying images by their cluster labels and uses the cluster labels as “pseudo supervision” for out-of-distribution methods.
Deep One-Class Classification via Interpolated Gaussian Descriptor
TLDR
The Interpolated Gaussian Descriptor (IGD) method is introduced, a novel OCC model that learns a one-class Gaussian anomaly classifier trained with adversarially interpolated training samples; it achieves better detection accuracy than current state-of-the-art models and shows better robustness in problems with small or contaminated training sets.
Data Invariants to Understand Unsupervised Out-of-Distribution Detection
TLDR
A characterization of U-OOD is proposed based on the invariants of the training dataset and it is shown how this characterization is unknowingly embodied in the top-scoring MahaAD method, thereby explaining its quality.
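Since MahaAD appears here as the top-scoring U-OOD baseline, a brief sketch of that style of scoring may help: fit a single Gaussian to frozen pretrained features of the normal data and score by Mahalanobis distance. The ridge term below is an illustrative choice, not taken from the paper.

import numpy as np

def fit_gaussian(train_feats):
    # train_feats: [n, d] features of normal data from a frozen network.
    mu = train_feats.mean(axis=0)
    cov = np.cov(train_feats, rowvar=False)
    # A small ridge keeps the covariance invertible for high-dim features.
    prec = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
    return mu, prec

def maha_score(x, mu, prec):
    # Mahalanobis distance to the normal-data Gaussian; larger = more anomalous.
    diff = x - mu
    return float(diff @ prec @ diff)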
An Empirical Investigation of 3D Anomaly Detection and Segmentation
TLDR
A simple 3D-only method is uncovered that outperforms all recent approaches while not using deep learning, external pretraining datasets, or color information, and it is further combined with 2D color features.
Self-supervised Multi-class Pre-training for Unsupervised Anomaly Detection and Segmentation in Medical Images
TLDR
A new self-supervised pre-training method for UAD designed for MIA applications, named Multi-class Strong Augmentation via Contrastive Learning (MSACL), based on a novel optimisation to contrast normal and multiple classes of synthesised abnormal images.
Multi-centred Strong Augmentation via Contrastive Learning for Unsupervised Lesion Detection and Segmentation
TLDR
A novel self-supervised UAD pre-training algorithm, named Multi-centred Strong Augmentation via Contrastive Learning (MSACL), which improves these SOTA UAD models on four medical imaging datasets from diverse organs, namely colonoscopy, fundus screening, and COVID-19 chest X-ray datasets.
OCFormer: One-Class Transformer Network for Image Classification
TLDR
A novel deep learning framework based on Vision Transformers (ViT) for one-class classification is proposed; it uses zero-centered Gaussian noise as a pseudo-negative class for the latent space representation and trains the network using the optimal loss function.

References

SHOWING 1-10 OF 37 REFERENCES
PANDA - Adapting Pretrained Features for Anomaly Detection
TLDR
This work proposes two methods for combating collapse: a variant of early stopping that dynamically learns the stopping iteration, and elastic regularization inspired by continual learning; the resulting method outperforms the state-of-the-art in the one-class and outlier-exposure settings.
MVTec AD — A Comprehensive Real-World Dataset for Unsupervised Anomaly Detection
TLDR
This work introduces the MVTec Anomaly Detection (MVTec AD) dataset containing 5354 high-resolution color images of different object and texture categories, and conducts a thorough evaluation of current state-of-the-art unsupervised anomaly detection methods based on deep architectures such as convolutional autoencoders, generative adversarial networks, and feature descriptors using pre-trained convolutional neural networks.
Deep One-Class Classification
TLDR
This paper introduces a new anomaly detection method, Deep Support Vector Data Description, which is trained on an anomaly detection based objective, and shows the effectiveness of the method on the MNIST and CIFAR-10 image benchmark datasets as well as on the detection of adversarial examples of GTSRB stop signs.
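The one-class objective behind Deep SVDD is compact enough to state directly. A minimal sketch of its standard form follows, omitting the paper's safeguards against the trivial collapsed solution (fixing the center from an initial forward pass and removing bias terms).

import torch

def deep_svdd_loss(features, center):
    # Pull features of normal training samples toward a fixed center c;
    # minimizing the mean squared distance contracts the normal hypersphere.
    return ((features - center) ** 2).sum(dim=1).mean()

def deep_svdd_score(features, center):
    # Test-time anomaly score: squared distance to the center.
    return ((features - center) ** 2).sum(dim=1)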
Deep Anomaly Detection Using Geometric Transformations
TLDR
The main idea behind the scheme is to train a multi-class model to discriminate between dozens of geometric transformations applied to the given images; this generates feature detectors that effectively identify anomalous images at test time based on the softmax activation statistics of the model when applied to transformed images.
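A hedged sketch of that test-time scoring rule follows, using four rotations as a simplified stand-in for the paper's much larger transformation family; model is assumed to be a classifier already trained to predict which transformation was applied.

import torch

def geo_score(model, image):
    # Higher = more normal: sum the softmax probability the model assigns
    # to the correct transformation label for each transformed copy.
    total = 0.0
    for k in range(4):                       # 0/90/180/270-degree rotations
        rotated = torch.rot90(image, k, dims=(-2, -1)).unsqueeze(0)
        probs = model(rotated).softmax(dim=-1)
        total += probs[0, k].item()
    return total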
Classification-Based Anomaly Detection for General Data
TLDR
This work presents a unifying view and proposes an open-set method to relax current generalization assumptions, and extends the applicability of transformation-based methods to non-image data using random affine transformations.
Learning Deep Features for One-Class Classification
TLDR
A novel deep-learning-based approach for one-class transfer learning in which labeled data from an unrelated task is used for feature learning in one-class classification, achieving significant improvements over the state-of-the-art.
Unsupervised Representation Learning by Predicting Image Rotations
TLDR
This work proposes to learn image features by training ConvNets to recognize the 2D rotation applied to their input image, and demonstrates both qualitatively and quantitatively that this apparently simple task provides a very powerful supervisory signal for semantic feature learning.
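The pretext task is simple enough to sketch: rotate each image by 0/90/180/270 degrees and train a 4-way classifier to recognize which rotation was applied. The helper names below are illustrative, not from the paper.

import torch
import torch.nn.functional as F

def rotation_batch(images):
    # images: [B, C, H, W] -> 4B rotated copies and rotation labels 0..3.
    rotated = torch.cat([torch.rot90(images, k, dims=(-2, -1)) for k in range(4)])
    labels = torch.arange(4).repeat_interleave(images.shape[0])
    return rotated, labels

def rotnet_loss(model, images):
    rotated, labels = rotation_batch(images)
    return F.cross_entropy(model(rotated), labels.to(rotated.device))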
Deep multi-scale video prediction beyond mean square error
TLDR
This work trains a convolutional network to generate future frames given an input sequence and proposes three different and complementary feature learning strategies: a multi-scale architecture, an adversarial training method, and an image gradient difference loss function.
A Geometric Framework for Unsupervised Anomaly Detection
TLDR
A new geometric framework for unsupervised anomaly detection is presented; such algorithms are designed to process unlabeled data and detect anomalies in sparse regions of the feature space.
Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty
TLDR
This work finds that self-supervision can benefit robustness in a variety of ways, including robustness to adversarial examples, label corruption, and common input corruptions, and greatly benefits out-of-distribution detection on difficult, near-distribution outliers.