Self-Supervised Anomaly Detection by Self-Distillation and Negative Sampling

@inproceedings{rafiee2022selfdistillation,
  title={Self-Supervised Anomaly Detection by Self-Distillation and Negative Sampling},
  author={Nima Rafiee and Rahil Gholamipoorfard and Nikolas Adaloglou and Simon Jaxy and Julius Ramakers and Markus Kollmann},
  booktitle={International Conference on Artificial Neural Networks},
}
Detecting whether examples belong to a given in-distribution or are Out-Of-Distribution (OOD) requires identifying features specific to the in-distribution. In the absence of labels, these features can be learned by self-supervised techniques under the generic assumption that the most abstract features are those which are statistically most over-represented in comparison to other distributions from the same domain. In this work, we show that self-distillation of the in-distribution training set…
1 Citation

Self-Supervised Anomaly Detection: A Survey and Outlook

This paper reviews current approaches to self-supervised anomaly detection, presenting technical details of the common methods, discussing their strengths and drawbacks, and comparing their performance against each other and against other state-of-the-art anomaly detection models.



Self-Supervised Learning for Generalizable Out-of-Distribution Detection

This work proposes a new technique that relies on self-supervision to learn generalizable out-of-distribution (OOD) features and reject OOD samples at inference time; it requires no prior knowledge of the targeted OOD distribution and incurs no extra overhead compared to other methods.

Contrastive Training for Improved Out-of-Distribution Detection

This paper proposes and investigates the use of contrastive training to boost OOD detection performance, and introduces the Confusion Log Probability (CLP) score, which quantifies the difficulty of an OOD detection task by capturing the similarity of the inlier and outlier datasets.

Exploring the Limits of Out-of-Distribution Detection

It is demonstrated that large-scale pre-trained transformers can significantly improve the state-of-the-art (SOTA) on a range of near OOD tasks across different data modalities, and a new way of using just the names of outlier classes as a sole source of information without any accompanying images is explored.

Likelihood Ratios for Out-of-Distribution Detection

This work investigates deep generative model-based approaches for OOD detection, observes that the likelihood score is heavily affected by population-level background statistics, and proposes a likelihood-ratio method for deep generative models that effectively corrects for these confounding background statistics.
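The likelihood-ratio idea can be made concrete with a toy sketch: score an input by its log-likelihood under a model fitted to in-distribution data minus its log-likelihood under a background model fitted to perturbed data, so that shared population-level statistics cancel in the ratio. The 1-D Gaussians below are illustrative stand-ins for the paper's deep generative models; all function names are hypothetical.

```python
import numpy as np

def gaussian_loglik(x, data):
    """Log-likelihood of x under a 1-D Gaussian fitted to `data`."""
    mu, sigma = data.mean(), data.std()
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

def likelihood_ratio_score(x, in_dist_data, background_data):
    """LLR(x) = log p_full(x) - log p_background(x).

    The background model (here a Gaussian fitted to perturbed data)
    absorbs population-level statistics, so the ratio highlights
    structure specific to the in-distribution. Toy stand-in for the
    deep generative models used in the paper.
    """
    return gaussian_loglik(x, in_dist_data) - gaussian_loglik(x, background_data)
```

Inputs far from the in-distribution receive a lower ratio score even when both models assign them low raw likelihood.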

Masked Contrastive Learning for Anomaly Detection

A task-specific variant of contrastive learning, named masked contrastive learning, is proposed that is better suited to anomaly detection, along with a new inference method, dubbed self-ensemble inference, that further boosts performance by leveraging the ability learned through auxiliary self-supervision tasks.

SSD: A Unified Framework for Self-Supervised Outlier Detection

SSD, an outlier detector based only on unlabeled training data, is proposed; it uses self-supervised representation learning followed by Mahalanobis-distance-based detection in the feature space, and outperforms existing detectors based on unlabeled data by a large margin.
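The Mahalanobis-distance detection step can be sketched in a few lines: fit a mean and covariance on in-distribution features and score each test feature by its Mahalanobis distance, with larger distances flagged as outliers. This is an illustrative sketch under simplifying assumptions (a single Gaussian over precomputed features), not the authors' exact implementation.

```python
import numpy as np

def mahalanobis_ood_score(train_feats, test_feats):
    """Score test features by Mahalanobis distance to the training
    feature distribution; a higher score suggests an outlier.
    Illustrative sketch of the detection step, assuming features
    were already extracted by a self-supervised encoder.
    """
    mu = train_feats.mean(axis=0)
    cov = np.cov(train_feats, rowvar=False)
    # Regularize the covariance for numerical stability before inverting.
    prec = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
    diff = test_feats - mu
    # Quadratic form diff^T * precision * diff, per test sample.
    return np.einsum("ij,jk,ik->i", diff, prec, diff)
```

Thresholding this score (or ranking by it) yields the outlier decision; in SSD the features come from a contrastively trained network rather than raw inputs.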

Learning Confidence for Out-of-Distribution Detection in Neural Networks

This work proposes a method of learning confidence estimates for neural networks that is simple to implement and produces intuitively interpretable outputs, and addresses the problem of calibrating out-of-distribution detectors.

Out-of-Distribution Detection Using an Ensemble of Self Supervised Leave-out Classifiers

A novel margin-based loss over the softmax output is proposed, which seeks to maintain at least a margin m between the average entropy of the OOD and in-distribution samples, along with a novel method to combine the outputs of the ensemble of classifiers to obtain an OOD detection score and class prediction.
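The margin objective described above can be sketched as a hinge penalty that vanishes once the average softmax entropy on OOD samples exceeds the average entropy on in-distribution samples by the margin m. A minimal NumPy sketch; the helper names and the value of m are illustrative, not the paper's exact formulation.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax with max subtraction for numerical stability."""
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def entropy(p):
    """Shannon entropy of each row of a probability matrix."""
    return -(p * np.log(p + 1e-12)).sum(axis=1)

def margin_entropy_loss(in_logits, ood_logits, m=0.4):
    """Hinge-style penalty that is zero once the average OOD entropy
    exceeds the average in-distribution entropy by at least margin m.
    Sketch of the margin objective described above; m=0.4 is an
    arbitrary illustrative choice.
    """
    gap = entropy(softmax(ood_logits)).mean() - entropy(softmax(in_logits)).mean()
    return max(0.0, m - gap)
```

Confident (low-entropy) predictions on in-distribution samples and uncertain (high-entropy) predictions on OOD samples drive the loss to zero, which is the behavior the detector then exploits at test time.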

Shifting Transformation Learning for Out-of-Distribution Detection

A simple mechanism for automatically selecting the transformations and modulating their effect on representation learning without requiring any OOD training samples is proposed and outperforms state-of-the-art OOD detection models on several image datasets.

CSI: Novelty Detection via Contrastive Learning on Distributionally Shifted Instances

A simple yet effective method named contrasting shifted instances (CSI) is proposed: inspired by the recent success of contrastive learning of visual representations, it contrasts a given sample not only with other instances, as in conventional contrastive learning, but also with distributionally-shifted augmentations of itself.