PANDA: Adapting Pretrained Features for Anomaly Detection and Segmentation
@inproceedings{Reiss2021PANDAAP,
  title={PANDA: Adapting Pretrained Features for Anomaly Detection and Segmentation},
  author={Tal Reiss and Niv Cohen and Liron Bergman and Yedid Hoshen},
  booktitle={2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021},
  pages={2805-2813}
}
Anomaly detection methods require high-quality features. In recent years, the anomaly detection community has attempted to obtain better features using advances in deep self-supervised feature learning. Surprisingly, a very promising direction, using pre-trained deep features, has been mostly overlooked. In this paper, we first empirically establish the perhaps expected, but unreported result, that combining pre-trained features with simple anomaly detection and segmentation methods…
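The abstract's core claim is that pre-trained deep features combined with very simple anomaly detection methods are a strong baseline. A minimal sketch of one such simple method, scoring a test sample by its mean distance to its k nearest neighbors in the (normal-only) training features; the feature extractor itself is abstracted away here, so any pretrained embedding (e.g. a ResNet penultimate layer) could supply `train_feats` and `test_feats`:

```python
import numpy as np

def knn_anomaly_scores(train_feats, test_feats, k=2):
    """Score each test feature by its mean Euclidean distance to its
    k nearest neighbors in the training set of normal-only features.
    Higher score = more anomalous."""
    # Pairwise distances, shape (n_test, n_train)
    d = np.linalg.norm(test_feats[:, None, :] - train_feats[None, :, :], axis=-1)
    # Keep the k smallest distances per test point, average them
    knn = np.sort(d, axis=1)[:, :k]
    return knn.mean(axis=1)

# Toy "features": a tight normal cluster plus one far-away outlier
rng = np.random.default_rng(0)
train = rng.normal(0.0, 0.1, size=(100, 8))
test = np.vstack([
    rng.normal(0.0, 0.1, size=(1, 8)),  # normal-looking sample
    np.full((1, 8), 5.0),               # obvious anomaly
])
scores = knn_anomaly_scores(train, test)
print(scores[1] > scores[0])  # the outlier receives the larger score
```

This is an illustrative sketch of kNN-based scoring in feature space, not the paper's full method, which additionally adapts the pretrained features to the target distribution.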
41 Citations
Out-of-Distribution Detection without Class Labels
- Computer Science, ArXiv
- 2021
This work discovers that classifiers learned by self-supervised image clustering methods provide a strong baseline for anomaly detection on unlabeled multi-class datasets; it fine-tunes pretrained features on the task of classifying images by their cluster labels and uses the cluster labels as "pseudo supervision" for out-of-distribution methods.
A Unified Model for Multi-class Anomaly Detection
- Computer Science, ArXiv
- 2022
This work presents UniAD, a unified framework that accomplishes anomaly detection for multiple classes, and proposes a feature jittering strategy that urges the model to recover the correct message even with noisy inputs.
Unsupervised Word Segmentation using K Nearest Neighbors
- Computer Science, ArXiv
- 2022
An unsupervised kNN-based approach for word segmentation in speech utterances that relies on self-supervised pre-trained speech representations, and compares each audio segment of a given utterance to its K nearest neighbors within the training set.
No Shifted Augmentations (NSA): compact distributions for robust self-supervised Anomaly Detection
- Computer Science, ArXiv
- 2022
This work investigates how the geometrical compactness of the ID feature distribution makes isolating and detecting outliers easier, especially in the realistic situation when ID training data is polluted, and proposes novel architectural modifications to the self-supervised feature learning step, that enable such compact distributions for ID data to be learned.
Deep One-Class Classification via Interpolated Gaussian Descriptor
- Computer Science
- 2021
The interpolated Gaussian descriptor (IGD) method is introduced: a novel OCC model that learns a one-class Gaussian anomaly classifier trained with adversarially interpolated training samples. It achieves better detection accuracy than current state-of-the-art models and shows better robustness in problems with small or contaminated training sets.
Benchmarking Unsupervised Anomaly Detection and Localization
- Computer Science, ArXiv
- 2022
A comprehensive comparison of existing methods in unsupervised anomaly detection and localization tasks, which also adds a comparison of inference efficiency, previously ignored by the community, to inspire further research.
Fake It Till You Make It: Near-Distribution Novelty Detection by Score-Based Generative Models
- Computer Science, ArXiv
- 2022
Overall, the method improves near-distribution novelty detection by 6% and surpasses the state of the art by 1% to 5% across nine novelty detection benchmarks.
Self-Supervised Anomaly Detection: A Survey and Outlook
- Computer Science, ArXiv
- 2022
This paper reviews the current approaches in self-supervised anomaly detection, presenting technical details of the common approaches, discussing their strengths and drawbacks, and comparing the performance of these models against each other and against other state-of-the-art anomaly detection models.
Catching Both Gray and Black Swans: Open-set Supervised Anomaly Detection
- Computer Science, ArXiv
- 2022
This paper proposes a novel approach that learns disentangled representations of abnormalities illustrated by seen anomalies, pseudo anomalies, and latent residual anomalies (i.e., samples that have unusual residuals compared to the normal data in a latent space), with the last two abnormalities designed to detect unseen anomalies.
Anomaly Detection via Reverse Distillation from One-Class Embedding
- Computer Science, ArXiv
- 2022
This work proposes a novel T-S model consisting of a teacher encoder and a student decoder, and introduces a simple yet effective "reverse distillation" paradigm that surpasses SOTA performance, demonstrating the approach's effectiveness and generalizability.
References
Showing 1-10 of 48 references
Overcoming catastrophic forgetting in neural networks
- Computer Science, Proceedings of the National Academy of Sciences
- 2017
It is shown that it is possible to overcome this limitation of connectionist models and train networks that maintain expertise on tasks they have not experienced for a long time, by selectively slowing down learning on the weights important for previous tasks.
Learning Multiple Layers of Features from Tiny Images
- Computer Science
- 2009
It is shown how to train a multi-layer generative model that learns to extract meaningful features which resemble those found in the human visual cortex, using a novel parallelization algorithm to distribute the work among multiple machines connected on a network.
Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty
- Computer Science, NeurIPS
- 2019
This work finds that self-supervision can benefit robustness in a variety of ways, including robustness to adversarial examples, label corruption, and common input corruptions, and greatly benefits out-of-distribution detection on difficult, near-distribution outliers.
Deep One-Class Classification
- Computer Science, ICML
- 2018
This paper introduces a new anomaly detection method, Deep Support Vector Data Description, which is trained on an anomaly-detection-based objective, and shows the effectiveness of the method on the MNIST and CIFAR-10 image benchmark datasets as well as on the detection of adversarial examples of GTSRB stop signs.
Learning Deep Features for One-Class Classification
- Computer Science, IEEE Transactions on Image Processing
- 2019
A novel deep-learning-based approach for one-class transfer learning in which labeled data from an unrelated task is used for feature learning in one-class classification, achieving significant improvements over the state of the art.
Object Detection in Optical Remote Sensing Images: A Survey and A New Benchmark
- Environmental Science, Computer Science, ISPRS Journal of Photogrammetry and Remote Sensing
- 2020
Support Vector Method for Novelty Detection
- Computer Science, NIPS
- 1999
The algorithm is a natural extension of the support vector algorithm to the case of unlabelled data and is regularized by controlling the length of the weight vector in an associated feature space.
Algorithm AS 136: A K-Means Clustering Algorithm
- Journal of the Royal Statistical Society, Series C (Applied Statistics)
- 1979
Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms
- arXiv preprint arXiv:1708.07747
- 2017
Group Anomaly Detection via Graph Autoencoders
- Computer Science
- 2019
Group Anomaly Detection via Graph Autoencoders (GADGA) is introduced, harnessing recent progress in graph representation learning to detect anomalous groups of points by exploiting their graph structure, rather than their raw set representation.