Corpus ID: 233307460

What is Wrong with One-Class Anomaly Detection?

@article{Park2021WhatIW,
  title={What is Wrong with One-Class Anomaly Detection?},
  author={Junekyu Park and Jeong-Hyeon Moon and Namhyuk Ahn and Kyung-ah Sohn},
  journal={ArXiv},
  year={2021},
  volume={abs/2104.09793}
}
From a safety perspective, a machine learning method embedded in real-world applications is required to distinguish irregular situations. For this reason, there has been a growing interest in the anomaly detection (AD) task. Since we cannot observe abnormal samples in most cases, recent AD methods formulate it as the task of classifying whether a sample is normal or not. However, they potentially fail when the given normal samples are inherited from diverse semantic labels…
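As a rough illustration of that one-class formulation (a minimal sketch, not code from the paper; the reconstruction-based detector, architecture, and threshold are all assumed here), a model can be fit on normal data only and used to flag poorly modeled samples:

```python
import torch
import torch.nn as nn

# Illustrative reconstruction-based one-class detector: train on normal
# samples only, then score test samples by reconstruction error.
ae = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 784))
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)

def train_step(x_normal):                       # x_normal: (batch, 784)
    loss = ((ae(x_normal) - x_normal) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

@torch.no_grad()
def is_anomaly(x, threshold):
    # Flag samples the model reconstructs poorly; the threshold is
    # typically chosen on held-out normal data.
    return ((ae(x) - x) ** 2).mean(dim=1) > threshold
```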

References

Showing 1-10 of 29 references.

Robust, Deep and Inductive Anomaly Detection

TLDR: This paper addresses both issues in a single model, the robust autoencoder, which learns a nonlinear subspace that captures the majority of data points while allowing some data to have arbitrary corruption.
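The alternating scheme below is only a sketch of this family of robust-autoencoder objectives, assuming an L1-penalized sparse component S that absorbs gross corruption; the paper's exact loss and penalty differ in detail:

```python
import torch

def soft_threshold(r, lam):
    # Proximal operator of lam * ||.||_1: elementwise shrinkage.
    return torch.sign(r) * torch.clamp(r.abs() - lam, min=0.0)

def robust_ae_step(ae, opt, X, S, lam=0.1):
    # Alternate between (1) fitting the autoencoder to the "clean" part
    # X - S and (2) absorbing large residuals into the sparse component S.
    loss = ((X - S - ae(X - S)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        S = soft_threshold(X - ae(X - S), lam)
    return S, loss.item()
```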

Deep One-Class Classification

TLDR: This paper introduces a new anomaly detection method, Deep Support Vector Data Description, which is trained on an anomaly detection based objective, and shows its effectiveness on the MNIST and CIFAR-10 image benchmark datasets as well as on the detection of adversarial examples of GTSRB stop signs.
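A minimal sketch of a Deep-SVDD-style objective, assuming an encoder phi and a fixed center c (in the paper, c is set from an initial forward pass and bias terms are avoided to prevent a collapsed solution); the function names here are illustrative:

```python
import torch

def deep_svdd_loss(phi, x, c):
    # Pull embeddings toward the fixed center c; minimizing this shrinks
    # the hypersphere enclosing the normal data.
    return ((phi(x) - c) ** 2).sum(dim=1).mean()

@torch.no_grad()
def svdd_score(phi, x, c):
    # Anomaly score: squared distance to the center (larger = more anomalous).
    return ((phi(x) - c) ** 2).sum(dim=1)
```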

One-Class Convolutional Neural Network

We present a novel convolutional neural network (CNN) based approach for one-class classification. The idea is to use zero-centered Gaussian noise in the latent space as the pseudo-negative class.
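A hedged sketch of that pseudo-negative idea; the feature dimension, noise scale sigma, and linear classifier head are illustrative assumptions rather than the paper's exact configuration:

```python
import torch
import torch.nn as nn

feat_dim = 128                         # illustrative latent dimension
clf = nn.Linear(feat_dim, 1)           # binary head: real vs. pseudo-negative
bce = nn.BCEWithLogitsLoss()

def occnn_loss(features, sigma=0.1):
    # Zero-centered Gaussian noise plays the role of the negative class.
    noise = sigma * torch.randn_like(features)
    logits = torch.cat([clf(features), clf(noise)]).squeeze(1)
    labels = torch.cat([torch.ones(len(features)),
                        torch.zeros(len(features))])
    return bce(logits, labels)
```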

Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks

TLDR: The proposed ODIN method is based on the observation that temperature scaling and small input perturbations can separate the softmax score distributions of in- and out-of-distribution images, allowing for more effective detection; it consistently outperforms the baseline approach by a large margin.
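A sketch of an ODIN-style score, assuming a softmax classifier model; the temperature T and perturbation size eps shown here are illustrative values that are tuned on validation data in practice:

```python
import torch
import torch.nn.functional as F

def odin_score(model, x, T=1000.0, eps=0.0014):
    """ODIN-style out-of-distribution score: temperature-scaled max softmax
    on a slightly perturbed input; lower scores suggest OOD samples."""
    x = x.clone().requires_grad_(True)
    log_smax = F.log_softmax(model(x) / T, dim=1)
    # Perturb the input in the direction that increases the max softmax.
    loss = -log_smax.max(dim=1).values.sum()
    grad = torch.autograd.grad(loss, x)[0]
    x_pert = x - eps * grad.sign()
    with torch.no_grad():
        probs = F.softmax(model(x_pert) / T, dim=1)
    return probs.max(dim=1).values
```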

Explaining and Harnessing Adversarial Examples

TLDR: It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, supported by new quantitative results, while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets.
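The fast gradient sign method this paper introduces can be written in a few lines; the eps value and pixel-range clamp below are illustrative choices:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    """Fast gradient sign method: one step in the sign of the loss gradient,
    exploiting the locally linear behavior of the network."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()
```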

Pseudo-Label: The Simple and Efficient Semi-Supervised Learning Method for Deep Neural Networks

TLDR: A simple and efficient semi-supervised learning method for deep neural networks is proposed, in which the network is trained in a supervised fashion on labeled and unlabeled data simultaneously; the method favors a low-density separation between classes.
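A minimal sketch of the pseudo-labeling loss, assuming a classifier model and a weight alpha that the paper ramps up over training according to a schedule:

```python
import torch
import torch.nn.functional as F

def pseudo_label_loss(model, x_lab, y_lab, x_unlab, alpha):
    sup = F.cross_entropy(model(x_lab), y_lab)
    with torch.no_grad():
        pseudo = model(x_unlab).argmax(dim=1)   # the model's own hard labels
    unsup = F.cross_entropy(model(x_unlab), pseudo)
    return sup + alpha * unsup                  # alpha ramps up over training
```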

Learning De-biased Representations with Biased Representations

TLDR: This work proposes a novel framework to train a de-biased representation by encouraging it to be different from a set of representations that are biased by design, and demonstrates the efficacy of the method across a variety of synthetic and real-world biases.
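One common way to encourage such dissimilarity is an HSIC-style independence penalty between the two representations; the sketch below uses a biased empirical HSIC estimate with RBF kernels and an illustrative bandwidth sigma, which is in the spirit of, but not necessarily identical to, the paper's objective:

```python
import torch

def rbf_gram(z, sigma=1.0):
    # RBF kernel Gram matrix for a batch of embeddings z of shape (n, d).
    return torch.exp(-torch.cdist(z, z) ** 2 / (2 * sigma ** 2))

def hsic(z_main, z_biased, sigma=1.0):
    # Biased empirical HSIC, tr(KHLH) / (n-1)^2; driving this toward zero
    # pushes the main representation to be independent of the biased one.
    n = z_main.size(0)
    K, L = rbf_gram(z_main, sigma), rbf_gram(z_biased, sigma)
    H = torch.eye(n) - torch.ones(n, n) / n     # centering matrix
    return torch.trace(K @ H @ L @ H) / (n - 1) ** 2
```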

Self-labelling via simultaneous clustering and representation learning

TLDR: The proposed novel and principled learning formulation is able to self-label visual data so as to train highly competitive image representations without manual labels, and yields the first self-supervised AlexNet that outperforms the supervised Pascal VOC detection baseline.
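A sketch in the spirit of the paper's simultaneous-clustering step, assuming raw prediction logits and a few Sinkhorn-Knopp iterations that equipartition samples across clusters; the function name and iteration count are illustrative:

```python
import torch

def sinkhorn_labels(logits, n_iters=3):
    # logits: (n_samples, n_clusters) raw predictions from the network.
    Q = torch.softmax(logits, dim=1).T          # (clusters, samples)
    Q = Q / Q.sum()
    K, N = Q.shape
    for _ in range(n_iters):
        Q = Q / Q.sum(dim=1, keepdim=True) / K  # equal total mass per cluster
        Q = Q / Q.sum(dim=0, keepdim=True) / N  # one unit of mass per sample
    return (Q * N).T.argmax(dim=1)              # hard, balanced pseudo-labels
```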