Corpus ID: 239998671

A Unified Survey on Anomaly, Novelty, Open-Set, and Out-of-Distribution Detection: Solutions and Future Challenges

Authors: Mohammadreza Salehi, Hossein Mirzaei, Dan Hendrycks, Yixuan Li, Mohammad Hossein Rohban, M. Sabokrou
Machine learning models often encounter samples that diverge from the training distribution. Failure to recognize an out-of-distribution (OOD) sample, and consequently assigning it an in-distribution class label, significantly compromises the reliability of a model. The problem has gained significant attention due to its importance for safely deploying models in open-world settings. Detecting OOD samples is challenging due to the intractability of modeling all possible unknown distributions. To… 
1 Citation
Neural Mean Discrepancy for Efficient Out-of-Distribution Detection
A novel metric called Neural Mean Discrepancy (NMD) is proposed, which compares the neural means of input examples and training data; it outperforms state-of-the-art OOD approaches across multiple datasets and model architectures in terms of both detection accuracy and computational cost.
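The mean-comparison idea behind NMD can be illustrated with a minimal numpy sketch. This is not the paper's implementation; the function name, feature shapes, and thresholding are illustrative assumptions, and a real detector would use activation statistics drawn from a trained network.

```python
import numpy as np

def neural_mean_discrepancy(feats, train_means):
    """Score an input by the gap between its per-channel activation
    means and the channel means observed on training data.
    Larger scores suggest the input is out-of-distribution."""
    # feats: (H, W, C) activation map for one input (illustrative shape)
    input_means = feats.mean(axis=(0, 1))  # per-channel means
    return float(np.abs(input_means - train_means).sum())

# Toy usage: training channel means near zero, OOD input shifted.
rng = np.random.default_rng(0)
train_means = np.zeros(4)
in_dist = rng.normal(0.0, 1.0, size=(8, 8, 4))
ood = rng.normal(3.0, 1.0, size=(8, 8, 4))
assert neural_mean_discrepancy(ood, train_means) > \
       neural_mean_discrepancy(in_dist, train_means)
```

In this toy setting, the shifted input's channel means sit far from the training means, so its discrepancy score is larger and a simple threshold would flag it.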


Anomalous Instance Detection in Deep Learning: A Survey
A taxonomy of existing techniques is provided based on their underlying assumptions and adopted approaches; the techniques in each category are discussed, along with the relative strengths and weaknesses of the approaches.
Deep Learning for Anomaly Detection: A Survey
This survey presents a structured and comprehensive overview of research methods in deep learning-based anomaly detection, grouping state-of-the-art deep anomaly detection techniques into categories based on their underlying assumptions and adopted approaches.
Self-Supervised Learning for Generalizable Out-of-Distribution Detection
This work proposes a new technique that relies on self-supervision to learn generalizable out-of-distribution (OOD) features and reject OOD samples at inference time; it does not require prior knowledge of the distribution of targeted OOD samples and incurs no extra overhead compared to other methods.
Provably Robust Detection of Out-of-distribution Data (almost) for free
From first principles, this paper proposes combining a certifiable OOD detector with a standard classifier into an OOD-aware classifier that provably avoids the asymptotic overconfidence problem of standard neural networks.
On the Impact of Spurious Correlation for Out-of-distribution Detection
A new formalization is presented that models data shifts by taking into account both invariant and environmental features, showing that detection performance is severely worsened when the correlation between spurious features and labels is increased in the training set.
Deep Semi-Supervised Anomaly Detection
This work presents Deep SAD, an end-to-end deep methodology for general semi-supervised anomaly detection, and introduces an information-theoretic framework for deep anomaly detection based on the idea that the entropy of the latent distribution for normal data should be lower than the entropy of the anomalous distribution, which can serve as a theoretical interpretation for the method.
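The semi-supervised objective behind this idea can be sketched in a few lines of numpy: unlabeled points and labeled normals are pulled toward a center in latent space, while labeled anomalies are pushed away via an inverse-distance term, keeping the anomalous latent distribution spread out. This is a simplified sketch in the spirit of the Deep SAD loss, not the paper's implementation; the variable names, toy latents, and `eta` weight are illustrative assumptions.

```python
import numpy as np

def deep_sad_loss(z_unlabeled, z_labeled, y_labeled, c, eta=1.0):
    """Semi-supervised one-class loss sketch: pull unlabeled points and
    labeled normals (y=+1) toward center c; push labeled anomalies
    (y=-1) away via an inverse squared-distance penalty."""
    d_u = ((z_unlabeled - c) ** 2).sum(axis=1)
    d_l = ((z_labeled - c) ** 2).sum(axis=1)
    eps = 1e-6  # avoid division by zero for anomalies at the center
    labeled_term = np.where(y_labeled == 1, d_l, 1.0 / (d_l + eps))
    return float(d_u.mean() + eta * labeled_term.mean())

# Toy latents: unlabeled normals near the center, one labeled anomaly.
c = np.zeros(2)
z_u = np.array([[0.1, 0.0], [0.0, 0.2]])
z_near = np.array([[0.5, 0.0]])  # anomaly still close to the center
z_far = np.array([[5.0, 0.0]])   # anomaly pushed far from the center
y = np.array([-1])
assert deep_sad_loss(z_u, z_far, y, c) < deep_sad_loss(z_u, z_near, y, c)
```

The assertion reflects the training signal: the loss is lower when the labeled anomaly's latent code lies far from the normal-data center.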
Robust Anomaly Detection and Backdoor Attack Detection Via Differential Privacy
It is demonstrated that applying differential privacy can improve the utility of outlier detection and novelty detection, with an extension to detect poisoning samples in backdoor attacks.
Understanding the Effect of Bias in Deep Anomaly Detection
The first finite-sample rates for estimating the relative scoring bias in deep anomaly detection are established, and the theoretical results are empirically validated on both synthetic and real-world datasets.
Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples
A novel training method for classifiers is proposed so that inference algorithms for detecting out-of-distribution samples can work better, and its effectiveness is demonstrated using deep convolutional neural networks on various popular image datasets.
Deep Anomaly Detection with Outlier Exposure
In extensive experiments on natural language processing and small- and large-scale vision tasks, Outlier Exposure is found to significantly improve detection performance; it is also found that cutting-edge generative models trained on CIFAR-10 may assign higher likelihoods to SVHN images than to CIFAR-10 images, and OE is used to mitigate this issue.
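The core of Outlier Exposure is a composite objective: the usual cross-entropy on in-distribution data, plus a term that pushes the classifier's predictions on auxiliary outliers toward the uniform distribution. The numpy sketch below illustrates that objective under stated assumptions: the `lam` weight, logit values, and helper names are illustrative, not taken from the paper.

```python
import numpy as np

def log_softmax(logits):
    """Numerically stable log-softmax over the class axis."""
    logits = logits - logits.max(axis=1, keepdims=True)
    return logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

def outlier_exposure_loss(in_logits, in_labels, out_logits, lam=0.5):
    """Cross-entropy on in-distribution samples plus a term equal to
    the cross-entropy between the uniform distribution and the model's
    predictions on outliers (the OE idea; lam is an illustrative weight)."""
    ls_in = log_softmax(in_logits)
    ce = -ls_in[np.arange(len(in_labels)), in_labels].mean()
    # H(uniform, p) = -(1/k) * sum_i log p_i, averaged over outliers
    uniform_term = -(log_softmax(out_logits).mean(axis=1)).mean()
    return float(ce + lam * uniform_term)

# Toy usage: confident correct in-distribution logits, two outlier batches.
in_logits = np.array([[4.0, 0.0, 0.0, 0.0], [0.0, 4.0, 0.0, 0.0]])
in_labels = np.array([0, 1])
out_uniform = np.zeros((2, 4))  # already near-uniform predictions
out_peaked = np.array([[10.0, 0.0, 0.0, 0.0], [0.0, 10.0, 0.0, 0.0]])
assert outlier_exposure_loss(in_logits, in_labels, out_uniform) < \
       outlier_exposure_loss(in_logits, in_labels, out_peaked)
```

As the assertion shows, overconfident predictions on outliers are penalized more than near-uniform ones, which is the behavior OE trains for.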