Corpus ID: 166227796

Hyperparameter-Free Out-of-Distribution Detection Using Softmax of Scaled Cosine Similarity

@article{Techapanurak2019HyperparameterFreeOD,
  title={Hyperparameter-Free Out-of-Distribution Detection Using Softmax of Scaled Cosine Similarity},
  author={Engkarat Techapanurak and Takayuki Okatani},
  journal={ArXiv},
  year={2019},
  volume={abs/1905.10628}
}
The ability to detect out-of-distribution (OOD) samples is vital to secure the reliability of deep neural networks in real-world applications. Considering the nature of OOD samples, detection methods should not have hyperparameters that need to be tuned depending on incoming OOD samples. However, most of the recently proposed methods do not meet this requirement, leading to compromised performance in real-world applications. In this paper, we propose a simple, hyperparameter-free method based…
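The title names the core mechanism: replace the final layer's dot-product logits with cosine similarities between the feature vector and the class weight vectors, multiplied by a scale that is learned rather than hand-tuned, before the softmax. Below is a minimal PyTorch sketch of that idea; the class name, the scale initialization, and the use of maximum softmax probability as the detection score are illustrative assumptions, not the authors' exact implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineClassifier(nn.Module):
    # Final layer producing logits as scaled cosine similarities.
    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        # Learnable scale; replaces a hand-tuned temperature hyperparameter.
        self.scale = nn.Parameter(torch.tensor(10.0))  # init value assumed

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Cosine similarity = dot product of L2-normalized vectors.
        f = F.normalize(features, dim=1)
        w = F.normalize(self.weight, dim=1)
        return self.scale * (f @ w.t())  # train with standard cross-entropy

def ood_score(logits: torch.Tensor) -> torch.Tensor:
    # One common detection score: maximum softmax probability
    # (higher = more likely in-distribution).
    return torch.softmax(logits, dim=1).max(dim=1).values

Because the scale is a trained parameter, nothing in the detector needs to be tuned against a particular OOD set, which is the paper's central requirement.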
Citations

Neural Networks Out-of-Distribution Detection: Hyperparameter-Free Isotropic Maximization Loss, The Principle of Maximum Entropy, Cold Training, and Branched Inferences
This paper modifies the original IsoMax loss to improve OOD detection performance while maintaining benefits such as high classification accuracy, fast and energy-efficient inference, and scalability, and replaces the global hyperparameter with learnable parameters to increase performance.
Entropic Out-of-Distribution Detection
IsoMax loss works as a seamless SoftMax-loss drop-in replacement that significantly improves neural networks' OOD detection performance, and may be used as a baseline to be combined with current or future out-of-distribution detection techniques to achieve even higher results.
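The IsoMax papers cited here replace dot-product logits with negative Euclidean distances to learnable class prototypes and score OOD-ness by the entropy of the resulting softmax. A sketch under those assumptions, in the same PyTorch style as above; the fixed "entropic scale" value is assumed for illustration, and its exact handling differs across the versions of the method.

import torch
import torch.nn as nn

class IsotropicLogits(nn.Module):
    # Logits as negative distances to learnable class prototypes.
    def __init__(self, feat_dim: int, num_classes: int, entropic_scale: float = 10.0):
        super().__init__()
        self.prototypes = nn.Parameter(torch.zeros(num_classes, feat_dim))
        self.entropic_scale = entropic_scale  # value assumed for illustration

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        dists = torch.cdist(features, self.prototypes)  # (batch, classes)
        return -self.entropic_scale * dists  # train with standard cross-entropy

def entropic_score(logits: torch.Tensor) -> torch.Tensor:
    # Negative Shannon entropy of the softmax; higher = more in-distribution.
    p = torch.softmax(logits, dim=1)
    return (p * torch.log(p.clamp_min(1e-12))).sum(dim=1)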
Generalized ODIN: Detecting Out-of-Distribution Image Without Learning From Out-of-Distribution Data
This work builds on the popular ODIN method, proposing two strategies for freeing it from the need to tune on OOD data while improving its OOD detection performance: a decomposed confidence score and a modified input pre-processing method.
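The "decomposed confidence" strategy mentioned in this summary factors each logit into a class-evidence term divided by a shared, input-dependent term, f_i(x) = h_i(x) / g(x). A sketch of that decomposition, assuming the linear variant of h and the sigmoid-gated g described in the Generalized ODIN paper; layer sizes are illustrative.

import torch
import torch.nn as nn

class DecomposedConfidence(nn.Module):
    # Generalized-ODIN-style head: logit_i = h_i(x) / g(x).
    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.h = nn.Linear(feat_dim, num_classes)           # class evidence
        self.g = nn.Sequential(nn.Linear(feat_dim, 1),      # learned, input-
                               nn.BatchNorm1d(1), nn.Sigmoid())  # dependent scale

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.h(features) / self.g(features)  # train with cross-entropy

At test time the paper proposes scoring OOD-ness with h (or g) alone, so no OOD data is needed for tuning.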
Bridging In- and Out-of-distribution Samples for Their Better Discriminability
This paper proposes a method for OOD detection. Questioning the premise of previous studies that ID and OOD samples are distinctly separated, it considers samples lying in the intermediate region between the two…
Distinction Maximization Loss: Fast, Scalable, Turnkey, and Native Neural Networks Out-of-Distribution Detection simply by Replacing the SoftMax Loss
It is argued that neural networks' low out-of-distribution detection performance is mainly due to the anisotropy of the SoftMax loss, and an isotropic loss is built to reduce neural networks' uncertainty in a fast, scalable, turnkey, and native approach.
Anomalous Instance Detection in Deep Learning: A Survey
A taxonomy of existing techniques based on their underlying assumptions and adopted approaches is provided; the techniques in each category are discussed, along with their relative strengths and weaknesses.
Practical Evaluation of Out-of-Distribution Detection Methods for Image Classification
This paper experimentally evaluates the performance of representative OOD detection methods in three scenarios, i.e., irrelevant-input detection, novel-class detection, and domain-shift detection, on various datasets and classification tasks, and shows that differences in scenarios and datasets alter the relative performance of the methods.
Cats Are Not Fish: Deep Learning Testing Calls for Out-Of-Distribution Awareness
A large-scale empirical study is conducted, with a total of 451 experiment configurations, 42 deep neural networks, and 1.2 million test data instances, to investigate and characterize the impact of OOD-awareness on DL testing; it confirms that introducing data-distribution awareness in both the testing and enhancement phases outperforms distribution-unaware retraining by up to 21.5%.
Isotropy Maximization Loss and Entropic Score: Accurate, Fast, Efficient, Scalable, and Turnkey Neural Networks Out-of-Distribution Detection Based on The Principle of Maximum Entropy
IsoMax loss works as a seamless SoftMax-loss drop-in replacement that keeps the overall solution accurate, fast, efficient, scalable, and turnkey; the results show that this straightforward approach is competitive with state-of-the-art solutions while avoiding previous methods' undesired drawbacks.
Detecting Out-of-Distribution Examples with Gram Matrices
For all channel pairs, if any of the computed correlation values is greater (or smaller) than the corresponding maximum (or minimum) value extracted from training data points classified as the predicted class, the extent of the deviation is noted.
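Concretely, the Gram matrix of a convolutional feature map collects all channel-pair correlations, and the detection score sums how far each entry falls outside the per-class [min, max] range recorded on training data. A simplified sketch, assuming a single layer and first-order Gram matrices; the paper aggregates normalized deviations over many layers and higher orders.

import torch

def gram_matrix(feature_map: torch.Tensor) -> torch.Tensor:
    # Channel-pair correlations of a conv feature map of shape (B, C, H, W).
    b, c, h, w = feature_map.shape
    f = feature_map.reshape(b, c, h * w)
    return f @ f.transpose(1, 2)  # (B, C, C)

def deviation(gram: torch.Tensor, mins: torch.Tensor, maxs: torch.Tensor) -> torch.Tensor:
    # mins/maxs: (C, C) bounds recorded on training data for the predicted
    # class. Larger total deviation = more likely out-of-distribution.
    below = torch.relu(mins - gram) / mins.abs().clamp_min(1e-6)
    above = torch.relu(gram - maxs) / maxs.abs().clamp_min(1e-6)
    return (below + above).sum(dim=(1, 2))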

References

Showing 1–10 of 55 references
A Less Biased Evaluation of Out-of-distribution Sample Detectors
OD-test, a three-dataset evaluation scheme, is proposed as a more reliable strategy to assess progress on the problem of out-of-distribution sample detection in deep learning; it shows that previous techniques have low accuracy and are not reliable in practice.
Likelihood Ratios for Out-of-Distribution Detection
This work investigates deep generative model based approaches for OOD detection, observes that the likelihood score is heavily affected by population-level background statistics, and proposes a likelihood-ratio method for deep generative models which effectively corrects for these confounding background statistics.
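The proposed correction is a likelihood ratio between the full generative model and a background model trained to capture population-level statistics (e.g., on perturbed inputs), so the confounding background term cancels. As a formula-level sketch:

import torch

def likelihood_ratio_score(log_p_full: torch.Tensor,
                           log_p_background: torch.Tensor) -> torch.Tensor:
    # LLR(x) = log p_model(x) - log p_background(x);
    # higher = more in-distribution semantic content.
    return log_p_full - log_p_background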
Unsupervised Out-of-Distribution Detection by Maximum Classifier Discrepancy
  • Qing Yu, K. Aizawa
  • Computer Science
  • 2019 IEEE/CVF International Conference on Computer Vision (ICCV)
  • 2019
A two-head deep convolutional neural network is proposed, maximizing the discrepancy between the two classifiers to detect OOD inputs; it significantly outperforms other state-of-the-art methods on several OOD detection benchmarks and two cases of real-world simulation.
Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples
A novel training method for classifiers is proposed so that such inference algorithms can work better; its effectiveness is demonstrated using deep convolutional neural networks on various popular image datasets.
Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks
The proposed ODIN method is based on the observation that using temperature scaling and adding small perturbations to the input can separate the softmax score distributions of in- and out-of-distribution images, allowing for more effective detection; it consistently outperforms the baseline approach by a large margin.
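ODIN's two ingredients are a temperature-scaled softmax and a small gradient-sign perturbation of the input toward higher confidence. A sketch under those assumptions; note that the temperature and perturbation magnitude (the values below are common choices from the literature, not universal constants) are exactly the kind of hyperparameters the main paper argues should not require tuning against OOD data.

import torch
import torch.nn.functional as F

def odin_score(model, x: torch.Tensor,
               temperature: float = 1000.0, epsilon: float = 0.0014) -> torch.Tensor:
    x = x.detach().clone().requires_grad_(True)
    logits = model(x) / temperature
    # Gradient of the log of the max softmax probability w.r.t. the input.
    F.log_softmax(logits, dim=1).max(dim=1).values.sum().backward()
    x_pert = x + epsilon * x.grad.sign()  # nudge toward higher confidence
    with torch.no_grad():
        probs = torch.softmax(model(x_pert) / temperature, dim=1)
    return probs.max(dim=1).values  # higher = more in-distribution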
A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks
This paper proposes a simple yet effective method for detecting any abnormal samples that is applicable to any pre-trained softmax neural classifier; it obtains class-conditional Gaussian distributions over the (low- and upper-level) features of the deep model under Gaussian discriminant analysis.
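The score behind this method is the Mahalanobis distance to the nearest class-conditional Gaussian, fitted on training features with a tied covariance as in Gaussian discriminant analysis. A single-layer sketch; the full method combines scores from multiple layers and adds input pre-processing.

import torch

def fit_gaussians(features: torch.Tensor, labels: torch.Tensor, num_classes: int):
    # Class means plus a shared (tied) covariance over training features.
    means = torch.stack([features[labels == c].mean(0) for c in range(num_classes)])
    centered = features - means[labels]
    cov = centered.t() @ centered / features.shape[0]
    precision = torch.linalg.inv(cov + 1e-6 * torch.eye(cov.shape[0]))
    return means, precision

def mahalanobis_score(f: torch.Tensor, means: torch.Tensor,
                      precision: torch.Tensor) -> torch.Tensor:
    # Negative squared distance to the closest class mean; higher = more ID.
    diffs = f.unsqueeze(1) - means.unsqueeze(0)              # (B, K, D)
    d2 = torch.einsum('bkd,de,bke->bk', diffs, precision, diffs)
    return -d2.min(dim=1).values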
Out-of-Distribution Detection using Multiple Semantic Label Representations
This work proposes to use multiple dense semantic representations instead of a sparse representation as the target label for out-of-distribution detection in neural networks; the proposed model is evaluated on computer vision and speech-command detection tasks and compared to previous methods.
Out-of-Distribution Detection Using an Ensemble of Self Supervised Leave-out Classifiers
A novel margin-based loss over the softmax output is proposed, which seeks to maintain a margin of at least m between the average entropies of OOD and in-distribution samples, together with a novel method to combine the outputs of an ensemble of classifiers to obtain an OOD detection score and class prediction.
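The margin idea here can be written directly as a loss term that pushes OOD softmax entropy above in-distribution entropy by at least m. A sketch of that term alone, assuming the standard Shannon entropy and an illustrative margin value; the paper's full objective and ensemble combination are not reproduced.

import torch
import torch.nn.functional as F

def margin_entropy_loss(id_logits: torch.Tensor, ood_logits: torch.Tensor,
                        margin: float = 0.4) -> torch.Tensor:
    # Penalize batches where mean OOD entropy does not exceed mean
    # in-distribution entropy by at least `margin` (value assumed).
    def mean_entropy(logits):
        p = torch.softmax(logits, dim=1)
        return -(p * torch.log(p.clamp_min(1e-12))).sum(dim=1).mean()
    return F.relu(margin + mean_entropy(id_logits) - mean_entropy(ood_logits))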
NormFace: L2 Hypersphere Embedding for Face Verification
This work identifies and studies four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings, and proposes two strategies for training using normalized features.
An Analysis of Single-Layer Networks in Unsupervised Feature Learning
The results show that large numbers of hidden nodes and dense feature extraction are critical to achieving high performance; indeed, when these parameters are pushed to their limits, state-of-the-art performance is achieved on both CIFAR-10 and NORB using only a single layer of features.