Out-of-distribution Detection and Generation using Soft Brownian Offset Sampling and Autoencoders

@article{Mller2021OutofdistributionDA,
  title={Out-of-distribution Detection and Generation using Soft Brownian Offset Sampling and Autoencoders},
  author={Felix M{\"o}ller and Diego Botache and Denis Huseljic and Florian Heidecker and Maarten Bieshaar and Bernhard Sick},
  journal={2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)},
  year={2021},
  pages={46-55}
}
  • Felix Möller, Diego Botache, Denis Huseljic, Florian Heidecker, Maarten Bieshaar, Bernhard Sick
  • Published 4 May 2021
  • Computer Science
  • 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
Deep neural networks often suffer from overconfidence, which can be partly remedied by improved out-of-distribution detection. For this purpose, we propose a novel approach that allows for the generation of out-of-distribution datasets based on a given in-distribution dataset. This new dataset can then be used to improve out-of-distribution detection for the given dataset and machine learning task at hand. The samples in this dataset are, with respect to the feature space, close to the in… 
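
As a minimal, illustrative sketch of the generation idea (not the authors' implementation): encode the in-distribution data with a trained autoencoder, random-walk a latent code away from the data until a minimum distance is reached, then decode. Function and parameter names here are ours, and the paper's softness parameter is reduced to a hard threshold.

import numpy as np

def soft_brownian_offset(Z, d_min=0.5, step=0.05, rng=None):
    # Z: (n, k) array of latent codes from the autoencoder's encoder.
    if rng is None:
        rng = np.random.default_rng()
    z = Z[rng.integers(len(Z))].copy()  # start at a random in-distribution code
    # Random walk until z is at least d_min away from every in-distribution
    # code; the paper additionally softens this hard stopping rule.
    while np.min(np.linalg.norm(Z - z, axis=1)) < d_min:
        direction = rng.standard_normal(Z.shape[1])
        z += step * direction / np.linalg.norm(direction)  # unit-length random step
    return z  # decode z with the autoencoder to obtain an OOD sample

Decoding the returned code yields a sample that stays near the in-distribution data in feature space yet is measurably outside it.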

Citations

Neural Mean Discrepancy for Efficient Out-of-Distribution Detection
TLDR: Proposes a novel metric called Neural Mean Discrepancy (NMD), which compares the neural means of input examples and training data; NMD outperforms state-of-the-art OOD approaches across multiple datasets and model architectures in both detection accuracy and computational cost.
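
A rough sketch of the NMD idea, assuming a CNN with BatchNorm layers whose running means stand in for the training-data statistics; the hook-based extraction and all names are ours, not the paper's code:

import torch

def nmd_features(model, x, bn_layers):
    # Compare channel-wise activation means of x against each BatchNorm
    # layer's running means (a proxy for the training-data means).
    diffs, handles = [], []
    def make_hook(bn):
        def hook(module, inputs, output):
            act_mean = inputs[0].mean(dim=(0, 2, 3))   # per-channel mean over batch and space
            diffs.append(act_mean - bn.running_mean)   # discrepancy to training statistics
        return hook
    for bn in bn_layers:
        handles.append(bn.register_forward_hook(make_hook(bn)))
    with torch.no_grad():
        model(x)
    for h in handles:
        h.remove()
    # The concatenated discrepancies form the feature vector for a small
    # downstream OOD detector (e.g. logistic regression).
    return torch.cat(diffs)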
Description of Corner Cases in Automated Driving: Goals and Challenges
TLDR: Gives a brief overview of the challenges and goals of a machine-interpretable description of corner cases, an essential part of the data for developing automated driving systems.

References

Showing 1-10 of 47 references
Out-of-distribution Detection in Classifiers via Generation
TLDR: Proposes a novel algorithm that generates out-of-distribution samples using a manifold learning network and then trains an (n+1)-class classifier for OOD detection, where the (n+1)-th class represents the OOD samples.
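
The (n+1)-class training objective can be sketched as follows (illustrative names, not the paper's code; the classifier's head must emit n+1 logits):

import torch
import torch.nn.functional as F

def n_plus_one_loss(logits_in, labels_in, logits_ood, n_classes):
    # In-distribution samples keep their labels 0..n_classes-1; every
    # generated OOD sample is assigned the extra class index n_classes.
    ood_labels = torch.full((logits_ood.size(0),), n_classes, dtype=torch.long)
    return F.cross_entropy(logits_in, labels_in) + F.cross_entropy(logits_ood, ood_labels)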
Likelihood Ratios for Out-of-Distribution Detection
TLDR: Investigates deep generative model based approaches for OOD detection, observes that the likelihood score is heavily affected by population-level background statistics, and proposes a likelihood-ratio method for deep generative models that effectively corrects for these confounding statistics.
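
A hedged sketch of the resulting score, assuming both generative models expose a log_prob interface (an assumed interface, not a specific library API):

import torch

def llr_score(model_full, model_bg, x):
    # model_full: generative model trained on the raw inputs.
    # model_bg:   "background" model trained on perturbed inputs, so it
    #             captures population-level background statistics only.
    with torch.no_grad():
        return model_full.log_prob(x) - model_bg.log_prob(x)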
Robust Out-of-distribution Detection in Neural Networks
TLDR: Proposes an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples, substantially improving the robustness of state-of-the-art OOD detection.
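
A sketch in the spirit of this objective, assuming a PGD-style attack helper you supply yourself (hypothetical helper and names, not the paper's exact formulation):

import torch.nn.functional as F

def robust_oe_loss(model, x_in, y_in, x_out, attack, lam=0.5):
    x_in_adv = attack(model, x_in, y_in)    # worst-case perturbed inliers
    x_out_adv = attack(model, x_out, None)  # worst-case perturbed outliers
    ce = F.cross_entropy(model(x_in_adv), y_in)
    log_p = F.log_softmax(model(x_out_adv), dim=1)
    to_uniform = -log_p.mean()  # cross-entropy to uniform (up to a constant)
    return ce + lam * to_uniform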
Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples
TLDR: Proposes a novel training method for classifiers so that out-of-distribution detection algorithms can work better, and demonstrates its effectiveness with deep convolutional neural networks on various popular image datasets.
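
The calibration term can be sketched as cross-entropy on in-distribution data plus a KL term pulling OOD predictions toward the uniform distribution (a sketch of the general recipe, not the paper's exact loss):

import torch
import torch.nn.functional as F

def confidence_loss(logits_in, labels_in, logits_ood, beta=1.0):
    # KL(U || p(y|x_ood)) pushes predictions on OOD inputs toward uniform.
    n_classes = logits_in.size(1)
    uniform = torch.full_like(logits_ood, 1.0 / n_classes)
    kl = F.kl_div(F.log_softmax(logits_ood, dim=1), uniform, reduction="batchmean")
    return F.cross_entropy(logits_in, labels_in) + beta * kl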
Building robust classifiers through generation of confident out of distribution examples
TLDR: Introduces an alternative GAN-based approach to building a robust classifier: the GAN explicitly generates out-of-distribution samples on which the classifier is confident (low entropy), and the classifier is then trained to maximize its entropy on exactly these samples.
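
A sketch of the generator side of that game, with illustrative names: the adversarial term fools the discriminator while the entropy term drives generation toward samples the current classifier is confident on.

import torch
import torch.nn.functional as F

def confident_ood_generator_loss(d_fake, clf_logits_fake):
    # d_fake: discriminator outputs in (0, 1) for generated samples.
    # clf_logits_fake: the current classifier's logits on the same samples.
    adv = F.binary_cross_entropy(d_fake, torch.ones_like(d_fake))  # fool D
    log_p = F.log_softmax(clf_logits_fake, dim=1)
    entropy = -(log_p.exp() * log_p).sum(dim=1).mean()
    # Minimizing the entropy term yields samples the classifier is
    # (wrongly) confident on; the classifier is separately trained to
    # maximize its entropy on them.
    return adv + entropy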
Reverse KL-Divergence Training of Prior Networks: Improved Uncertainty and Adversarial Robustness
TLDR: Investigates using Prior Networks to detect adversarial attacks, proposes a generalized form of adversarial training, and shows that the appropriate training criterion for Prior Networks is the reverse KL-divergence between Dirichlet distributions.
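
Assuming "reverse" means KL from the predicted Dirichlet to the target Dirichlet, the criterion has a one-line form in PyTorch (a sketch; the paper's parameterization and targets differ in detail):

import torch
from torch.distributions import Dirichlet, kl_divergence

def reverse_kl_loss(alpha_pred, alpha_target):
    # alpha_*: positive concentration parameters of shape (batch, n_classes),
    # e.g. alpha_pred = logits.exp(). Reverse direction: KL(model || target).
    return kl_divergence(Dirichlet(alpha_pred), Dirichlet(alpha_target)).mean()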
Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks
TLDR: Proposes ODIN, which builds on the observation that temperature scaling and small input perturbations separate the softmax score distributions of in- and out-of-distribution images, allowing for more effective detection; it consistently outperforms the baseline approach by a large margin.
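
A minimal ODIN-style scoring sketch; the temperature and perturbation magnitude below are commonly used values, not ones taken from this page:

import torch
import torch.nn.functional as F

def odin_score(model, x, temperature=1000.0, epsilon=0.0014):
    x = x.clone().requires_grad_(True)
    logits = model(x) / temperature
    # Gradient step on the input that increases the max softmax score.
    loss = F.cross_entropy(logits, logits.argmax(dim=1))
    loss.backward()
    x_pert = x - epsilon * x.grad.sign()
    with torch.no_grad():
        score = F.softmax(model(x_pert) / temperature, dim=1).max(dim=1).values
    return score  # inputs scoring below a validation-chosen threshold are flagged as OOD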
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
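
One alternating update of the two-player game can be sketched as follows (G and D are any generator/discriminator modules with a sigmoid-output D; the generator uses the common non-saturating heuristic):

import torch
import torch.nn.functional as F

def gan_step(G, D, x_real, opt_g, opt_d, z_dim=100):
    b = x_real.size(0)
    z = torch.randn(b, z_dim)
    ones, zeros = torch.ones(b, 1), torch.zeros(b, 1)
    # D maximizes log D(x) + log(1 - D(G(z))).
    d_loss = F.binary_cross_entropy(D(x_real), ones) + \
             F.binary_cross_entropy(D(G(z).detach()), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # G maximizes log D(G(z)) instead of minimizing log(1 - D(G(z))).
    g_loss = F.binary_cross_entropy(D(G(z)), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()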
Adam: A Method for Stochastic Optimization
TLDR: Introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results in the online convex optimization framework.
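
The update rule itself is compact; a plain NumPy sketch of one Adam step (standard form from the paper, illustrative function name):

import numpy as np

def adam_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # t is the 1-based step count; m and v are running moment estimates.
    m = beta1 * m + (1 - beta1) * grad       # biased first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2  # biased second-moment estimate
    m_hat = m / (1 - beta1 ** t)             # bias-corrected estimates
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v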
Auto-Encoding Variational Bayes
TLDR: Introduces a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case.
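
The two key ingredients, the reparameterization trick and the closed-form KL term of the ELBO, can be sketched as follows (standard formulation, assuming a Bernoulli decoder and a diagonal Gaussian encoder):

import torch
import torch.nn.functional as F

def reparameterize(mu, logvar):
    # z = mu + sigma * eps keeps sampling differentiable w.r.t. mu and logvar.
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

def negative_elbo(x, x_recon_logits, mu, logvar):
    # Reconstruction term for a Bernoulli decoder ...
    recon = F.binary_cross_entropy_with_logits(x_recon_logits, x, reduction="sum")
    # ... plus the closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl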