Corpus ID: 253224056

Nonparametric Uncertainty Quantification for Single Deterministic Neural Network

@inproceedings{Kotelevskii2022NonparametricUQ,
  title={Nonparametric Uncertainty Quantification for Single Deterministic Neural Network},
  author={Nikita Kotelevskii and Aleksandr Artemenkov and Kirill Fedyanin and Fedor Noskov and Alexander Fishkov and Artem Shelmanov and Artem Vazhentsev and Aleksandr Petiushko and Maxim Panov},
  year={2022}
}
This paper proposes a fast and scalable method for uncertainty quantification of machine learning models' predictions. First, we show a principled way to measure the uncertainty of a classifier's predictions based on the Nadaraya-Watson nonparametric estimate of the conditional label distribution. Importantly, the proposed approach allows us to explicitly disentangle aleatoric and epistemic uncertainties. The resulting method works directly in the feature space. However, one can apply it to any…
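
As a rough illustration of the idea (not the paper's exact NUQ estimator), the sketch below computes a Nadaraya-Watson estimate of the conditional label distribution in feature space with a Gaussian kernel, plus a heuristic aleatoric/epistemic split; the function names, the bandwidth, and the density-based epistemic score are assumptions of this sketch.

```python
import numpy as np
from scipy.spatial.distance import cdist

def nadaraya_watson_probs(x_query, X_train, y_train, n_classes, bandwidth=1.0):
    """Nadaraya-Watson estimate of p(y = c | x) in feature space (sketch)."""
    # Gaussian kernel weights between query and training features.
    sq_dists = cdist(x_query, X_train, metric="sqeuclidean")
    weights = np.exp(-sq_dists / (2.0 * bandwidth ** 2))    # (n_query, n_train)

    # Kernel-weighted class votes, normalized to a conditional distribution.
    onehot = np.eye(n_classes)[y_train]                     # (n_train, n_classes)
    numer = weights @ onehot                                # (n_query, n_classes)
    denom = numer.sum(axis=1, keepdims=True)
    probs = numer / np.maximum(denom, 1e-12)

    # Kernel density mass around each query; low mass means the query is far
    # from the training data (assumption: used here as the epistemic signal).
    density = denom.squeeze(-1) / len(X_train)
    return probs, density

def split_uncertainty(probs, density):
    """Heuristic split: aleatoric from class overlap, epistemic from density."""
    aleatoric = -(probs * np.log(probs + 1e-12)).sum(axis=1)  # predictive entropy
    epistemic = -np.log(density + 1e-12)                      # large when density is low
    return aleatoric, epistemic
```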

References

Showing 1-10 of 69 references

Learning Multiple Layers of Features from Tiny Images

It is shown how to train a multi-layer generative model that learns to extract meaningful features which resemble those found in the human visual cortex, using a novel parallelization algorithm to distribute the work among multiple machines connected on a network.

Uncertainty Estimation Using a Single Deep Deterministic Neural Network - ML Reproducibility Challenge 2020

An RBF network trained with BCE loss and a two-sided gradient penalty outperforms a deep ensemble on out-of-distribution (OoD) detection, while achieving accuracy competitive with softmax-based models.
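
To make the mechanism this summary refers to concrete, here is a minimal PyTorch sketch of a two-sided gradient penalty combined with per-class BCE, in the style of DUQ; the `model` interface (per-class RBF kernel similarities in [0, 1]) and the penalty weight are assumptions of this sketch, not the paper's code.

```python
import torch

def two_sided_gradient_penalty(model, x, lambda_gp=0.5):
    """Two-sided gradient penalty for a DUQ-style RBF network (sketch).

    Penalizes (||d sum_c K_c(x) / dx||_2 - 1)^2 so the feature map neither
    collapses nor explodes, which is what helps OoD detection.
    """
    x = x.clone().requires_grad_(True)
    kernel_values = model(x)              # (batch, n_classes), K_c(x) in [0, 1]
    grad, = torch.autograd.grad(kernel_values.sum(), x, create_graph=True)
    grad_norm = grad.flatten(1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1.0) ** 2).mean()

def duq_objective(kernel_values, targets_onehot, penalty):
    # Per-class binary cross-entropy on kernel similarities (targets must be
    # a float one-hot tensor), plus the gradient penalty term.
    bce = torch.nn.functional.binary_cross_entropy(kernel_values, targets_onehot)
    return bce + penalty
```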

Deterministic Neural Networks with Appropriate Inductive Biases Capture Epistemic and Aleatoric Uncertainty

It is shown that a single softmax neural net with minimal changes can beat the uncertainty predictions of Deep Ensembles and other, more complex single-forward-pass uncertainty approaches; combining a feature-space density with the softmax entropy is necessary to disentangle aleatoric and epistemic uncertainty, which is crucial, e.g., for active learning.
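
A hedged sketch of the recipe this summary describes: softmax entropy for the aleatoric part and a feature-space density for the epistemic part. The generic GMM below stands in for the paper's per-class Gaussian fit, and the function names are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_feature_density(train_features, n_components=10):
    """Density model over penultimate-layer features (sketch; the paper fits
    one Gaussian per class, a generic GMM is used here for brevity)."""
    return GaussianMixture(n_components=n_components,
                           covariance_type="full").fit(train_features)

def disentangled_uncertainty(softmax_probs, features, density_model):
    # Aleatoric: softmax entropy, high for ambiguous in-distribution inputs.
    aleatoric = -(softmax_probs * np.log(softmax_probs + 1e-12)).sum(axis=1)
    # Epistemic: negative feature-space log-density, high for unfamiliar inputs.
    epistemic = -density_model.score_samples(features)
    return aleatoric, epistemic
```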

The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization

It is found that using larger models and artificial data augmentations can improve robustness on real-world distribution shifts, contrary to claims in prior work.

Simple and Principled Uncertainty Estimation with Deterministic Deep Learning via Distance Awareness

Spectral-normalized Neural Gaussian Process (SNGP) is a simple method that improves the distance-awareness of modern DNNs by adding a weight normalization step during training and replacing the output layer with a Gaussian process; it outperforms the other single-model approaches.
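
The following sketch illustrates the two SNGP ingredients named above: spectral normalization of hidden layers plus a GP-style output layer approximated with random Fourier features. The class name, dimensions, and the omission of the paper's Laplace covariance update are assumptions of this sketch.

```python
import math
import torch
import torch.nn as nn

class RandomFeatureGPHead(nn.Module):
    """GP output layer approximated with random Fourier features (sketch;
    only the mean predictor is shown)."""
    def __init__(self, in_dim, n_random_features, n_classes):
        super().__init__()
        # Fixed random projection defining an RBF-kernel feature map.
        self.register_buffer("W", torch.randn(n_random_features, in_dim))
        self.register_buffer("b", 2 * math.pi * torch.rand(n_random_features))
        self.out = nn.Linear(n_random_features, n_classes)

    def forward(self, h):
        phi = math.sqrt(2.0 / self.W.shape[0]) * torch.cos(h @ self.W.T + self.b)
        return self.out(phi)

# Spectral normalization bounds each layer's Lipschitz constant so that
# feature-space distances track input-space distances (distance awareness).
hidden = nn.utils.spectral_norm(nn.Linear(128, 128))
model = nn.Sequential(hidden, nn.ReLU(), RandomFeatureGPHead(128, 1024, 10))
```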

Neural Network Acceptability Judgments

This paper introduces the Corpus of Linguistic Acceptability (CoLA), a set of 10,657 English sentences from the published linguistics literature labeled as grammatical or ungrammatical, trains several recurrent neural network models on acceptability classification, and finds that these models outperform the unsupervised models of Lau et al. (2016) on CoLA.

Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles

This work proposes an alternative to Bayesian NNs that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high quality predictive uncertainty estimates.
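
A minimal sketch of the deep-ensemble recipe, assuming the members are ordinary classifiers returning logits: each is trained independently from its own random initialization, and their averaged softmax outputs yield the predictive distribution whose entropy serves as the uncertainty score.

```python
import torch

@torch.no_grad()
def ensemble_predict(models, x):
    """Average the softmax outputs of independently trained networks."""
    probs = torch.stack([m(x).softmax(dim=-1) for m in models])  # (M, B, C)
    mean_probs = probs.mean(dim=0)
    # Entropy of the averaged distribution is the uncertainty score; member
    # disagreement inflates it on unfamiliar inputs.
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy
```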

Improving Deterministic Uncertainty Estimation in Deep Learning for Classification and Regression

We propose a new model that estimates uncertainty in a single forward pass and works on both classification and regression problems. Our approach combines a bi-Lipschitz feature extractor with an inducing point approximate Gaussian process.

Likelihood Ratios and Generative Classifiers for Unsupervised Out-of-Domain Detection In Task Oriented Dialog

This work is the first to investigate the use of a generative classifier and the computation of a marginal likelihood (ratio) for OOD detection at test time, and finds that this approach outperforms both simple likelihood(-ratio)-based approaches and other prior approaches.
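
As a hedged illustration of the quantities mentioned here, the sketch below computes a marginal likelihood from class-conditional log-densities and a likelihood-ratio OOD score against a background model; how those densities are modeled (e.g., Gaussians over features) is left open, and the function names are assumptions.

```python
import numpy as np
from scipy.special import logsumexp

def marginal_log_likelihood(log_px_given_c, log_prior):
    """log p(x) = logsumexp_c [ log p(x | c) + log p(c) ].

    `log_px_given_c` has shape (n_samples, n_classes); `log_prior` has
    shape (n_classes,) and broadcasts over samples.
    """
    return logsumexp(log_px_given_c + log_prior, axis=1)

def likelihood_ratio_score(log_px_indomain, log_px_background):
    # High when the in-domain generative classifier explains the query far
    # better than a background model; low scores flag out-of-domain inputs.
    return log_px_indomain - log_px_background
```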

An Evaluation Dataset for Intent Classification and Out-of-Scope Prediction

A new dataset is introduced that includes out-of-scope queries, i.e., queries that do not fall into any of the system's supported intents; this poses a new challenge because models cannot assume that every query at inference time belongs to a system-supported intent class.
...