# $\Delta$-UQ: Accurate Uncertainty Quantification via Anchor Marginalization

@inproceedings{Anirudh2021DeltaUQAU,
  title={{$\Delta$}-UQ: Accurate Uncertainty Quantification via Anchor Marginalization},
  author={Rushil Anirudh and Jayaraman J. Thiagarajan},
  year={2021}
}

We present Δ-UQ, a novel, general-purpose uncertainty estimator built on the concept of anchoring in predictive models. Anchoring first transforms the input into a tuple consisting of an anchor point drawn from a prior distribution and a combination of the input sample with the anchor, produced by a pretext encoding scheme. The encoding is constructed so that the original input can be perfectly recovered from the tuple, regardless of the choice of anchor. Therefore, any predictive model should…
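The anchoring idea above can be sketched in code. This is a minimal illustration, not the paper's implementation: it assumes the simplest invertible encoding, the tuple `[c, x - c]` (from which `x = c + (x - c)` is always recoverable), uses the training set itself as the anchor prior, and uses a hypothetical helper `predict_with_uncertainty` that marginalizes a single trained model over `k` random anchors, reporting the mean as the prediction and the standard deviation as the uncertainty.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy 1-D regression problem (hypothetical data, not from the paper).
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

def encode(x, c):
    # Pretext encoding as a tuple [c, x - c]: the original input x is
    # perfectly recoverable (x = c + (x - c)) for ANY choice of anchor c.
    return np.concatenate([c, x - c], axis=1)

# Train ONE model on anchored inputs, pairing each training sample with a
# random anchor drawn from the training distribution (the anchor prior).
train_anchors = X[rng.integers(0, len(X), size=len(X))]
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(encode(X, train_anchors), y)

def predict_with_uncertainty(model, x, anchor_pool, k=20, rng=rng):
    """Marginalize predictions over k random anchors: the mean is the
    point prediction, the std across anchors is the uncertainty."""
    cs = anchor_pool[rng.integers(0, len(anchor_pool), size=k)]
    preds = np.array([
        model.predict(encode(x, np.repeat(c[None, :], len(x), axis=0)))
        for c in cs
    ])
    return preds.mean(axis=0), preds.std(axis=0)

# Query one in-distribution point and one far outside the training range.
x_test = np.array([[0.5], [5.0]])
mu, sigma = predict_with_uncertainty(model, x_test, X)
```

Because the encoding is lossless, all anchored copies of an input carry the same information; any disagreement in the model's predictions across anchors is therefore attributable to the model, which is what the marginalization turns into an uncertainty estimate.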

