Corpus ID: 238354318

∆-UQ: Accurate Uncertainty Quantification via Anchor Marginalization

@inproceedings{Anirudh2021DeltaUQAU,
  title={{$\Delta$}-UQ: Accurate Uncertainty Quantification via Anchor Marginalization},
  author={Rushil Anirudh and Jayaraman J. Thiagarajan},
  year={2021}
}
We present ∆-UQ, a novel, general-purpose uncertainty estimator based on the concept of anchoring in predictive models. Anchoring first transforms the input into a tuple consisting of an anchor point drawn from a prior distribution and a combination of the input sample with the anchor, produced by a pretext encoding scheme. The encoding is such that the original input can be perfectly recovered from the tuple, regardless of the choice of anchor. Therefore, any predictive model should…
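The anchoring construction described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the (anchor, residual) tuple used here is one pretext encoding that satisfies the perfect-recovery property, and `toy_model` is a hypothetical stand-in for a trained predictive model that consumes such tuples.

```python
import random
import statistics

def encode(x, c):
    # Pretext encoding: the (anchor, residual) tuple losslessly represents x,
    # since x can always be recovered as c + (x - c), for any anchor c.
    return (c, x - c)

def decode(c, r):
    # Perfect recovery of the original input from the tuple.
    return c + r

def predict(model, x, anchors):
    # Anchor marginalization: run the model on several encodings of the same
    # input and aggregate the predictions across anchors.
    preds = [model(encode(x, c)) for c in anchors]
    mu = statistics.mean(preds)
    sigma = statistics.stdev(preds)  # spread across anchors ~ uncertainty
    return mu, sigma

def toy_model(tc):
    # Hypothetical stand-in for a network trained on tuples: it depends mostly
    # on the decoded input, with a small anchor-dependent perturbation.
    c, r = tc
    return (c + r) ** 2 + 0.01 * c

anchors = [random.gauss(0.0, 1.0) for _ in range(10)]
mu, sigma = predict(toy_model, 2.0, anchors)
```

Because the decoded input is identical for every anchor, disagreement among the anchored predictions reflects the model's sensitivity to the encoding, which is read out as an uncertainty estimate.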

References

Showing 1–10 of 24 references
Can You Trust Your Model's Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift
Presents a large-scale benchmark of state-of-the-art uncertainty methods on classification problems, studying how dataset shift affects accuracy and calibration; traditional post-hoc calibration falls short under shift, as do several other previous methods.
Improving model calibration with accuracy versus uncertainty optimization
Introduces a differentiable accuracy versus uncertainty calibration (AvUC) loss function that lets a model learn well-calibrated uncertainties alongside improved accuracy, and proposes an optimization method that leverages the relationship between accuracy and uncertainty as an anchor for uncertainty calibration.
DEUP: Direct Epistemic Uncertainty Prediction
Proposes a principled approach for directly estimating epistemic uncertainty by learning to predict generalization error and subtracting an estimate of aleatoric uncertainty (intrinsic unpredictability); the approach applies in non-stationary settings such as active learning and reinforcement learning.
Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles
Proposes an alternative to Bayesian neural networks that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high-quality predictive uncertainty estimates.
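The deep-ensembles recipe summarized above can be sketched in miniature: train several models independently from different random initializations, then use the spread of their predictions as an uncertainty signal. This is an illustrative toy (a one-parameter linear "model" fit by gradient descent), not the paper's architecture or training setup.

```python
import random
import statistics

def train_member(data, seed):
    # Each ensemble member gets its own random initialization; here the
    # "model" is y = w * x, fit by a few steps of gradient descent on MSE.
    rng = random.Random(seed)
    w = rng.gauss(0.0, 1.0)
    for _ in range(200):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= 0.05 * grad
    return w

def ensemble_predict(weights, x):
    # Predictive mean and spread across independently trained members.
    preds = [w * x for w in weights]
    return statistics.mean(preds), statistics.stdev(preds)

data = [(x, 3.0 * x) for x in (0.5, 1.0, 1.5, 2.0)]
weights = [train_member(data, seed) for seed in range(5)]
mu, sigma = ensemble_predict(weights, 2.0)
```

On this noiseless in-distribution query all members converge to nearly the same solution, so the ensemble agrees (small spread); disagreement grows where the members extrapolate differently.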
Single-Model Uncertainties for Deep Learning
Proposes Simultaneous Quantile Regression (SQR), a loss function for learning all conditional quantiles of a given target variable, which can be used to compute well-calibrated prediction intervals.
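The quantile loss at the heart of SQR is the pinball loss. A minimal sketch, assuming nothing beyond the standard definition: minimizing the pinball loss at level tau over a constant recovers an empirical tau-quantile, so fitting two levels yields a prediction interval. (SQR itself trains one network over randomly sampled tau; the grid search here is only to demonstrate the loss's minimizer.)

```python
def pinball_loss(y, y_hat, tau):
    # Pinball (quantile) loss: an asymmetric penalty whose minimizer is the
    # tau-quantile of the target distribution.
    u = y - y_hat
    return max(tau * u, (tau - 1) * u)

def fit_quantile(samples, tau):
    # The constant minimizing total pinball loss is an empirical tau-quantile;
    # searching over the samples themselves suffices for this demonstration.
    return min(samples, key=lambda q: sum(pinball_loss(y, q, tau) for y in samples))

data = [1.0, 2.0, 3.0, 4.0, 5.0]
lo, hi = fit_quantile(data, 0.1), fit_quantile(data, 0.9)  # an 80% interval
```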
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
Develops a new theoretical framework casting dropout training in deep neural networks as approximate Bayesian inference in deep Gaussian processes, which mitigates the problem of representing uncertainty in deep learning without sacrificing computational complexity or test accuracy.
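The practical recipe from this line of work, often called MC dropout, is to keep dropout active at test time and aggregate several stochastic forward passes. A minimal sketch with a hypothetical one-layer ReLU "network"; the layer, weights, and dropout rate are illustrative choices, not the paper's setup.

```python
import random
import statistics

def dropout_forward(x, weights, rng, p=0.5):
    # One stochastic pass: each hidden unit is dropped with probability p;
    # kept units are rescaled by 1/(1-p) ("inverted dropout").
    hidden = [max(0.0, w * x) for w in weights]  # tiny ReLU layer
    kept = [h / (1 - p) if rng.random() > p else 0.0 for h in hidden]
    return sum(kept) / len(kept)

def mc_dropout_predict(x, weights, n_samples=100, seed=0):
    # Monte Carlo estimate: dropout stays on at test time; the sample mean
    # and standard deviation approximate predictive mean and uncertainty.
    rng = random.Random(seed)
    samples = [dropout_forward(x, weights, rng) for _ in range(n_samples)]
    return statistics.mean(samples), statistics.stdev(samples)

mu, sigma = mc_dropout_predict(1.0, weights=[0.2, 0.5, 0.8, 1.1])
```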
What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?
Presents a Bayesian deep learning framework combining input-dependent aleatoric uncertainty with epistemic uncertainty, yielding a loss that is more robust to noisy data and new state-of-the-art results on segmentation and depth-regression benchmarks.
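The input-dependent aleatoric term in this framework comes from a heteroscedastic regression loss in which the network predicts both a mean and a log-variance. A sketch of the loss and its key property; the residual value below is an arbitrary illustration.

```python
import math

def heteroscedastic_nll(y, mu, log_var):
    # Heteroscedastic regression loss: 0.5*exp(-s)*(y - mu)**2 + 0.5*s with
    # s = predicted log-variance. The model can attenuate the squared error on
    # noisy points, paying for it through the 0.5*s regularization term.
    return 0.5 * math.exp(-log_var) * (y - mu) ** 2 + 0.5 * log_var

# For a fixed residual r = y - mu, the loss is minimized at s = log(r**2),
# i.e. the predicted variance matches the squared error.
residual = 2.0
best_s = math.log(residual ** 2)
loss_at_best = heteroscedastic_nll(residual, 0.0, best_s)
```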
On Calibration of Modern Neural Networks
Discovers that modern neural networks, unlike those from a decade ago, are poorly calibrated, and that on most datasets temperature scaling, a single-parameter variant of Platt scaling, is surprisingly effective at calibrating predictions.
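Temperature scaling is simple enough to sketch in full: divide the logits by a single scalar T fitted to minimize validation negative log-likelihood. The tiny "validation set" of overconfident logits below is fabricated for illustration; real use fits T on held-out data after training.

```python
import math

def softmax(logits, T=1.0):
    # Temperature scaling divides logits by a scalar T before softmax;
    # T > 1 softens overconfident predictions without changing the argmax.
    scaled = [z / T for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    s = sum(exps)
    return [e / s for e in exps]

def nll(logits_labels, T):
    # Validation negative log-likelihood as a function of T alone.
    return -sum(math.log(softmax(z, T)[y]) for z, y in logits_labels) / len(logits_labels)

def fit_temperature(val, grid=None):
    # With a single parameter, a coarse 1-D grid search is sufficient.
    grid = grid or [0.5 + 0.1 * i for i in range(50)]
    return min(grid, key=lambda T: nll(val, T))

# Overconfident logits; the label is the index of the true class.
val = [([4.0, 0.0, 0.0], 0), ([5.0, 0.5, 0.0], 0), ([3.5, 0.2, 0.1], 1)]
T = fit_temperature(val)
calibrated = softmax([4.0, 0.0, 0.0], T)
```

Since one of the confident predictions is wrong, the fitted T exceeds 1 and the calibrated probabilities are less extreme than the raw ones.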
Model Inversion Networks for Model-Based Optimization
Proposes model inversion networks (MINs) for data-driven optimization problems; MINs learn an inverse mapping from scores to inputs, scale to high-dimensional input spaces, and can leverage offline logged data for both contextual and non-contextual optimization problems.
Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design
Analyzes GP-UCB, an intuitive upper-confidence-based algorithm, and bounds its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design and obtaining explicit sublinear regret bounds for many commonly used covariance functions.