Corpus ID: 174803437

Can You Trust Your Model's Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift

@article{Ovadia2019CanYT,
  title={Can You Trust Your Model's Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift},
  author={Y. Ovadia and E. Fertig and J. Ren and Zachary Nado and D. Sculley and Sebastian Nowozin and Joshua V. Dillon and Balaji Lakshminarayanan and Jasper Snoek},
  journal={ArXiv},
  year={2019},
  volume={abs/1906.02530}
}
  • Y. Ovadia, E. Fertig, J. Ren, Z. Nado, D. Sculley, S. Nowozin, J. V. Dillon, B. Lakshminarayanan, J. Snoek
  • Published 2019
  • Computer Science, Mathematics
  • ArXiv
  • Modern machine learning methods including deep learning have achieved great success in predictive accuracy for supervised learning tasks, but may still fall short in giving useful estimates of their predictive uncertainty. Quantifying uncertainty is especially critical in real-world settings, which often involve input distributions that are shifted from the training distribution due to a variety of factors including sample bias and non-stationarity. [...] Key result: We find that traditional post-hoc… (a minimal sketch of the calibration metric follows below)
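A minimal sketch of the expected calibration error (ECE), the standard metric for the kind of calibration evaluation the abstract describes; the function name, NumPy-array interface, and choice of 10 equal-width bins are illustrative assumptions, not the paper's exact implementation:

import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """ECE: confidence-weighted gap between accuracy and confidence.

    probs:  (N, K) array of predicted class probabilities.
    labels: (N,) array of integer class labels.
    """
    confidences = probs.max(axis=1)            # top predicted probability
    accuracies = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(accuracies[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap         # weight by fraction of examples
    return ece

Computed once on an in-distribution test set and again on progressively shifted inputs, a growing ECE means confidence is degrading faster than accuracy, which is the failure mode under dataset shift that the paper probes.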
    194 Citations


    The Discriminative Jackknife: Quantifying Predictive Uncertainty via Higher-Order Influence Functions
    • 2019
    • 1
    • Highly Influenced
    Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift
    Discriminative Jackknife: Quantifying Uncertainty in Deep Learning via Higher-Order Influence Functions
    • 3
    • Highly Influenced
    • PDF
    Why have a Unified Predictive Uncertainty? Disentangling it using Deep Split Ensembles
    • 1
    • PDF
    Exploring the Uncertainty Properties of Neural Networks' Implicit Priors in the Infinite-Width Limit
    • 1
    • PDF
    Probabilistic Neighbourhood Component Analysis: Sample Efficient Uncertainty Estimation in Deep Learning
    • 1
    • Highly Influenced
    • PDF
    Field-aware Calibration: A Simple and Empirically Strong Method for Reliable Probabilistic Predictions
    Empirical Frequentist Coverage of Deep Learning Uncertainty Quantification Procedures
    • 2020

    References

    Showing 1-10 of 62 references
    Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles (the ensembling idea is sketched after this list)
    • 967
    • PDF
    Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
    • 2,404
    • Highly Influential
    • PDF
    What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?
    • 1,140
    • PDF
    On Calibration of Modern Neural Networks
    • 946
    • PDF
    Does Your Model Know the Digit 6 Is Not a Cat? A Less Biased Evaluation of "Outlier" Detectors
    • 32
    • PDF
    Likelihood Ratios for Out-of-Distribution Detection
    • 77
    • PDF
    Dataset Shift in Machine Learning
    • 794
    • PDF
    Deep Bayesian Bandits Showdown: An Empirical Comparison of Bayesian Deep Networks for Thompson Sampling
    • 103
    • PDF
    Obtaining Well Calibrated Probabilities Using Bayesian Binning
    • 213
    • PDF