Corpus ID: 220128075

Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift

Alex J. Chan, Ahmed M. Alaa, Zhaozhi Qian, Mihaela van der Schaar
Modern neural networks have proven to be powerful function approximators, providing state-of-the-art performance in a multitude of applications. However, they fall short in their ability to quantify confidence in their predictions, which is crucial in high-stakes applications involving critical decision-making. Bayesian neural networks (BNNs) aim to solve this problem by placing a prior distribution over the network's parameters, thereby inducing a posterior distribution that encapsulates…


Stratified Learning: a general-purpose statistical method for improved learning under Covariate Shift

It is shown that the effects of covariate shift can be reduced or altogether eliminated by conditioning on propensity scores, leading to balanced covariates and much-improved target prediction.
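The stratification idea can be illustrated with a toy covariate shift. This is a minimal stdlib sketch, not the paper's method: the closed-form logistic `propensity` function stands in for a fitted classifier, and the bin count is an arbitrary choice. Conditioning on propensity-score strata leaves the train and test covariates far more balanced within each stratum than they are overall.

```python
import math
import random

random.seed(0)

# Toy covariate shift: training x ~ N(0, 1), test x ~ N(1, 1).
train_x = [random.gauss(0.0, 1.0) for _ in range(2000)]
test_x = [random.gauss(1.0, 1.0) for _ in range(2000)]

def propensity(x):
    # Probability that x came from the test set. For two unit-variance
    # Gaussians this is logistic in x; a hypothetical closed form is
    # used here in place of a fitted classifier.
    return 1.0 / (1.0 + math.exp(-(x - 0.5)))

def stratify(xs, n_bins=5):
    # Assign each point to a propensity-score bin.
    bins = [[] for _ in range(n_bins)]
    for x in xs:
        b = min(int(propensity(x) * n_bins), n_bins - 1)
        bins[b].append(x)
    return bins

def mean(xs):
    return sum(xs) / len(xs)

train_bins = stratify(train_x)
test_bins = stratify(test_x)

# The overall covariate means differ (the shift) ...
overall_gap = abs(mean(train_x) - mean(test_x))
# ... but within each propensity stratum the covariates are balanced.
stratum_gaps = [abs(mean(tr) - mean(te))
                for tr, te in zip(train_bins, test_bins) if tr and te]
max_stratum_gap = max(stratum_gaps)
print(overall_gap, max_stratum_gap)
```

The overall mean gap is close to 1 (the true shift), while every within-stratum gap is a fraction of that, which is the balance property the summary above refers to.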

Towards Trustworthy Predictions from Deep Neural Networks with Fast Adversarial Calibration

This work introduces a new training strategy combining an entropy-encouraging loss term with an adversarial calibration loss term that results in well-calibrated and technically trustworthy predictions for a wide range of domain drifts, and substantially outperforms existing state-of-the-art approaches.

An Information-theoretical Approach to Semi-supervised Learning under Covariate-shift

This paper proposes an approach for semi-supervised learning algorithms that is capable of addressing the issue of covariate shifts, and recovers some popular methods, including entropy minimization and pseudo-labeling.
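Entropy minimization and pseudo-labeling, the two recovered methods, are both easy to state concretely. A minimal sketch under toy assumptions (the probability rows and the `0.8` confidence threshold are illustrative, not from the paper): the entropy term penalizes uncertain predictions on unlabelled data, while pseudo-labeling keeps only confident argmax labels.

```python
import math

# Hypothetical predictive distributions on three unlabelled points.
unlabelled_probs = [
    [0.97, 0.02, 0.01],  # confident -> receives a pseudo-label
    [0.40, 0.35, 0.25],  # uncertain -> skipped
    [0.05, 0.90, 0.05],  # confident -> receives a pseudo-label
]

def entropy(p):
    # Shannon entropy in nats; entropy minimization penalizes exactly
    # this quantity on unlabelled examples.
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def pseudo_labels(prob_rows, threshold=0.8):
    # Standard pseudo-labeling: keep predictions whose max probability
    # clears the threshold and treat the argmax as a hard label.
    labels = []
    for i, p in enumerate(prob_rows):
        conf = max(p)
        if conf >= threshold:
            labels.append((i, p.index(conf)))
    return labels

labels = pseudo_labels(unlabelled_probs)
print(labels)  # [(0, 0), (2, 1)]
```

The uncertain middle row has much higher entropy than the confident rows, so it is excluded from pseudo-labeling but dominates the entropy-minimization loss.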

Interpretable Policy Learning

A variational Bayesian approach to Direct Policy Learning is derived to appropriately handle uncertainty in decision making, and InterPoLe, an algorithm for Interpretable Policy Learning, is introduced; it uses evolving soft decision trees to generate personalized, interpretable policies that can be easily inspected.


JAWS: Predictive Inference Under Covariate Shift

JAWS outperforms state-of-the-art predictive inference baselines on a variety of biased real-world data sets for both interval-generation and risk-assessment auditing tasks, and JAW-R and JAWA-R are proposed as repurposed versions of the methods for risk assessment.

When and How Mixup Improves Calibration

This paper theoretically proves that Mixup improves calibration in high-dimensional settings by investigating natural statistical models, and finds that the calibration benefit of Mixup increases as model capacity increases.
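The Mixup operation itself is simple to state. A minimal stdlib sketch (the `alpha=0.4` Beta parameter is a common illustrative default, not the paper's experimental setting): draw a mixing weight from a Beta distribution and form convex combinations of both inputs and one-hot labels.

```python
import random

random.seed(0)

def mixup(x1, y1, x2, y2, alpha=0.4):
    # Draw lambda ~ Beta(alpha, alpha), then mix both the inputs and
    # the (one-hot) labels with the same convex combination.
    lam = random.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y, lam

# Mix two one-hot examples from opposite classes.
x, y, lam = mixup([1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0])
print(lam, x, y)
```

Because the labels are mixed too, the training targets become soft, which is the mechanism usually credited for Mixup's calibration benefit.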

Data-SUITE: Data-centric identification of in-distribution incongruous examples

Data-SUITE's performance and coverage guarantees are empirically validated, demonstrating on cross-site medical data, biased data, and data with concept drift that Data-SUITE best identifies ID regions where a downstream model may be reliable (independent of said model).

A Study on the Calibrated Confidence of Text Classification Using a Variational Bayes

This paper proposes a method that uses variational Bayes to reduce the difference between accuracy and likelihood in text classification, and shows, at the 0.05 significance level, that the proposed method calibrates confidence more effectively than prior approaches.

Exploring Covariate and Concept Shift for Detection and Calibration of Out-of-Distribution Data

This work proposes to characterize the spectrum of OOD data using two types of distribution shifts: covariate shift and concept shift, and proposes a geometrically inspired method (Geometric ODIN) to improve OOD detection under both shifts with only in-distribution data.

Hands-On Bayesian Neural Networks—A Tutorial for Deep Learning Users

This tutorial provides deep learning practitioners with an overview of the relevant literature and a complete toolset to design, implement, train, use and evaluate Bayesian neural networks, i.e., stochastic artificial neural networks trained using Bayesian methods.



Can You Trust Your Model's Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift

A large-scale benchmark of existing state-of-the-art methods on classification problems and the effect of dataset shift on accuracy and calibration is presented, finding that traditional post-hoc calibration does indeed fall short, as do several other previous methods.
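The standard calibration measure in such benchmarks is the Expected Calibration Error (ECE). A minimal sketch of how it is computed (the bin count and toy inputs are illustrative): bin predictions by confidence, then average the per-bin gap between accuracy and mean confidence, weighted by bin size.

```python
def ece(confidences, correct, n_bins=5):
    # Expected Calibration Error: weighted average of the
    # |accuracy - confidence| gap across confidence bins.
    bins = [[] for _ in range(n_bins)]
    for conf, hit in zip(confidences, correct):
        b = min(int(conf * n_bins), n_bins - 1)
        bins[b].append((conf, hit))
    total = len(confidences)
    err = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        acc = sum(h for _, h in b) / len(b)
        err += (len(b) / total) * abs(acc - avg_conf)
    return err

# A model 90% confident and 90% accurate is well calibrated ...
calibrated = ece([0.9] * 10, [1] * 9 + [0])
# ... while 90% confident but only 50% accurate is overconfident.
overconfident = ece([0.9] * 10, [1] * 5 + [0] * 5)
print(calibrated, overconfident)
```

Under dataset shift, accuracy typically drops while confidence does not, which is exactly the gap this metric exposes.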

Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles

This work proposes an alternative to Bayesian NNs that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high quality predictive uncertainty estimates.
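The core of the deep-ensembles recipe is just averaging the members' predictive distributions. A minimal sketch with stub models (the three fixed probability vectors stand in for independently trained networks):

```python
# Each "model" is a stub returning class probabilities; in practice these
# would be independently initialized and trained networks.
def model_a(x):
    return [0.7, 0.3]

def model_b(x):
    return [0.6, 0.4]

def model_c(x):
    return [0.8, 0.2]

def ensemble_predict(models, x):
    # Average the members' predictive distributions class by class.
    preds = [m(x) for m in models]
    n_classes = len(preds[0])
    return [sum(p[c] for p in preds) / len(preds) for c in range(n_classes)]

p = ensemble_predict([model_a, model_b, model_c], x=None)
print(p)
```

Disagreement among members spreads the averaged distribution, which is the source of the ensemble's uncertainty estimate.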

Predictive Uncertainty Estimation via Prior Networks

This work proposes a new framework for modeling predictive uncertainty called Prior Networks (PNs) which explicitly models distributional uncertainty by parameterizing a prior distribution over predictive distributions and evaluates PNs on the tasks of identifying out-of-distribution samples and detecting misclassification on the MNIST dataset, where they are found to outperform previous methods.
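The mechanism can be shown with toy numbers: a Prior Network outputs Dirichlet concentration parameters, and the Dirichlet precision (their sum) separates confident in-distribution inputs from OOD ones. The concentration vectors below are illustrative, not outputs of a trained network.

```python
def dirichlet_mean(alphas):
    # Expected categorical distribution under Dirichlet(alphas).
    s = sum(alphas)
    return [a / s for a in alphas]

def precision(alphas):
    # Dirichlet precision: large = sharp (confident), small = flat
    # (distributional uncertainty).
    return sum(alphas)

in_dist_alphas = [30.0, 1.0, 1.0]  # sharp Dirichlet: confident prediction
ood_alphas = [1.0, 1.0, 1.0]       # flat Dirichlet: "don't know" region

print(dirichlet_mean(in_dist_alphas), precision(in_dist_alphas))
print(dirichlet_mean(ood_alphas), precision(ood_alphas))
```

Both cases can have the same expected class probabilities in principle; it is the precision, not the mean, that flags the OOD input.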

Mixture Regression for Covariate Shift

The main advantages of this new formulation over previous models for covariate shift are that the test and training densities are known, the regression and density estimation are combined into a single procedure, and previous methods are reproduced as special cases of this procedure, shedding light on the implicit assumptions the methods are making.

Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning

A new theoretical framework is developed casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes, which mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy.
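In practice, MC dropout keeps dropout active at test time and treats the spread of stochastic forward passes as an uncertainty estimate. A minimal sketch on a toy linear "network" (the weights, drop rate, and sample count are illustrative):

```python
import random

random.seed(0)

weights = [0.5, -0.2, 0.8]  # toy linear model

def forward(x, p_drop=0.5):
    # One stochastic pass: drop each weight independently, with
    # inverted-dropout rescaling so the expectation is preserved.
    out = 0.0
    for w, xi in zip(weights, x):
        if random.random() >= p_drop:  # keep this unit
            out += (w * xi) / (1 - p_drop)
    return out

def mc_predict(x, n_samples=500):
    # MC dropout: average many stochastic passes; the sample variance
    # serves as the model-uncertainty estimate.
    samples = [forward(x) for _ in range(n_samples)]
    mean = sum(samples) / n_samples
    var = sum((s - mean) ** 2 for s in samples) / n_samples
    return mean, var

mean, var = mc_predict([1.0, 1.0, 1.0])
print(mean, var)
```

The mean approaches the deterministic output (1.1 here), while the variance is nonzero purely because of the dropout masks, mirroring the approximate-posterior interpretation above.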

Bayesian inference with posterior regularization and applications to infinite latent SVMs

Regularized Bayesian inference (RegBayes), a novel computational framework that performs posterior inference with a regularization term on the desired post-data posterior distribution under an information theoretical formulation, is presented.

Constructing informative priors using transfer learning

An algorithm is presented for automatically constructing a multivariate Gaussian prior with a full covariance matrix for a given supervised learning task; it relaxes a commonly used but overly simplistic independence assumption and allows parameters to be dependent.

Deep Bayesian Bandits Showdown: An Empirical Comparison of Bayesian Deep Networks for Thompson Sampling

This work benchmarks well-established and recently developed methods for approximate posterior sampling combined with Thompson Sampling over a series of contextual bandit problems and finds that many approaches that have been successful in the supervised learning setting underperformed in the sequential decision-making scenario.

Optimal Bayesian Transfer Learning

A joint Wishart distribution is defined for the precision matrices of the Gaussian feature-label distributions in the source and target domains, acting as a bridge that transfers useful information from the source domain to improve the target posteriors, and thereby classification in the target domain.