Corpus ID: 245502819

Distributionally Robust Learning for Uncertainty Calibration under Domain Shift

@inproceedings{Wang2020DistributionallyRL,
  title={Distributionally Robust Learning for Uncertainty Calibration under Domain Shift},
  author={Haoxu Wang and Anqi Liu and Zhiding Yu and Junchi Yan and Yisong Yue and Anima Anandkumar},
  year={2020}
}
We propose a framework for learning calibrated uncertainties under domain shifts. We consider the case where the source (training) distribution differs from the target (test) distribution. We detect such domain shifts through the use of a binary domain classifier, which we integrate with the task network and train jointly end-to-end. The binary domain classifier yields a density ratio that reflects the closeness of a target (test) sample to the source (training) distribution. We employ it to… 
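The density ratio mentioned in the abstract follows from a standard identity: if a binary domain classifier trained on balanced source/target batches outputs d(x) = P(source | x), then p_src(x)/p_tgt(x) = d(x)/(1 - d(x)). Below is a minimal numpy sketch of that conversion only; the classifier itself and the joint end-to-end training are elided, and domain_prob is a hypothetical stand-in for the classifier's output.

    import numpy as np

    def density_ratio(domain_prob, eps=1e-6):
        # domain_prob: P(source | x) from a binary domain classifier
        # trained on balanced source/target batches.
        # Identity under equal priors: p_src(x) / p_tgt(x) = d(x) / (1 - d(x)).
        d = np.clip(domain_prob, eps, 1.0 - eps)
        return d / (1.0 - d)

    # A target sample the classifier places near the source gets a ratio
    # near (or above) 1; a far-away sample gets a small ratio.
    print(density_ratio(np.array([0.5, 0.9, 0.1])))  # [1.0, 9.0, 0.111...]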

References

Showing 1–10 of 86 references
Regularized Learning for Domain Adaptation under Label Shifts
We propose Regularized Learning under Label shifts (RLLS), a principled and practical domain-adaptation algorithm to correct for shifts in the label distribution between a source and a target domain.
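Label-shift correction of this kind typically builds on estimating per-class importance weights from a source confusion matrix and the target prediction marginal (the BBSE-style estimator that RLLS regularizes). A rough numpy sketch under that assumption, with a simple ridge term standing in loosely for RLLS's principled regularizer:

    import numpy as np

    def label_shift_weights(conf_matrix, target_pred_marginal, reg=1e-3):
        # conf_matrix[i, j] ~ P(predict i, true label j) on held-out source data.
        # target_pred_marginal[i] ~ P(predict i) on unlabeled target data.
        # Under label shift, solving C w = mu gives w[j] ~ p_tgt(y=j) / p_src(y=j).
        k = conf_matrix.shape[0]
        w = np.linalg.solve(conf_matrix + reg * np.eye(k), target_pred_marginal)
        return np.clip(w, 0.0, None)  # importance weights must be nonnegative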
Transferable Calibration with Lower Bias and Variance in Domain Adaptation
TLDR
This work reveals the dilemma that DA models gain accuracy at the expense of well-calibrated probabilities, and proposes Transferable Calibration (TransCal) to tackle it, achieving accurate calibration with lower bias and variance in a unified, hyperparameter-free optimization framework.
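The calibration primitive such methods adapt is temperature scaling; TransCal's contribution is correcting its bias and variance under shift. A minimal numpy sketch of plain temperature scaling with optional importance weights (the weights and the grid search are illustrative simplifications, not TransCal's actual estimator):

    import numpy as np

    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    def fit_temperature(logits, labels, weights=None,
                        grid=np.linspace(0.5, 5.0, 91)):
        # Pick the temperature T minimizing (optionally importance-weighted)
        # NLL on source validation data; dividing logits by T > 1 softens
        # overconfident predictions without changing the argmax.
        weights = np.ones(len(labels)) if weights is None else weights
        best_T, best_nll = 1.0, np.inf
        for T in grid:
            p = softmax(logits / T)[np.arange(len(labels)), labels]
            nll = -(weights * np.log(p + 1e-12)).mean()
            if nll < best_nll:
                best_T, best_nll = T, nll
        return best_T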
Unsupervised Domain Adaptation via Calibrating Uncertainties
TLDR
This work proposes a general Rényi entropy regularization framework and employs variational Bayes learning for reliable uncertainty estimation; calibrating the sample variance of network parameters serves as a plug-in regularizer for training.
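The Rényi entropy of a predictive distribution, which a regularizer like this would penalize, has a simple closed form; a minimal sketch of the order-alpha entropy only (the variational Bayes machinery and parameter-variance calibration are omitted):

    import numpy as np

    def renyi_entropy(probs, alpha=2.0, eps=1e-12):
        # H_alpha(p) = log(sum_i p_i^alpha) / (1 - alpha); recovers Shannon
        # entropy in the limit alpha -> 1. Penalizing it discourages diffuse,
        # poorly calibrated predictive distributions.
        p = np.clip(probs, eps, 1.0)
        return np.log((p ** alpha).sum(axis=-1)) / (1.0 - alpha)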
A DIRT-T Approach to Unsupervised Domain Adaptation
TLDR
Two novel and related models are proposed: the Virtual Adversarial Domain Adaptation (VADA) model, which combines domain adversarial training with a penalty term that punishes violations of the cluster assumption, and the Decision-boundary Iterative Refinement Training with a Teacher (DIRT-T) model, which takes the VADA model as initialization and employs natural gradient steps to further minimize cluster assumption violation.
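The cluster assumption violation both models penalize is commonly measured as the conditional entropy of the classifier's target predictions (VADA additionally adds a virtual adversarial term, omitted here); a minimal sketch of the entropy penalty:

    import numpy as np

    def conditional_entropy(probs, eps=1e-12):
        # Mean Shannon entropy of per-sample target predictions. Low values
        # mean decision boundaries avoid dense data regions, i.e. the
        # cluster assumption holds.
        p = np.clip(probs, eps, 1.0)
        return -(p * np.log(p)).sum(axis=-1).mean()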
Calibrated Prediction with Covariate Shift via Unsupervised Domain Adaptation
TLDR
This work proposes an algorithm for calibrating predictions that accounts for the possibility of covariate shift, given labeled examples from the training distribution and unlabeled examples from the real-world distribution.
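Accounting for covariate shift in a calibration metric amounts to reweighting each source sample by the density ratio w(x) ~ p_target(x)/p_source(x); a rough sketch of an importance-weighted expected calibration error under that reading (the paper's actual estimator may differ):

    import numpy as np

    def weighted_ece(confidences, correct, weights, n_bins=10):
        # Importance-weighted ECE: each labeled source sample carries a
        # weight w(x) ~ p_tgt(x) / p_src(x) so the calibration estimate
        # targets the shifted distribution.
        bins = np.minimum((confidences * n_bins).astype(int), n_bins - 1)
        total = weights.sum()
        ece = 0.0
        for b in range(n_bins):
            m = bins == b
            if not m.any():
                continue
            wb = weights[m]
            acc = (wb * correct[m]).sum() / wb.sum()
            conf = (wb * confidences[m]).sum() / wb.sum()
            ece += wb.sum() / total * abs(acc - conf)
        return ece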
FixBi: Bridging Domain Spaces for Unsupervised Domain Adaptation
TLDR
A fixed ratio-based mixup is introduced to construct multiple intermediate domains between the source and target domains, gradually transferring domain knowledge from the source to the target domain.
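The fixed-ratio mixup itself is a one-liner: source and target inputs are blended at predetermined ratios rather than Beta-sampled ones; a minimal sketch (the ratio value is illustrative, and the label mixing FixBi also performs is omitted):

    import numpy as np

    def fixed_ratio_mixup(x_src, x_tgt, lam=0.7):
        # Unlike standard mixup, lam is a fixed constant rather than a
        # Beta-distributed draw; a source-dominant ratio (e.g. 0.7) and a
        # target-dominant one (e.g. 0.3) yield two intermediate domains
        # bridging source and target.
        return lam * x_src + (1.0 - lam) * x_tgt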
Model Uncertainty for Unsupervised Domain Adaptation
  • Joonho Lee, Gyemin Lee
  • 2020 IEEE International Conference on Image Processing (ICIP), 2020
TLDR
This paper presents a novel method that learns feature representations minimizing the domain divergence using model uncertainty; it employs a Bayesian approach and provides an efficient way of evaluating the model uncertainty loss using Monte Carlo dropout sampling.
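Monte Carlo dropout, the uncertainty estimator used here, keeps dropout active at inference and averages several stochastic forward passes; a minimal PyTorch sketch, where model is any network containing torch.nn.Dropout layers:

    import torch

    def mc_dropout_predict(model, x, n_samples=20):
        # Put only the dropout layers in train mode so each forward pass
        # samples a fresh dropout mask, while other layers (e.g. batch
        # norm) stay in eval mode.
        model.eval()
        for m in model.modules():
            if isinstance(m, torch.nn.Dropout):
                m.train()
        with torch.no_grad():
            probs = torch.stack(
                [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
            )
        mean = probs.mean(dim=0)  # averaged predictive distribution
        entropy = -(mean * mean.clamp_min(1e-12).log()).sum(dim=-1)
        return mean, entropy      # per-sample predictive uncertainty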
Adversarial Dropout Regularization
TLDR
This work proposes a novel approach, Adversarial Dropout Regularization (ADR), to encourage the generator to output more discriminative features for the target domain, using dropout on the classifier network.
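ADR's adversarial signal is the disagreement between two dropout-perturbed instances of the same classifier; a sketch of a symmetric-KL discrepancy such a critic could use (the min-max wiring between generator and classifier is omitted):

    import torch

    def dropout_discrepancy(p1, p2, eps=1e-12):
        # p1, p2: softmax outputs of the same classifier under two different
        # dropout masks. The generator is trained to minimize this on target
        # data, pushing features away from the decision boundary; the
        # classifier is trained to maximize it (adversarial dropout).
        p1 = p1.clamp_min(eps)
        p2 = p2.clamp_min(eps)
        kl12 = (p1 * (p1.log() - p2.log())).sum(dim=-1)
        kl21 = (p2 * (p2.log() - p1.log())).sum(dim=-1)
        return 0.5 * (kl12 + kl21).mean()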
Confidence Regularized Self-Training
TLDR
A confidence regularized self-training (CRST) framework, formulated as regularized self-training, treats pseudo-labels as continuous latent variables jointly optimized via alternating optimization; two types of confidence regularization are proposed: label regularization (LR) and model regularization (MR).
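The label-regularization (LR) variant smooths hard pseudo-labels so self-training does not over-commit to its own predictions; a rough sketch of one smoothed pseudo-labeling step (the threshold and smoothing constant are illustrative, and the alternating optimization is omitted):

    import numpy as np

    def smoothed_pseudo_labels(probs, alpha=0.1, threshold=0.9):
        # Keep only confident target samples, then soften their one-hot
        # pseudo-labels toward uniform (label regularization); CRST's MR
        # variant instead regularizes the model output directly.
        n, k = probs.shape
        keep = probs.max(axis=1) >= threshold
        onehot = np.eye(k)[probs.argmax(axis=1)]
        soft = (1.0 - alpha) * onehot + alpha / k
        return soft[keep], keep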
Attending to Discriminative Certainty for Domain Adaptation
TLDR
This paper observes that simply incorporating the probabilistic certainty of the discriminator while training the classifier yields state-of-the-art results on various datasets compared against recent methods.
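One way to read "probabilistic certainty of the discriminator" is as an entropy-derived per-sample weight on the classification loss; a minimal sketch under that assumption (the paper's attention mechanism is more involved):

    import numpy as np

    def certainty_weights(disc_prob, eps=1e-12):
        # disc_prob: domain discriminator output P(source | x) per sample.
        # Binary entropy is high where the discriminator is unsure; certainty
        # = 1 - normalized entropy can weight the classification loss so
        # confidently transferable samples dominate training.
        d = np.clip(disc_prob, eps, 1.0 - eps)
        h = -(d * np.log(d) + (1 - d) * np.log(1 - d)) / np.log(2.0)
        return 1.0 - h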
...