Corpus ID: 245502819

Distributionally Robust Learning for Uncertainty Calibration under Domain Shift

Haoxu Wang, Anqi Liu, Zhiding Yu, Junchi Yan, Yisong Yue, Anima Anandkumar
We propose a framework for learning calibrated uncertainties under domain shift, where the source (training) distribution differs from the target (test) distribution. We detect such domain shifts through the use of a binary domain classifier, which we integrate with the task network and train jointly end-to-end. The binary domain classifier yields a density ratio that reflects the closeness of a target (test) sample to the source (training) distribution. We employ it to…
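The core quantity in the abstract above is a density ratio recovered from a binary domain classifier. As a minimal sketch (the function name and the balanced-sampling assumption are ours, not the paper's), a classifier trained to separate source from target samples can be converted into a density ratio via Bayes' rule:

```python
import math

def density_ratio(domain_logit):
    """Turn a domain classifier's logit into a density ratio.

    Hypothetical setup: the classifier outputs
    P(source | x) = sigmoid(domain_logit), and source/target
    batches are sampled in equal proportion, so Bayes' rule gives
        p_source(x) / p_target(x) = P(source|x) / P(target|x).
    """
    p_source = 1.0 / (1.0 + math.exp(-domain_logit))
    p_target = 1.0 - p_source
    return p_source / p_target

# A target sample the classifier places firmly in the source
# domain gets a ratio > 1; an out-of-distribution sample gets < 1.
```

Under this reading, the ratio measures exactly the "closeness to the source distribution" that the abstract describes, and can down-weight uncertain, far-from-source samples during training.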


Regularized Learning for Domain Adaptation under Label Shifts
We propose Regularized Learning under Label Shifts (RLLS), a principled and practical domain-adaptation algorithm to correct for shifts in the label distribution between a source and a target domain.
Unsupervised Domain Adaptation via Calibrating Uncertainties
This work proposes a general Rényi entropy regularization framework and employs variational Bayes learning for reliable uncertainty estimation; calibrating the sample variance of the network parameters serves as a plug-in regularizer for training.
A DIRT-T Approach to Unsupervised Domain Adaptation
Two novel and related models are proposed: the Virtual Adversarial Domain Adaptation (VADA) model, which combines domain adversarial training with a penalty term that punishes violations of the cluster assumption, and the Decision-boundary Iterative Refinement Training with a Teacher (DIRT-T) model, which takes the VADA model as initialization and employs natural gradient steps to further minimize cluster assumption violation.
Model Uncertainty for Unsupervised Domain Adaptation
  • Joonho Lee, Gyemin Lee
  • Computer Science
    2020 IEEE International Conference on Image Processing (ICIP)
  • 2020
This paper presents a novel method that learns feature representations minimizing the domain divergence using model uncertainty; it employs a Bayesian approach and provides an efficient way of evaluating a model uncertainty loss using Monte Carlo dropout sampling.
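The Monte Carlo dropout sampling mentioned above keeps dropout active at test time and averages several stochastic forward passes; the spread of those passes serves as an uncertainty estimate. A minimal pure-Python sketch on a hypothetical linear model (the function and parameters are ours, not the entry's network):

```python
import random
import statistics

def mc_dropout_predict(weights, x, p_drop=0.5, n_samples=100, seed=0):
    """Monte Carlo dropout sketch on a linear model y = w . x.

    Dropout stays ON at prediction time: each pass randomly drops
    weights (scaling survivors by 1/(1 - p_drop) to keep the mean
    unbiased), and the variance across passes is the uncertainty.
    """
    rng = random.Random(seed)
    preds = []
    for _ in range(n_samples):
        y = sum(w * xi / (1.0 - p_drop)
                for w, xi in zip(weights, x)
                if rng.random() >= p_drop)
        preds.append(y)
    return statistics.fmean(preds), statistics.pvariance(preds)
```

The mean recovers (approximately) the deterministic prediction, while the variance grows for inputs whose prediction depends heavily on the dropped units.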
Adversarial Dropout Regularization
This work proposes a novel approach, Adversarial Dropout Regularization (ADR), to encourage the generator to output more discriminative features for the target domain, using dropout on the classifier network.
Confidence Regularized Self-Training
A confidence regularized self-training (CRST) framework, formulated as regularized self-training, that treats pseudo-labels as continuous latent variables jointly optimized via alternating optimization, and proposes two types of confidence regularization: label regularization (LR) and model regularization (MR).
Attending to Discriminative Certainty for Domain Adaptation
This paper observes that just by incorporating the probabilistic certainty of the discriminator while training the classifier, the method is able to obtain state-of-the-art results on various datasets compared against recent methods.
Maximum Classifier Discrepancy for Unsupervised Domain Adaptation
A new approach that attempts to align source and target distributions by utilizing task-specific decision boundaries: it maximizes the discrepancy between two classifiers' outputs to detect target samples that are far from the support of the source.
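The discrepancy in the entry above is a distance between the class-probability outputs of two classifiers on the same input. As a minimal sketch (an L1 discrepancy in the spirit of this line of work; the function name is ours):

```python
def classifier_discrepancy(p1, p2):
    """Mean absolute (L1) difference between two classifiers'
    class-probability vectors for the same input.

    Near-zero when both classifiers agree (sample well inside the
    source support); large when they disagree, flagging target
    samples far from the source distribution.
    """
    assert len(p1) == len(p2), "probability vectors must match"
    return sum(abs(a - b) for a, b in zip(p1, p2)) / len(p1)
```

In the adversarial scheme the entry describes, the two classifiers are trained to maximize this quantity on target data while the feature extractor is trained to minimize it.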
Robustness to Adversarial Perturbations in Learning from Incomplete Data
A generalization theory is developed for Semi-Supervised Learning and Distributionally Robust Learning based on a number of novel complexity measures, such as an adversarial extension of Rademacher complexity and its semi-supervised analogue.
Generate to Adapt: Aligning Domains Using Generative Adversarial Networks
This work proposes an approach that leverages unsupervised data to bring the source and target distributions closer in a learned joint feature space by inducing a symbiotic relationship between the learned embedding and a generative adversarial network.