Domain Generalization under Conditional and Label Shifts via Variational Bayesian Inference

  • Xiaofeng Liu, Bo Hu, Linghao Jin, Xu Han, Fangxu Xing, Jinsong Ouyang, Jun Lu, Georges El Fakhri, Jonghye Woo
  • Computer Science
  • Published in IJCAI, 22 July 2021
In this work, we propose a domain generalization (DG) approach to learn on several labeled source domains and transfer knowledge to a target domain that is inaccessible in training. Considering the inherent conditional and label shifts, we would expect the alignment of p(x|y) and p(y). However, the widely used domain-invariant feature learning (IFL) methods rely on aligning the marginal concept shift w.r.t. p(x), which rests on the unrealistic assumption that p(y) is invariant across domains…
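The distinction the abstract draws can be made concrete with a toy simulation. In the sketch below (an illustration, not the paper's code; the domain parameters are made up), both domains share the same class-conditionals p(x|y), but the class prior p(y) differs. The marginal p(x) then differs too, so a method that aligns p(x) would shift features even though the class-conditionals already agree:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_domain(p_y, n=100_000):
    """Draw (x, y) with shared class-conditionals p(x|y) but a
    domain-specific label prior p(y): class 0 ~ N(-1, 1) and
    class 1 ~ N(+1, 1) in every domain."""
    y = rng.binomial(1, p_y, size=n)
    x = rng.normal(loc=2.0 * y - 1.0, scale=1.0)
    return x, y

# Source domain: balanced classes; target domain: label shift, p(y=1) = 0.9.
xs, ys = sample_domain(0.5)
xt, yt = sample_domain(0.9)

# p(x|y) is identical across domains, yet the marginals p(x) differ:
print(xs.mean(), xt.mean())  # ≈ 0.0 vs ≈ 0.8

# Matching the marginal means would therefore distort features even though
# the class-conditionals already agree -- the mismatch is entirely in p(y).
```

This is exactly the failure mode the abstract attributes to marginal IFL: the gap between the domains here is purely a label-shift effect.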

Figures and Tables from this paper

Adversarial Unsupervised Domain Adaptation with Conditional and Label Shift: Infer, Align and Iterate
  • Xiaofeng Liu, Zhenhua Guo, +5 authors Jonghye Woo
  • Computer Science
  • ArXiv
  • 2021
In this work, we propose an adversarial unsupervised domain adaptation (UDA) approach with the inherent conditional and label shifts, in which we aim to align the distributions w.r.t. both p(x|y) and…
Generative Self-training for Cross-domain Unsupervised Tagged-to-Cine MRI Synthesis
This work proposes a novel generative self-training (GST) UDA framework with continuous value prediction and regression objective for cross-domain image synthesis, and proposes to filter the pseudo-label with an uncertainty mask, and quantify the predictive confidence of generated images with practical variational Bayes learning.
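The uncertainty-mask idea in that entry can be sketched generically: run several stochastic forward passes (e.g. Monte-Carlo dropout), keep a pseudo-label only where the passes agree. The function name and threshold below are hypothetical, assumed for illustration only:

```python
import numpy as np

def uncertainty_mask(mc_preds, threshold):
    """mc_preds: (T, N) array of T stochastic predictions for N targets.
    Returns the mean prediction and a boolean mask that keeps only
    low-variance (i.e. confident) pseudo-labels."""
    mean = mc_preds.mean(axis=0)
    var = mc_preds.var(axis=0)
    return mean, var < threshold

# Toy example: 5 stochastic passes over 4 continuous targets; the passes
# disagree wildly on the last target, so it is filtered out.
preds = np.array([
    [0.9, 0.2, 0.5, 0.1],
    [0.9, 0.2, 0.5, 0.9],
    [1.0, 0.3, 0.5, 0.2],
    [0.9, 0.2, 0.4, 0.8],
    [1.0, 0.2, 0.5, 0.3],
])
pseudo, keep = uncertainty_mask(preds, threshold=0.01)
print(keep)  # only the low-variance targets survive
```

Self-training would then fit the regression objective only on the `keep`-masked pseudo-labels.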
Adapting Off-the-Shelf Source Segmenter for Target Medical Image Segmentation
This work targets source free UDA for segmentation, and proposes to adapt an "off-the-shelf" segmentation model pre-trained in the source domain to the target domain, with an adaptive batch-wise normalization statistics adaptation framework.
Recursively Conditional Gaussian for Ordinal Unsupervised Domain Adaptation
A recursively conditional Gaussian set is adapted for ordered constraint modeling, which admits a tractable joint distribution prior on the latent space and is able to control the density of content vectors that violate the poset constraints by a simple "three-sigma rule".
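For reference, the "three-sigma rule" invoked in that entry is the classical Gaussian tail bound; a minimal sketch (the function name is mine, not the paper's):

```python
def within_three_sigma(x, mean, sigma):
    """Classic three-sigma rule: roughly 99.7% of a Gaussian's mass lies
    within [mean - 3*sigma, mean + 3*sigma]; samples outside that band
    are flagged as violating the assumed distribution."""
    return abs(x - mean) <= 3 * sigma

print(within_three_sigma(1.2, 0.0, 1.0))   # True
print(within_three_sigma(4.0, 0.0, 1.0))   # False
```

In the cited paper's setting, the rule bounds how much density the latent prior assigns to content vectors that break the ordering constraints.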


Deep Domain Generalization via Conditional Invariant Adversarial Networks
This work proposes an end-to-end conditional invariant deep domain generalization approach by leveraging deep neural networks for domain-invariant representation learning and proves the effectiveness of the proposed method.
Domain Adaptation under Target and Conditional Shift
This work considers domain adaptation under three possible scenarios using kernel embeddings of conditional as well as marginal distributions, and proposes to estimate weights or transformations that reweight or transform training data to reproduce the covariate distribution on the test domain.
Domain Adaptation with Conditional Transferable Components
This paper aims to extract conditional transferable components whose conditional distribution is invariant after proper location-scale (LS) transformations, and identifies how P(Y) changes between domains simultaneously.
Learning to Learn with Variational Information Bottleneck for Domain Generalization
A probabilistic meta-learning model for domain generalization is introduced, in which classifier parameters shared across domains are modeled as distributions, which enables better handling of prediction uncertainty on unseen domains.
Generalizing Across Domains via Cross-Gradient Training
Empirical evaluation on three different applications establishes that (1) domain-guided perturbation provides consistently better generalization to unseen domains, compared to generic instance perturbation methods, and that (2) data augmentation is a more stable and accurate method than domain adversarial training.
Learning from Extrinsic and Intrinsic Supervisions for Domain Generalization
A new domain generalization framework that learns how to generalize across domains simultaneously from extrinsic relationship supervision and intrinsic self-supervision for images from multi-source domains is presented.
Domain Generalization Using a Mixture of Multiple Latent Domains
This paper proposes a method that iteratively divides samples into latent domains via clustering and trains a domain-invariant feature extractor shared among the divided latent domains through adversarial learning; it outperforms conventional domain generalization methods, including those that utilize domain labels.
Domain Generalization with Adversarial Feature Learning
This paper presents a novel framework based on adversarial autoencoders to learn a generalized latent feature representation across domains for domain generalization, and proposes an algorithm to jointly train the different components of the framework.
Unified Deep Supervised Domain Adaptation and Generalization
This work provides a unified framework for addressing the problem of visual supervised domain adaptation and generalization with deep models by reverting to point-wise surrogates of distribution distances and similarities by exploiting the Siamese architecture.
Mutual Information Regularized Feature-level Frankenstein for Discriminative Recognition
This paper proposes a novel approach to explicitly enforce the extracted discriminative representation d, extracted latent variation l, and semantic variation label vector s to be independent and complementary to each other, to avoid unstable adversarial training.