# Domain Adaptation with Conditional Transferable Components

@article{Gong2016DomainAW, title={Domain Adaptation with Conditional Transferable Components}, author={Mingming Gong and Kun Zhang and Tongliang Liu and Dacheng Tao and Clark Glymour and Bernhard Sch{\"o}lkopf}, journal={JMLR workshop and conference proceedings}, year={2016}, volume={48}, pages={2839--2848} }

Domain adaptation arises in supervised learning when the training (source-domain) and test (target-domain) data have different distributions. Let X and Y denote the features and target, respectively. Previous work on domain adaptation mainly considers the covariate shift situation, where the distribution of the features P(X) changes across domains while the conditional distribution P(Y∣X) stays the same. To reduce domain discrepancy, recent methods try to find invariant components […]
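The covariate-shift setting described above admits a classical correction: reweight source examples by the density ratio w(x) = p_target(x) / p_source(x). The following is a minimal NumPy sketch on toy 1-D Gaussians with densities assumed known; it is an illustration of the covariate-shift baseline, not the method of this paper, which instead learns transferable components.

```python
import numpy as np

# Toy covariate shift: P(X) differs between source and target, while the
# labelling rule P(Y|X) (here y = 1[x > 0]) is shared across domains.
# Classical correction: reweight source points by w(x) = p_t(x) / p_s(x).
rng = np.random.default_rng(0)

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

x_src = rng.normal(-1.0, 1.0, size=5000)   # source features ~ N(-1, 1)
y_src = (x_src > 0).astype(float)          # shared conditional P(Y|X)

# Density ratio against the target marginal N(+1, 1), assumed known here.
w = gaussian_pdf(x_src, 1.0, 1.0) / gaussian_pdf(x_src, -1.0, 1.0)

plain_src = y_src.mean()                   # biased estimate of target P(Y=1)
est_target = np.average(y_src, weights=w)  # importance-weighted estimate
```

For these Gaussians the weighted estimate recovers the target-domain class prior (about 0.84), while the unweighted source mean stays near 0.16.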

## 285 Citations

### Domain Generalization via Conditional Invariant Representations

- Computer Science, Mathematics
- AAAI
- 2018

This paper proposes to learn a feature representation with domain-invariant class-conditional distributions P(h(X)|Y), and shows that such a conditionally invariant representation can be guaranteed if the class prior P(Y) does not change across training and test domains.
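The class-conditional invariance criterion in this summary can be illustrated on synthetic data: if P(h(X)|Y) matches across domains for every class, per-class statistics of h(X) should agree. A toy NumPy check follows, where h is simple per-domain mean-centering — an illustrative transform chosen for this example, not the representation learned in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_domain(shift, n=500):
    # Two-class data whose class-conditionals P(X|Y) differ across domains
    # only by a common translation `shift`; the class prior P(Y) is shared.
    y = rng.integers(0, 2, n)
    x = rng.normal(0.0, 1.0, (n, 2)) + 3.0 * y[:, None] + shift
    return x, y

xs, ys = sample_domain(shift=0.0)   # source domain
xt, yt = sample_domain(shift=2.0)   # target domain

# Illustrative transform h(x): per-domain mean-centering.
hs = xs - xs.mean(0)
ht = xt - xt.mean(0)

def class_gap(a, ya, b, yb):
    # Largest distance between class-conditional means across domains.
    return max(np.linalg.norm(a[ya == c].mean(0) - b[yb == c].mean(0))
               for c in (0, 1))

raw_gap = class_gap(xs, ys, xt, yt)  # large: P(X|Y) misaligned
inv_gap = class_gap(hs, ys, ht, yt)  # small: P(h(X)|Y) roughly aligned
```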

### Label-Noise Robust Domain Adaptation

- Computer Science
- ICML
- 2020

This paper is the first to comprehensively investigate how label noise could adversely affect existing domain adaptation methods in various scenarios and theoretically prove that there exists a method that can essentially reduce the side-effect of noisy source labels in domain adaptation.

### Domain Adaptation with Invariant Representation Learning: What Transformations to Learn?

- Computer Science
- NeurIPS
- 2021

This work develops an efficient technique in which the optimal map from X to Z also takes domain-specific information as input, in addition to the features X, by using the property of minimal changes of causal mechanisms across domains.

### Mapping conditional distributions for domain adaptation under generalized target shift

- Computer Science
- ICLR
- 2022

A novel and general approach that aligns pretrained representations by learning an optimal transport map, implemented as a neural network, which maps source representations onto target ones and circumvents drawbacks of existing methods.
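A barycentric map derived from an entropic optimal transport plan is one simple way to realize such a source-to-target mapping. Below is a NumPy sketch using plain Sinkhorn iterations on toy Gaussian clouds — a stand-in for the paper's learned neural map, with the regularization value chosen only for numerical convenience.

```python
import numpy as np

def sinkhorn_plan(xs, xt, reg=2.0, n_iter=500):
    # Entropic OT plan between two uniform empirical measures.
    C = ((xs[:, None, :] - xt[None, :, :]) ** 2).sum(-1)   # squared costs
    K = np.exp(-C / reg)
    a = np.full(len(xs), 1.0 / len(xs))
    b = np.full(len(xt), 1.0 / len(xt))
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iter):                                # Sinkhorn scaling
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
xs = rng.normal(0.0, 1.0, (100, 2))   # source representations
xt = rng.normal(3.0, 1.0, (100, 2))   # target representations

P = sinkhorn_plan(xs, xt)
# Barycentric map: each source point goes to the P-weighted mean of targets.
mapped = (P @ xt) / P.sum(axis=1, keepdims=True)
```

The mapped source cloud lands on the target cloud (mean near 3 in each coordinate), while the plan's entries sum to one as a joint distribution should.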

### Generalization Bounds for Domain Adaptation via Domain Transformations

- Computer Science
- 2018 IEEE 28th International Workshop on Machine Learning for Signal Processing (MLSP)
- 2018

It is shown that, under some conditions on the loss regularity, if the domain transformations reduce the distribution distance at a sufficiently high rate, then the expected target loss can be bounded with probability improving at an exponential rate with the number of labeled samples.

### Domain Adaptation by Joint Distribution Invariant Projections

- Computer Science
- IEEE Transactions on Image Processing
- 2020

The proposed approach exploits linear projections to directly match the source and target joint distributions under the $L^{2}$-distance, without the need to estimate the two joint distributions, leading to a quadratic problem with an analytic solution.

### Domain Generalization under Conditional and Label Shifts via Variational Bayesian Inference

- Computer Science
- IJCAI
- 2021

This work proposes a novel variational Bayesian inference framework to enforce the conditional distribution alignment w.r.t. p(x|y) via the prior distribution matching in a latent space, which also takes the marginal label shift p(y) into consideration with the posterior alignment.

### On Learning Invariant Representation for Domain Adaptation

- Computer Science
- ArXiv
- 2019

This paper constructs a simple counterexample showing that, contrary to common belief, the above conditions are not sufficient to guarantee successful domain adaptation, and proposes a natural and interpretable generalization upper bound that explicitly takes into account the aforementioned shift.

### Transfer Learning with Label Noise

- Computer Science
- 2017

This paper proposes a novel Denoising Conditional Invariant Component (DCIC) framework, which provably ensures extracting invariant representations given examples with noisy labels in source domain and unlabeled examples in target domain with no bias.

### Partially-Shared Variational Auto-encoders for Unsupervised Domain Adaptation with Target Shift

- Computer Science
- ECCV
- 2020

The proposed method, partially shared variational autoencoders (PS-VAEs), uses pair-wise feature alignment instead of feature distribution matching to overcome the target shift problem in UDA.

## References

Showing 1-10 of 37 references

### Domain Adaptation under Target and Conditional Shift

- Computer Science
- ICML
- 2013

This work considers domain adaptation under three possible scenarios of target and conditional shift, uses kernel embeddings of both conditional and marginal distributions, and estimates weights or transformations that reweight or transform the training data so as to reproduce the covariate distribution on the test domain.

### Unsupervised Domain Adaptation by Domain Invariant Projection

- Computer Science
- 2013 IEEE International Conference on Computer Vision
- 2013

This paper learns a projection of the data to a low-dimensional latent space where the distance between the empirical distributions of the source and target examples is minimized and demonstrates the effectiveness of the approach on the task of visual object recognition.

### Multi-Source Domain Adaptation: A Causal View

- Computer Science
- AAAI
- 2015

This paper uses causal models to represent the relationship between the features X and class label Y, and considers possible situations where different modules of the causal model change with the domain, to give an intuitive interpretation of the assumptions underlying certain previous methods.

### Connecting the Dots with Landmarks: Discriminatively Learning Domain-Invariant Features for Unsupervised Domain Adaptation

- Computer Science
- ICML
- 2013

This paper automatically discovers the existence of landmarks and uses them to bridge the source to the target by constructing provably easier auxiliary domain adaptation tasks, and shows how this composition can be optimized discriminatively without requiring labels from the target domain.

### A Literature Survey on Domain Adaptation of Statistical Classifiers

- Computer Science
- 2007

This literature survey reviews existing work in both the machine learning and the natural language processing communities related to domain adaptation and shows the limitations of current work and points out promising directions that should be explored.

### Domain adaptation for object recognition: An unsupervised approach

- Computer Science
- 2011 International Conference on Computer Vision
- 2011

This paper presents one of the first studies on unsupervised domain adaptation in the context of object recognition, where data are labeled only in the source domain (and there are therefore no correspondences between object categories across domains).

### Learning Transferable Features with Deep Adaptation Networks

- Computer Science
- ICML
- 2015

A new Deep Adaptation Network (DAN) architecture is proposed, which generalizes deep convolutional neural networks to the domain adaptation scenario, can learn transferable features with statistical guarantees, and scales linearly via an unbiased estimate of the kernel embedding.
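The kernel-embedding matching behind DAN reduces to comparing mean embeddings of the two domains via maximum mean discrepancy (MMD). A minimal single-bandwidth RBF-MMD estimator is sketched below — the biased V-statistic with one fixed kernel, not DAN's multi-kernel unbiased variant.

```python
import numpy as np

def mmd2_rbf(x, y, sigma=1.0):
    # Biased (V-statistic) estimate of squared MMD with a single RBF kernel.
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

rng = np.random.default_rng(0)
# Same distribution -> MMD^2 near zero; shifted target -> clearly positive.
same = mmd2_rbf(rng.normal(0, 1, (200, 2)), rng.normal(0, 1, (200, 2)))
shifted = mmd2_rbf(rng.normal(0, 1, (200, 2)), rng.normal(2, 1, (200, 2)))
```

Minimizing such a statistic over the features of adaptation layers is what aligns the source and target representations.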

### Transfer Joint Matching for Unsupervised Domain Adaptation

- Computer Science
- 2014 IEEE Conference on Computer Vision and Pattern Recognition
- 2014

This paper aims to reduce the domain difference by jointly matching the features and reweighting the instances across domains in a principled dimensionality reduction procedure, constructing a new feature representation that is invariant to both the distribution difference and the irrelevant instances.

### Causal Transfer in Machine Learning

- Computer Science
- 2015

It is proved that, in an adversarial setting, using this subset of predictor variables for prediction is optimal if no examples from the test task are observed; a practical method that automatically infers this subset is introduced, together with corresponding code.

### A Survey on Transfer Learning

- Computer Science
- IEEE Transactions on Knowledge and Data Engineering
- 2010

The relationship between transfer learning and other related machine learning techniques, such as domain adaptation, multitask learning, sample selection bias, and covariate shift, is discussed.