A Review of Domain Adaptation without Target Labels

  • Wouter M. Kouw, M. Loog
  • Published 16 January 2019
  • Computer Science
  • IEEE Transactions on Pattern Analysis and Machine Intelligence
Domain adaptation has become a prominent problem setting in machine learning and related fields. […] Sample-based methods focus on weighting individual observations during training based on their importance to the target domain.
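The sample-based idea can be sketched with a hypothetical domain discriminator: a classifier trained to separate source from target points yields an estimate of the density ratio p_target(x)/p_source(x), which is then used as a per-sample training weight. This is a minimal illustration on synthetic data under a covariate-shift assumption, not code from the review itself:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def importance_weights(X_source, X_target):
    """Estimate w(x) = p_target(x) / p_source(x) with a domain discriminator.

    A classifier separating source (label 0) from target (label 1) gives
    P(target | x); Bayes' rule turns its odds into the density ratio:
        w(x) = P(target | x) / P(source | x) * n_source / n_target
    """
    X = np.vstack([X_source, X_target])
    d = np.concatenate([np.zeros(len(X_source)), np.ones(len(X_target))])
    disc = LogisticRegression().fit(X, d)
    p_t = disc.predict_proba(X_source)[:, 1]
    ratio = p_t / np.clip(1.0 - p_t, 1e-12, None)
    return ratio * len(X_source) / len(X_target)

# Weighted training on source data: most scikit-learn estimators accept
# per-sample weights via the `sample_weight` argument of `fit`.
rng = np.random.default_rng(0)
X_s = rng.normal(0.0, 1.0, size=(200, 2))   # source: centred at 0
y_s = (X_s[:, 0] > 0).astype(int)
X_t = rng.normal(0.5, 1.0, size=(200, 2))   # target: shifted mean
w = importance_weights(X_s, X_t)
clf = LogisticRegression().fit(X_s, y_s, sample_weight=w)
```

Observations that look more "target-like" to the discriminator receive larger weights, so the source-trained classifier is nudged toward the regions the target domain actually occupies.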


Domain Adaptation with Auxiliary Target Domain-Oriented Classifier

A new pseudo-labeling framework called Auxiliary Target Domain-Oriented Classifier (ATDOC) is proposed, which alleviates the classifier bias by introducing an auxiliary classifier for target data only, to improve the quality of pseudo labels.
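The baseline that ATDOC refines is plain confidence-thresholded pseudo-labeling: train on source, keep only the target predictions the model is sure about, retrain on the union. The sketch below shows that baseline on synthetic data; ATDOC's auxiliary target-only classifier, which is its actual contribution, is not reproduced here:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_s, y_s, X_t, threshold=0.9, rounds=3):
    """Confidence-thresholded pseudo-labeling: fit on source, add the
    target points whose predicted class probability exceeds `threshold`
    as pseudo-labeled examples, and refit."""
    clf = LogisticRegression().fit(X_s, y_s)
    for _ in range(rounds):
        proba = clf.predict_proba(X_t)
        keep = proba.max(axis=1) >= threshold
        if not keep.any():
            break
        X_train = np.vstack([X_s, X_t[keep]])
        y_train = np.concatenate([y_s, proba[keep].argmax(axis=1)])
        clf = LogisticRegression().fit(X_train, y_train)
    return clf

rng = np.random.default_rng(1)
X_s = rng.normal(0.0, 1.0, size=(300, 2))
y_s = (X_s[:, 0] > 0).astype(int)
X_t = X_s + np.array([0.3, 0.3])   # small covariate shift
clf = self_train(X_s, y_s, X_t)
```

The classifier bias that ATDOC targets is visible in this loop: pseudo-labels come from the same source-biased model that produced them, which is why an auxiliary target-oriented classifier can improve their quality.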

Coupled Training for Multi-Source Domain Adaptation

  • Ohad AmosyGal Chechik
  • Computer Science
    2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
  • 2022
This work proposes an alternative, soft sharing scheme for multi-source domain adaptation, which consistently outperforms the current MSDA SoTA, and shows analytically and empirically that the decision boundaries of the target model converge to low-density "valleys" of the target distribution.

Learning Transferable Parameters for Unsupervised Domain Adaptation

Transferable Parameter Learning (TransPar) is proposed to reduce the side effect of domain-specific information in the learning process and thus enhance the memorization of domain-invariant information.

Combating Domain Shift with Self-Taught Labeling

Self-Taught Labeling (SeTL) is proposed, a new regularization approach that finds an auxiliary target-specific classifier for unlabeled data and significantly outperforms existing domain alignment techniques on a large variety of domain adaptation benchmarks.

Feed-Forward Source-Free Latent Domain Adaptation via Cross-Attention

This work focuses on the setting of feed-forward source-free domain adaptation, where adaptation requires neither access to the source dataset nor back-propagation. It suggests that human-annotated domain labels may not always be optimal, and raises the possibility of doing better through automated instance selection.

Domain Adaptation with Optimal Transport for Extended Variable Space

It is assumed that common features exist in both domains and that extra (new additional) features are observed in the target domain; hence, the dimensionality of the target domain is higher than that of the source domain. A learning bound in the target domain is derived for the proposed OT-based method.

Teacher-Student Consistency For Multi-Source Domain Adaptation

Evaluations of MUST on three MSDA benchmarks: digits, text sentiment analysis, and visual-object recognition show that MUST outperforms current SoTA, sometimes by a very large margin.

Informative Class-Conditioned Feature Alignment for Unsupervised Domain Adaptation

A novel Informative Class-Conditioned Feature Alignment (IC2FA) approach for UDA equips class-conditioned feature alignment with informative feature disentanglement; the two procedures work cooperatively, which facilitates the adaptation of informative discriminative features.

Multi-source Domain Adaptation via Weighted Joint Distributions Optimal Transport

This paper addresses the problem of domain adaptation on an unlabeled target dataset using knowledge from multiple labelled source datasets from a new perspective, and exploits the diversity of source distributions by tuning their weights to the target task at hand.

Unsupervised Domain Adaptation for Extra Features in the Target Domain Using Optimal Transport

A learning bound in the target domain for the proposed OT-based method is derived, and the adaptation between these source and target domains is formulated as an optimal transport (OT) problem.
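The general OT-for-adaptation recipe behind entries like this one can be sketched in plain NumPy: compute an entropy-regularized coupling between the empirical source and target samples (Sinkhorn iterations), then map each source point into the target domain via its barycentric projection. This is an illustration of the generic recipe, not the paper's extended-feature formulation:

```python
import numpy as np

def sinkhorn(a, b, M, reg=1.0, n_iter=200):
    """Entropy-regularized OT: returns a coupling T with marginals a and b."""
    K = np.exp(-M / reg)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(2)
X_s = rng.normal(0.0, 1.0, size=(50, 2))    # source samples
X_t = rng.normal(2.0, 1.0, size=(60, 2))    # shifted target samples
M = ((X_s[:, None, :] - X_t[None, :, :]) ** 2).sum(-1)  # squared-Euclidean cost
a = np.full(50, 1 / 50)                     # uniform source marginal
b = np.full(60, 1 / 60)                     # uniform target marginal
T = sinkhorn(a, b, M)
# Barycentric mapping: transport each source point onto the target domain.
X_s_mapped = (T / T.sum(axis=1, keepdims=True)) @ X_t
```

A source classifier can then be trained on the mapped points `X_s_mapped`, which now follow (approximately) the target distribution.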



Information-Theoretical Learning of Discriminative Clusters for Unsupervised Domain Adaptation

While the method identifies a feature space where data in the source and the target domains are similarly distributed, it also learns the feature space discriminatively, optimizing an information-theoretic metric as a proxy to the expected misclassification error on the target domain.

Co-Training for Domain Adaptation

An algorithm named CODA (Co-training for Domain Adaptation) bridges the gap between source and target domains by slowly adding to the training set both the target features and the instances on which the current algorithm is most confident.

Regularized Learning for Domain Adaptation under Label Shifts

We propose Regularized Learning under Label Shifts (RLLS), a principled and practical domain-adaptation algorithm to correct for shifts in the label distribution between a source and a target domain.
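The core estimate behind such label-shift correction can be sketched via the source confusion matrix, as in black-box shift estimation: solving C w = μ_t, where μ_t is the distribution of the classifier's predictions on the target, recovers the class-importance weights w[y] = p_target(y)/p_source(y). RLLS additionally regularizes this estimate; the regularization is omitted in this simplified illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def label_shift_weights(clf, X_s, y_s, X_t, n_classes):
    """Estimate w[y] = p_target(y) / p_source(y) from the source joint
    confusion matrix and the prediction distribution on the target."""
    pred_s = clf.predict(X_s)
    C = np.zeros((n_classes, n_classes))
    for p, t in zip(pred_s, y_s):
        C[p, t] += 1
    C /= len(y_s)  # C[i, j] = P(pred = i, true = j) on the source
    mu_t = np.bincount(clf.predict(X_t), minlength=n_classes) / len(X_t)
    # sum_j C[i, j] * w[j] = mu_t[i], so w solves the linear system C w = mu_t.
    w = np.linalg.solve(C, mu_t)
    return np.clip(w, 0, None)

rng = np.random.default_rng(3)
# Source: balanced classes; target: class 1 three times as frequent.
X0 = rng.normal(-1, 1, size=(500, 2)); X1 = rng.normal(+1, 1, size=(500, 2))
X_s = np.vstack([X0, X1]); y_s = np.repeat([0, 1], 500)
X_t = np.vstack([rng.normal(-1, 1, size=(100, 2)),
                 rng.normal(+1, 1, size=(300, 2))])
clf = LogisticRegression().fit(X_s, y_s)
w = label_shift_weights(clf, X_s, y_s, X_t, 2)
# Retrain with per-class weights w[y] to correct for the shift.
clf_shifted = LogisticRegression().fit(X_s, y_s, sample_weight=w[y_s])
```

Note that, unlike the covariate-shift weighting above, these weights depend only on the labels, so a single weight per class suffices.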

Domain adaptation of weighted majority votes via perturbed variation-based self-labeling

Feature-Level Domain Adaptation

The empirical evaluation of FLDA focuses on problems comprising binary and count data in which the transfer can be naturally modeled via a dropout distribution, which allows the classifier to adapt to differences in the marginal probability of features in the source and the target domain.

Domain Adaptation with Coupled Subspaces

This work formalizes the intuition that if target-specific features can be linked to source features, effective learning is possible using only source-labeled data. It gives finite-sample target error bounds and an algorithm that performs at the state of the art on two natural language processing adaptation tasks characterized by novel target features.

Domain Adaptation via Transfer Component Analysis

This work proposes a novel dimensionality reduction framework for reducing the distance between domains in a latent space for domain adaptation, with both unsupervised and semisupervised feature extraction approaches, which can dramatically reduce the distance between domain distributions by projecting data onto the learned transfer components.

Joint cross-domain classification and subspace learning for unsupervised adaptation

A DIRT-T Approach to Unsupervised Domain Adaptation

Two novel and related models are proposed: the Virtual Adversarial Domain Adaptation (VADA) model, which combines domain adversarial training with a penalty term that punishes violation of the cluster assumption, and the Decision-boundary Iterative Refinement Training with a Teacher (DIRT-T) model, which takes the VADA model as initialization and employs natural gradient steps to further minimize the cluster assumption violation.

A survey of multi-source domain adaptation