• Corpus ID: 237267340

ECACL: A Holistic Framework for Semi-Supervised Domain Adaptation

@inproceedings{Li2021ECACLAH,
  title={ECACL: A Holistic Framework for Semi-Supervised Domain Adaptation},
  author={Kai Li and Chang Liu and Handong Zhao and Yulun Zhang and Yun Raymond Fu},
  year={2021}
}
  • Kai Li, Chang Liu, Handong Zhao, Yulun Zhang, Yun Raymond Fu
  • Published 19 April 2021
  • Computer Science
This paper studies Semi-Supervised Domain Adaptation (SSDA), a practical yet under-investigated research topic that aims to learn a well-performing model using a few labeled samples and many unlabeled samples in the target domain, with the help of labeled samples from a source domain. Several SSDA methods have been proposed recently; however, they fail to fully exploit the value of the few labeled target samples. In this paper, we propose Enhanced Categorical Alignment and Consistency Learning… 
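
The truncated abstract does not show ECACL's actual losses, but the SSDA setup it describes can be pictured as a combined objective: a supervised loss over labeled source and labeled target samples plus an unsupervised term on the unlabeled target data. The sketch below is illustrative only; entropy minimization stands in as a placeholder for ECACL's alignment and consistency terms, and all names are hypothetical.

```python
# Minimal PyTorch sketch of a generic SSDA objective. The unsupervised term
# (entropy minimization) is a placeholder, not ECACL's actual losses.
import torch
import torch.nn.functional as F

def ssda_loss(model, x_src, y_src, x_tgt_l, y_tgt_l, x_tgt_u, unsup_weight=1.0):
    # Supervised loss on labeled source samples and the few labeled target samples.
    sup = F.cross_entropy(model(x_src), y_src) + F.cross_entropy(model(x_tgt_l), y_tgt_l)
    # Placeholder unsupervised term on unlabeled target samples.
    p = F.softmax(model(x_tgt_u), dim=1)
    entropy = -(p * p.clamp_min(1e-8).log()).sum(dim=1).mean()
    return sup + unsup_weight * entropy
```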

Citations

Probability Contrastive Learning for Domain Adaptation
TLDR
A novel probability contrastive learning (PCL) is proposed, which not only produces compact features but also enforces them to be distributed around the class weights learned from source data.
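
A minimal sketch of the idea, assuming an InfoNCE-style loss computed on softmax probabilities of two augmented views rather than on raw features; this illustrates probability-space contrastive learning in general, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def probability_contrastive_loss(logits_a, logits_b, temperature=0.1):
    # Contrast softmax probabilities (not raw features) of two views of the
    # same batch; matching rows are the positive pairs.
    p_a = F.normalize(F.softmax(logits_a, dim=1), dim=1)
    p_b = F.normalize(F.softmax(logits_b, dim=1), dim=1)
    sim = p_a @ p_b.t() / temperature                 # (B, B) similarities
    targets = torch.arange(p_a.size(0), device=p_a.device)
    return F.cross_entropy(sim, targets)
```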

References

Showing 1–10 of 55 references
Attract, Perturb, and Explore: Learning a Feature Alignment Network for Semi-supervised Domain Adaptation
TLDR
An SSDA framework that aims to align features via alleviation of the intra-domain discrepancy within the target domain is proposed and the incompatibility of the conventional adversarial perturbation methods with SSDA is demonstrated.
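
One simple way to picture intra-domain alignment (a hypothetical sketch, not the paper's exact attraction scheme): pull unlabeled target features toward class prototypes built from the few labeled target samples, so the two target subpopulations stop drifting apart.

```python
import torch

def attraction_loss(feat_unlabeled, feat_labeled, y_labeled, num_classes):
    # Class prototypes from the labeled target samples (assumes every class
    # has at least one labeled sample available).
    protos = torch.stack([feat_labeled[y_labeled == c].mean(dim=0)
                          for c in range(num_classes)])
    # Pull each unlabeled feature toward its nearest prototype.
    dists = torch.cdist(feat_unlabeled, protos)       # (N_u, C)
    return dists.min(dim=1).values.mean()
```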
A simple baseline for domain adaptation using rotation prediction
TLDR
This work proposes a simple yet effective method based on self-supervised learning that outperforms or is on par with most state-of-the-art algorithms, e.g. adversarial domain adaptation.
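
A sketch of the rotation pretext task, assuming image batches in NCHW layout and a small auxiliary head (`rot_head`, hypothetical) that classifies which of four rotations was applied to each image.

```python
import torch
import torch.nn.functional as F

def rotation_batch(x):
    # Four copies of the batch, rotated by 0/90/180/270 degrees, with the
    # rotation index serving as a free self-supervised label.
    views = [torch.rot90(x, k, dims=(2, 3)) for k in range(4)]
    labels = torch.arange(4, device=x.device).repeat_interleave(x.size(0))
    return torch.cat(views), labels

def rotation_loss(encoder, rot_head, x):
    x_rot, y_rot = rotation_batch(x)
    return F.cross_entropy(rot_head(encoder(x_rot)), y_rot)
```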
Semi-Supervised Domain Adaptation via Minimax Entropy
TLDR
A novel Minimax Entropy (MME) approach that adversarially optimizes an adaptive few-shot model for semi-supervised domain adaptation (SSDA) setting, setting a new state of the art for SSDA.
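
The adversarial entropy term can be sketched as below; this is a simplified reading of MME, not the official implementation. Minimizing this loss updates the classifier to maximize prediction entropy on unlabeled target data, while routing the features through a gradient reversal layer (sketched under the DANN entry below) flips the sign so the feature extractor minimizes entropy instead.

```python
import torch
import torch.nn.functional as F

def adversarial_entropy(classifier, features, lam=0.1):
    # Negative entropy of predictions on unlabeled target features: gradient
    # descent on this loss makes the classifier *maximize* entropy; a gradient
    # reversal layer between features and classifier makes the feature
    # extractor *minimize* it, yielding the minimax game.
    p = F.softmax(classifier(features), dim=1)
    return lam * (p * p.clamp_min(1e-8).log()).sum(dim=1).mean()
```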
Semi-supervised Domain Adaptation with Subspace Learning for visual recognition
TLDR
A novel domain adaptation framework, named Semi-supervised Domain Adaptation with Subspace Learning (SDASL), which jointly explores invariant low-dimensional structures across domains to correct data distribution mismatch and leverages available unlabeled target examples to exploit the underlying intrinsic information in the target domain.
Moment Matching for Multi-Source Domain Adaptation
TLDR
A new deep learning approach, Moment Matching for Multi-Source Domain Adaptation (M3SDA), which aims to transfer knowledge learned from multiple labeled source domains to an unlabeled target domain by dynamically aligning moments of their feature distributions.
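
The core idea can be sketched as matching the first k feature moments pairwise across all domains (the several sources and the target). This is a hedged illustration of moment matching only; the full M3SDA objective also trains per-source classifiers.

```python
import torch

def moment_matching_loss(domain_feats, k=2):
    # domain_feats: list of (N_i, D) feature tensors, one per domain.
    # Penalize pairwise differences of the first k moments across domains.
    loss = domain_feats[0].new_zeros(())
    for order in range(1, k + 1):
        moments = [(f ** order).mean(dim=0) for f in domain_feats]
        for i in range(len(moments)):
            for j in range(i + 1, len(moments)):
                loss = loss + (moments[i] - moments[j]).norm(p=2)
    return loss
```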
Fast Generalized Distillation for Semi-Supervised Domain Adaptation
TLDR
It is shown that, without accessing the source data, GDSDA can effectively use the unlabeled data to transfer knowledge from the source models and efficiently solve the SDA problem.
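
The distillation step can be pictured with a standard temperature-scaled soft-label loss: a pretrained source model (teacher) labels unlabeled target data for the target model (student), so the source data itself is never touched. This is a generic distillation sketch, not GDSDA's exact formulation.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # Student imitates the teacher's temperature-softened predictions;
    # scaling by T**2 keeps gradient magnitudes comparable across temperatures.
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    log_probs = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_probs, soft_targets, reduction="batchmean") * (T * T)
```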
Unsupervised Data Augmentation for Consistency Training
TLDR
A new perspective on how to effectively noise unlabeled examples is presented and it is argued that the quality of noising, specifically those produced by advanced data augmentation methods, plays a crucial role in semi-supervised learning.
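
The consistency term can be sketched as follows: predictions on an unlabeled example should match a fixed target obtained from a lightly augmented view, with the strongly augmented view supplying the noise. This is a minimal reading of UDA; the paper adds confidence masking and training-signal annealing on top.

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, x_weak, x_strong):
    # Target distribution from the lightly augmented view, held fixed.
    with torch.no_grad():
        target = F.softmax(model(x_weak), dim=1)
    log_p = F.log_softmax(model(x_strong), dim=1)
    return F.kl_div(log_p, target, reduction="batchmean")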
Unified Deep Supervised Domain Adaptation and Generalization
TLDR
This work provides a unified framework for addressing visual supervised domain adaptation and generalization with deep models, exploiting a Siamese architecture to revert to point-wise surrogates of distribution distances and similarities.
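
The "point-wise surrogate" idea can be sketched as a contrastive loss over Siamese pairs: same-class source/target pairs are pulled together while different-class pairs are pushed at least a margin apart. A hedged sketch of the general recipe, not the paper's exact losses.

```python
import torch
import torch.nn.functional as F

def semantic_alignment_loss(f_src, f_tgt, same_class, margin=1.0):
    # f_src, f_tgt: paired embeddings from a shared (Siamese) encoder;
    # same_class: 1 where a pair shares a label, 0 otherwise.
    same_class = same_class.float()
    d = F.pairwise_distance(f_src, f_tgt)
    pos = same_class * d.pow(2)                         # pull same-class pairs
    neg = (1 - same_class) * F.relu(margin - d).pow(2)  # push others apart
    return (pos + neg).mean()
```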
AutoDIAL: Automatic Domain Alignment Layers
TLDR
Opposite to previous works which define a priori in which layers adaptation should be performed, this method is able to automatically learn the degree of feature alignment required at different levels of the deep network.
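
A sketch of a domain alignment layer in AutoDIAL's spirit: each domain is normalized with the statistics of a learnable mixture of the two domains, where the mixing weight alpha (learned per layer, clamped to [0.5, 1]) controls how much alignment that layer performs. Simplified: affine parameters and running statistics for inference are omitted.

```python
import torch
import torch.nn as nn

class DALayer(nn.Module):
    # alpha -> 1: fully domain-specific statistics; alpha -> 0.5: fully
    # shared statistics. Since alpha is learned, the network itself decides
    # the degree of alignment at each depth.
    def __init__(self, eps=1e-5):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(1.0))
        self.eps = eps

    def forward(self, x_s, x_t):                      # NCHW source/target batches
        a = self.alpha.clamp(0.5, 1.0)
        dims = (0, 2, 3)                              # per-channel statistics
        mu_s, mu_t = x_s.mean(dims), x_t.mean(dims)
        var_s = x_s.var(dims, unbiased=False)
        var_t = x_t.var(dims, unbiased=False)
        # Moments of the cross-domain mixture distributions.
        mu_sa = a * mu_s + (1 - a) * mu_t
        mu_ta = a * mu_t + (1 - a) * mu_s
        var_sa = a * (var_s + mu_s ** 2) + (1 - a) * (var_t + mu_t ** 2) - mu_sa ** 2
        var_ta = a * (var_t + mu_t ** 2) + (1 - a) * (var_s + mu_s ** 2) - mu_ta ** 2
        def norm(x, m, v):
            return (x - m[None, :, None, None]) / (v[None, :, None, None] + self.eps).sqrt()
        return norm(x_s, mu_sa, var_sa), norm(x_t, mu_ta, var_ta)
```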
Domain-Adversarial Training of Neural Networks
TLDR
A new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions, which can be achieved in almost any feed-forward model by augmenting it with few standard layers and a new gradient reversal layer.
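
The gradient reversal layer mentioned in the summary is small enough to show in full: identity in the forward pass, gradient multiplied by -lambda in the backward pass, so the feature extractor underneath is trained to fool a domain classifier stacked on top of it. A standard implementation; surrounding names are hypothetical.

```python
import torch

class GradientReversal(torch.autograd.Function):
    # Identity on the way forward; flips (and scales) gradients on the way
    # back, turning the domain classifier's minimization into maximization
    # for the feature extractor below.
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

# Usage (names hypothetical):
#   domain_logits = domain_classifier(GradientReversal.apply(features, 1.0))
```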