Cross-Domain Adaptive Clustering for Semi-Supervised Domain Adaptation

@inproceedings{Li2021CrossDomainAC,
  title={Cross-Domain Adaptive Clustering for Semi-Supervised Domain Adaptation},
  author={Jichang Li and Guanbin Li and Yemin Shi and Yizhou Yu},
  booktitle={2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021},
  pages={2505-2514}
}
  • Published 19 April 2021
  • Computer Science
In semi-supervised domain adaptation, a few labeled samples per class in the target domain guide features of the remaining target samples to aggregate around them. However, the trained model cannot produce a highly discriminative feature representation for the target domain because the training data is dominated by labeled samples from the source domain. This could lead to disconnection between the labeled and unlabeled target samples as well as misalignment between unlabeled target samples and… 
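The training setup described in the abstract can be sketched as a simple loss composition: supervised losses on both the dominant labeled source data and the few labeled target samples. This is only a minimal illustration of the SSDA objective, not the paper's actual method; all function names here are hypothetical.

```python
import numpy as np

def cross_entropy(logits, labels):
    # Mean cross-entropy over a batch of logits with shape (n, num_classes).
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def ssda_supervised_loss(src_logits, src_labels, tgt_logits, tgt_labels):
    # In SSDA the labeled pool is dominated by the source domain; the few
    # labeled target samples contribute only a second, much smaller term.
    return cross_entropy(src_logits, src_labels) + cross_entropy(tgt_logits, tgt_labels)
```

Because the source term dominates, methods like the one above add further losses (clustering, alignment, consistency) on the unlabeled target samples.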

Citations

Semi-supervised Domain Adaptive Structure Learning
TLDR
An adaptive structure learning method to regularize the cooperation of SSL and DA, inspired by multi-view learning, that applies maximum mean discrepancy (MMD) distance minimization and self-training (ST) to project the contradictory structures into a shared view to make a reliable final decision.
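The MMD distance mentioned in the TLDR above measures the gap between two feature distributions via kernel mean embeddings. A minimal Gaussian-kernel estimator (illustrative only, not this paper's implementation):

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # Pairwise Gaussian kernel matrix between rows of a and rows of b.
    sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    # Biased estimate of squared MMD between samples x and y:
    # E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)].
    return (gaussian_kernel(x, x, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean()
            - 2 * gaussian_kernel(x, y, sigma).mean())
```

Minimizing `mmd2` between source and target features drives the two domains toward the same distribution in feature space.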
Multi-level Consistency Learning for Semi-supervised Domain Adaptation
TLDR
A Multi-level Consistency Learning (MCL) framework for SSDA, which regularizes the consistency of different views of target domain samples at three levels and facilitates the learning of both discriminative and compact target feature representations.
CDTrans: Cross-domain Transformer for Unsupervised Domain Adaptation
TLDR
This paper designs a two-way center-aware labeling algorithm to produce pseudo labels for samples in the target domain, and a weight-sharing triple-branch transformer framework is proposed to apply self-attention and cross-attention for source/target feature learning and source-target domain alignment, respectively.
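At its simplest, center-aware pseudo-labeling assigns each target sample the label of its nearest source class center. The sketch below shows only this one-way simplification; CDTrans's two-way, weight-sharing details are not reproduced here, and the helper names are hypothetical.

```python
import numpy as np

def class_centers(features, labels, num_classes):
    # Mean feature vector per class, computed from labeled source data.
    return np.stack([features[labels == c].mean(axis=0) for c in range(num_classes)])

def pseudo_labels(target_features, centers):
    # Assign each target sample to the class of its nearest center
    # (squared Euclidean distance in feature space).
    dists = ((target_features[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)
```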
Surprisingly Simple Semi-Supervised Domain Adaptation with Pretraining and Consistency
TLDR
The Pretraining and Consistency (PAC) approach can achieve state-of-the-art accuracy on this semi-supervised domain adaptation task, surpassing multiple adversarial domain alignment methods across multiple datasets.
Probability Contrastive Learning for Domain Adaptation
TLDR
A novel probability contrastive learning (PCL) is proposed, which not only produces compact features but also enforces them to be distributed around the class weights learned from source data.
Semantic-aware Representation Learning Via Probability Contrastive Loss
TLDR
A novel probability contrastive learning (PCL) is proposed, which not only produces rich features but also enforces them to be distributed around the class prototypes to exploit the class semantics during optimization.
Adversarial Contrastive Learning by Permuting Cluster Assignments
TLDR
This work proposes SwARo, an adversarial contrastive framework that incorporates cluster assignment permutations to generate representative adversarial samples and evaluates it on multiple benchmark datasets and against various white-box and black-box attacks, obtaining consistent improvements over state-of-the-art baselines.
PointMatch: A Consistency Training Framework for Weakly Supervised Semantic Segmentation of 3D Point Clouds
TLDR
This work proposes a novel framework, PointMatch, that stands on both data and label, by applying consistency regularization to sufficiently probe information from data itself and leveraging weak labels as assistance at the same time, which achieves the state-of-the-art performance under various weakly-supervised schemes on both ScanNet-v2 and S3DIS datasets.
Low-confidence Samples Matter for Domain Adaptation
TLDR
This work proposes a novel contrastive learning method by processing low-confidence samples, which encourages the model to make use of the target data structure through the instance discrimination process, and combines cross-domain mixup to augment the proposed contrastive loss.
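The cross-domain mixup mentioned in the TLDR above interpolates pairs of inputs and labels, here blending a source sample with a target sample. This is a generic mixup sketch under standard assumptions, not the authors' exact formulation.

```python
import numpy as np

def mixup(x_src, y_src, x_tgt, y_tgt, alpha=0.2, rng=None):
    # Sample a mixing coefficient from Beta(alpha, alpha) and blend the
    # inputs and one-hot labels of a source sample and a target sample.
    rng = rng if rng is not None else np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    return lam * x_src + (1 - lam) * x_tgt, lam * y_src + (1 - lam) * y_tgt
```

The interpolated pairs smooth the decision boundary between domains, which is why mixup pairs naturally with contrastive objectives.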

References

Showing 1-10 of 48 references
A DIRT-T Approach to Unsupervised Domain Adaptation
TLDR
Two novel and related models are proposed: the Virtual Adversarial Domain Adaptation (VADA) model, which combines domain adversarial training with a penalty term that punishes violation of the cluster assumption, and the Decision-boundary Iterative Refinement Training with a Teacher (DIRT-T) model, which takes the VADA model as initialization and employs natural gradient steps to further minimize cluster assumption violation.
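The cluster assumption penalty referenced above is commonly realized as conditional entropy minimization on unlabeled predictions: entropy is low when the classifier is confident, i.e. when decision boundaries avoid dense data regions. A minimal version (illustrative, not the paper's full objective):

```python
import numpy as np

def conditional_entropy(probs, eps=1e-8):
    # Mean prediction entropy over a batch of class-probability rows.
    # Minimizing this pushes decision boundaries into low-density regions.
    return -(probs * np.log(probs + eps)).sum(axis=1).mean()
```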
Progressive Feature Alignment for Unsupervised Domain Adaptation
TLDR
The Progressive Feature Alignment Network (PFAN) is proposed to align the discriminative features across domains progressively and effectively, via exploiting the intra-class variation in the target domain.
Opposite Structure Learning for Semi-supervised Domain Adaptation
TLDR
A novel framework for semi-supervised domain adaptation by unifying the learning of opposite structures (UODA) is proposed, which progressively updates the measurement of distance and the feature representation on both domains via an adversarial training paradigm.
MiniMax Entropy Network: Learning Category-Invariant Features for Domain Adaptation
TLDR
This paper proposes an easy-to-implement method dubbed MiniMax Entropy Networks (MMEN), which focuses on learning the categorical information from unlabeled target samples with the help of labeled source samples based on adversarial learning.
Cluster Alignment With a Teacher for Unsupervised Domain Adaptation
TLDR
Cluster Alignment with a Teacher (CAT) is proposed for unsupervised domain adaptation, which can effectively incorporate the discriminative clustering structures in both domains for better adaptation.
MiCo: Mixup Co-Training for Semi-Supervised Domain Adaptation
TLDR
A new approach for SSDA is proposed that explicitly decomposes SSDA into two sub-problems: a semi-supervised learning problem in the target domain and an unsupervised domain adaptation (UDA) problem across domains.
Discriminative Adversarial Domain Adaptation
TLDR
Discriminative Adversarial Domain Adaptation has a novel adversarial objective that encourages a mutually inhibitory relation between category and domain predictions for any input instance and it defines a minimax game that can promote the joint distribution alignment.
Partial Adversarial Domain Adaptation
TLDR
This paper presents Partial Adversarial Domain Adaptation (PADA), which simultaneously alleviates negative transfer by down-weighting the data of outlier source classes for training both the source classifier and the domain adversary, and promotes positive transfer by matching the feature distributions in the shared label space.
Moment Matching for Multi-Source Domain Adaptation
TLDR
A new deep learning approach, Moment Matching for Multi-Source Domain Adaptation (M3SDA), is proposed, which aims to transfer knowledge learned from multiple labeled source domains to an unlabeled target domain by dynamically aligning moments of their feature distributions.
Attract, Perturb, and Explore: Learning a Feature Alignment Network for Semi-supervised Domain Adaptation
TLDR
An SSDA framework that aims to align features via alleviation of the intra-domain discrepancy within the target domain is proposed and the incompatibility of the conventional adversarial perturbation methods with SSDA is demonstrated.