FixBi: Bridging Domain Spaces for Unsupervised Domain Adaptation

@article{Na2021FixBiBD,
  title={FixBi: Bridging Domain Spaces for Unsupervised Domain Adaptation},
  author={Jaemin Na and Heechul Jung and HyungJin Chang and Wonjun Hwang},
  journal={2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021},
  pages={1094-1103}
}
Unsupervised domain adaptation (UDA) methods for learning domain-invariant representations have achieved remarkable progress. However, most studies adapt directly from the source domain to the target domain and suffer from large domain discrepancies. In this paper, we propose a UDA method that effectively handles such large domain discrepancies. We introduce a fixed ratio-based mixup to augment multiple intermediate domains between the source and target domain…
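The fixed ratio-based mixup from the abstract can be sketched in a few lines. Below is a minimal PyTorch illustration; the helper name, the complementary 0.7/0.3 ratios, and the toy tensor shapes are assumptions for illustration rather than the paper's exact configuration.

import torch

def fixed_ratio_mixup(x_src: torch.Tensor, x_tgt: torch.Tensor, lam: float) -> torch.Tensor:
    """Mix source and target samples at a *fixed* ratio lam.

    Standard mixup samples lam from a Beta distribution; keeping it
    fixed makes each ratio define a consistent intermediate domain
    between source and target.
    """
    return lam * x_src + (1.0 - lam) * x_tgt

x_src = torch.randn(4, 3, 32, 32)  # toy batch of source images
x_tgt = torch.randn(4, 3, 32, 32)  # toy batch of target images
x_sd = fixed_ratio_mixup(x_src, x_tgt, lam=0.7)  # source-dominant bridge
x_td = fixed_ratio_mixup(x_src, x_tgt, lam=0.3)  # target-dominant bridge

Using two complementary fixed ratios yields one source-dominant and one target-dominant intermediate domain, which is the bridging idea the abstract describes.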


Contrastive Vicinal Space for Unsupervised Domain Adaptation
TLDR
This paper proposes an instance-wise minimax strategy that minimizes the entropy of high-uncertainty instances in the vicinal space, tackling the problem of the equilibrium collapse of labels in vicinal instances.
Debiased Learning from Naturally Imbalanced Pseudo-Labels for Zero-Shot and Semi-Supervised Learning
TLDR
To eliminate the model bias, this work proposes a simple yet effective method, DebiasMatch, comprising an adaptive debiasing module and an adaptive marginal loss, which significantly outperforms previous state-of-the-art methods on zero-shot and semi-supervised learning tasks.
Domain Adaptation with Invariant Representation Learning: What Transformations to Learn?
Unsupervised domain adaptation, as a prevalent transfer learning setting, spans many real-world applications. With the increasing representational power and applicability of neural networks…
Exploiting Both Domain-specific and Invariant Knowledge via a Win-win Transformer for Unsupervised Domain Adaptation
TLDR
This paper proposes a Win-Win TRansformer framework (WinTR) that separately explores the domain-specific knowledge for each domain while interchanging cross-domain knowledge, validating the effectiveness of exploiting both domain-specific and invariant information for both domains in UDA.
Geometry-Aware Unsupervised Domain Adaptation
TLDR
This work proposes a novel geometry-aware model to learn transferability and discriminability simultaneously via nuclear norm optimization, and introduces domain coherence and class orthogonality for UDA from the perspective of subspace geometry.
Gradual Domain Adaptation without Indexed Intermediate Domains
  • Hong-You Chen
  • 2021
The effectiveness of unsupervised domain adaptation degrades when there is a large discrepancy between the source and target domains. Gradual domain adaptation (GDA) is one promising way to mitigate…
IDM: An Intermediate Domain Module for Domain Adaptive Person Re-ID
TLDR
This work argues that the bridging between the source and target domains can be utilized to tackle the UDA re-ID task, and proposes an Intermediate Domain Module (IDM) to generate intermediate domains' representations on the fly by mixing the source and target domains' hidden representations using two domain factors (a sketch of this mixing follows after this list).
Mind the Gap: Domain Gap Control for Single Shot Domain Adaptation for Generative Adversarial Networks
TLDR
Several new regularizers for controlling the domain gap are proposed to optimize the weights of the pre-trained StyleGAN generator so that it outputs images in domain B instead of domain A, showing significant visual improvements over the state of the art.
More is Better: A Novel Multi-view Framework for Domain Generalization
  • Jian Zhang, Lei Qi, Yinghuan Shi, Yang Gao
  • Computer Science
    ArXiv
  • 2021
TLDR
This paper finds that overfitting not only causes inferior generalization to unseen target domains but also leads to unstable predictions at test time, and proposes a novel multi-view DG framework that outperforms several state-of-the-art approaches.
Probability Contrastive Learning for Domain Adaptation
TLDR
A novel probability contrastive learning (PCL) method is proposed, which not only produces compact features but also enforces them to be distributed around the class weights learned from the source data.
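To make the bridging idea in IDM (above) concrete, here is a minimal sketch of mixing source and target hidden representations with two domain factors. The module shape, the softmax-normalized factor head, and the tensor sizes are illustrative assumptions, not the IDM implementation.

import torch
import torch.nn as nn

class IntermediateDomainMix(nn.Module):
    """Toy module: predicts two non-negative domain factors that sum
    to one, then mixes source/target hidden features accordingly."""

    def __init__(self, feat_dim: int):
        super().__init__()
        self.factor_head = nn.Linear(2 * feat_dim, 2)

    def forward(self, h_src: torch.Tensor, h_tgt: torch.Tensor) -> torch.Tensor:
        # Predict per-sample domain factors from the concatenated
        # features; softmax keeps them positive and summing to one.
        factors = torch.softmax(
            self.factor_head(torch.cat([h_src, h_tgt], dim=1)), dim=1
        )
        a, b = factors[:, :1], factors[:, 1:]
        return a * h_src + b * h_tgt  # intermediate-domain representation

# Usage with random 256-d hidden features for an 8-sample batch.
mix = IntermediateDomainMix(feat_dim=256)
h_mix = mix(torch.randn(8, 256), torch.randn(8, 256))

Because the factors are predicted per sample rather than fixed, such a module can generate intermediate domains on the fly, in contrast to the fixed-ratio scheme sketched earlier.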

References

SHOWING 1-10 OF 48 REFERENCES
Contrastive Adaptation Network for Unsupervised Domain Adaptation
TLDR
This paper proposes the Contrastive Adaptation Network (CAN), optimizing a new metric which explicitly models the intra-class domain discrepancy and the inter-class domain discrepancy, and designs an alternating update strategy for training CAN in an end-to-end manner.
Unsupervised Domain Adaptation via Structurally Regularized Deep Clustering
  • Hui Tang, Ke Chen, K. Jia
  • Computer Science
    2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2020
TLDR
This work describes the proposed method as Structurally Regularized Deep Clustering (SRDC), which enhances target discrimination with clustering of intermediate network features and enhances structural regularization with soft selection of less divergent source examples.
Model Adaptation: Unsupervised Domain Adaptation Without Source Data
TLDR
This paper proposes a new framework, referred to as a collaborative class-conditional generative adversarial net, to bypass the dependence on the source data; it achieves superior performance on multiple adaptation tasks with only unlabeled target data, verifying its effectiveness in this challenging setting.
Fast Generalized Distillation for Semi-Supervised Domain Adaptation
TLDR
It is shown that without accessing the source data, GDSDA can effectively utilize the unlabeled data to transfer the knowledge from the source models to efficiently solve the SDA problem.
Unsupervised Domain Adaptation With Hierarchical Gradient Synchronization
TLDR
This work proposes a novel method called Hierarchical Gradient Synchronization to model the synchronization relationship among the local distribution pieces and global distribution, aiming for more precise domain-invariant features.
Virtual Mixup Training for Unsupervised Domain Adaptation
TLDR
A new regularization method called Virtual Mixup Training (VMT) is proposed, which imposes the locally-Lipschitz constraint on the areas in between training data and can be combined with most existing models, such as the recent state-of-the-art model VADA.
MiCo: Mixup Co-Training for Semi-Supervised Domain Adaptation
TLDR
A new approach for SSDA is proposed that explicitly decomposes SSDA into two sub-problems: a semi-supervised learning problem in the target domain and an unsupervised domain adaptation (UDA) problem across domains.
Learning Semantic Representations for Unsupervised Domain Adaptation
TLDR
A moving semantic transfer network is presented, which learns semantic representations for unlabeled target samples by aligning the labeled source centroid and the pseudo-labeled target centroid, resulting in improved target classification accuracy.
Opposite Structure Learning for Semi-supervised Domain Adaptation
TLDR
A novel framework for semi-supervised domain adaptation by unifying the learning of opposite structures (UODA) is proposed, which progressively updates the measurement of distance and the feature representation on both domains via an adversarial training paradigm.
Semi-Supervised Domain Adaptation via Minimax Entropy
TLDR
A novel Minimax Entropy (MME) approach that adversarially optimizes an adaptive few-shot model for the semi-supervised domain adaptation (SSDA) setting, setting a new state of the art for SSDA.
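The minimax-entropy recipe in MME can be sketched roughly as follows. The gradient-reversal layer, cosine-similarity classifier, temperature value, and tensor shapes are illustrative assumptions rather than the paper's exact implementation.

import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in the
    backward pass, so the feature extractor and the classifier
    effectively play a minimax game over the same entropy term."""

    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg()

def target_entropy(features: torch.Tensor, prototypes: torch.Tensor,
                   temperature: float = 0.05) -> torch.Tensor:
    # Cosine-similarity classifier over gradient-reversed features.
    f = GradReverse.apply(F.normalize(features, dim=1))
    logits = f @ F.normalize(prototypes, dim=1).t() / temperature
    p = F.softmax(logits, dim=1)
    # Mean Shannon entropy of the target predictions.
    return -(p * torch.log(p + 1e-8)).sum(dim=1).mean()

# Unlabeled target batch: 8 samples, 64-d features, 10 classes.
feats = torch.randn(8, 64, requires_grad=True)
protos = torch.randn(10, 64, requires_grad=True)
target_entropy(feats, protos).backward()  # reversed grads reach feats

One optimizer step on this loss pushes the class prototypes and the feature extractor in opposite directions on the entropy of unlabeled target predictions, which is the adversarial mechanism the TLDR above summarizes.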