FixBi: Bridging Domain Spaces for Unsupervised Domain Adaptation

@inproceedings{Na2021FixBiBD,
  title     = {FixBi: Bridging Domain Spaces for Unsupervised Domain Adaptation},
  author    = {Jaemin Na and Heechul Jung and HyungJin Chang and Wonjun Hwang},
  booktitle = {2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2021},
  pages     = {1094--1103}
}
Unsupervised domain adaptation (UDA) methods for learning domain invariant representations have achieved remarkable progress. However, most of the studies were based on direct adaptation from the source domain to the target domain and have suffered from large domain discrepancies. In this paper, we propose a UDA method that effectively handles such large domain discrepancies. We introduce a fixed ratio-based mixup to augment multiple intermediate domains between the source and target domain …
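The fixed ratio-based mixup described in the abstract can be sketched as follows. This is a minimal illustration of the idea, not the authors' implementation; the function and argument names (`fixed_ratio_mixup`, `lam`, `y_target_pseudo`) are assumptions for the sketch:

```python
import numpy as np

def fixed_ratio_mixup(x_source, y_source, x_target, y_target_pseudo, lam=0.7):
    """Mix a source and a target sample with a FIXED ratio `lam`,
    yielding a sample from an intermediate domain between the two.

    Unlike standard mixup, `lam` is not drawn from a Beta distribution
    but held fixed, so the mixed samples consistently populate one
    intermediate domain.
    """
    x_mixed = lam * x_source + (1.0 - lam) * x_target
    # Labels are mixed with the same ratio; target labels are pseudo-labels
    # since the target domain is unlabeled in UDA.
    y_mixed = lam * y_source + (1.0 - lam) * y_target_pseudo
    return x_mixed, y_mixed

# Example: a source-dominant mix (lam = 0.7) of a source sample of ones
# and a target sample of zeros, with one-hot source and pseudo labels.
x_m, y_m = fixed_ratio_mixup(
    np.ones(4), np.array([1.0, 0.0]),
    np.zeros(4), np.array([0.0, 1.0]),
    lam=0.7,
)
```

Choosing two complementary fixed ratios (e.g. 0.7 and 0.3) gives a source-dominant and a target-dominant intermediate domain, which is how the paper bridges the gap rather than adapting in one jump.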

Citations

Contrastive Vicinal Space for Unsupervised Domain Adaptation
  • Jaemin Na, Dongyoon Han, Hyung Jin Chang, Wonjun Hwang
  • Computer Science
  • 2021
Utilizing vicinal space between the source and target domains is one of the recent unsupervised domain adaptation approaches. However, the problem of the equilibrium collapse of labels, where the …
Exploiting Both Domain-specific and Invariant Knowledge via a Win-win Transformer for Unsupervised Domain Adaptation
  • Wenxuan Ma, Jinming Zhang, Shuang Li, Chi Harold Liu, Yulin Wang, Wei Li
  • Computer Science
  • 2021
Unsupervised Domain Adaptation (UDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain. Most existing UDA approaches enable knowledge transfer via learning …
IDM: An Intermediate Domain Module for Domain Adaptive Person Re-ID
This work argues that the bridging between the source and target domains can be utilized to tackle the UDA re-ID task, and proposes an Intermediate Domain Module (IDM) to generate intermediate-domain representations on-the-fly by mixing the source and target hidden representations using two domain factors.
Mind the Gap: Domain Gap Control for Single Shot Domain Adaptation for Generative Adversarial Networks
Several new regularizers for controlling the domain gap are proposed to optimize the weights of the pre-trained StyleGAN generator so that it outputs images in domain B instead of domain A, showing significant visual improvements over the state of the art.
Probability Contrastive Learning for Domain Adaptation
  • Junjie Li, Yixin Zhang, Zilei Wang, Keyu Tu
  • Computer Science
  • ArXiv
  • 2021
Recent feature contrastive learning (FCL) has shown promising performance in self-supervised representation learning. For domain adaptation, however, FCL cannot show overwhelming gains since the …
Reducing the Covariate Shift by Mirror Samples in Cross Domain Alignment
  • Yin Zhao, Minquan Wang, Longjun Cai
  • Computer Science
  • ArXiv
  • 2021
A novel concept named the (virtual) mirror is introduced, which represents the equivalent sample in the other domain; a mirror loss that aligns the mirror pairs across domains is constructed to enhance the alignment of the two domains.
Semantic-aware Representation Learning Via Probability Contrastive Loss
A novel probability contrastive learning (PCL) is proposed, which not only produces rich features but also enforces them to be distributed around the class prototypes to exploit the class semantics during optimization.
Survey: Image Mixing and Deleting for Data Augmentation
This paper empirically evaluates these approaches for image classification, fine-grained image recognition, and object detection, showing that this category of data augmentation improves the overall performance of deep neural networks.
To miss-attend is to misalign! Residual Self-Attentive Feature Alignment for Adapting Object Detectors
Advancements in adaptive object detection can lead to tremendous improvements in applications like autonomous navigation, as they alleviate the distributional shifts along the detection pipeline. …

References

Showing 1-10 of 48 references
Contrastive Adaptation Network for Unsupervised Domain Adaptation
This paper proposes the Contrastive Adaptation Network (CAN), which optimizes a new metric that explicitly models the intra-class and inter-class domain discrepancies, and designs an alternating update strategy for training CAN in an end-to-end manner.
Unsupervised Domain Adaptation via Structurally Regularized Deep Clustering
  • Hui Tang, Ke Chen, K. Jia
  • Computer Science
  • 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2020
This work describes the proposed method as Structurally Regularized Deep Clustering (SRDC), which enhances target discrimination with clustering of intermediate network features and enhances structural regularization with soft selection of less divergent source examples.
Model Adaptation: Unsupervised Domain Adaptation Without Source Data
This paper proposes a new framework, referred to as a collaborative class conditional generative adversarial net, to bypass the dependence on the source data; it achieves superior performance on multiple adaptation tasks with only unlabeled target data, which verifies its effectiveness in this challenging setting.
Fast Generalized Distillation for Semi-Supervised Domain Adaptation
It is shown that, without accessing the source data, GDSDA can effectively utilize the unlabeled data to transfer knowledge from the source models and efficiently solve the SDA problem.
Unsupervised Domain Adaptation With Hierarchical Gradient Synchronization
This work proposes a novel method called Hierarchical Gradient Synchronization to model the synchronization relationship among the local distribution pieces and the global distribution, aiming for more precise domain-invariant features.
Virtual Mixup Training for Unsupervised Domain Adaptation
A new regularization method called Virtual Mixup Training (VMT) is proposed, which incorporates the locally-Lipschitz constraint into the areas in between training data and can be combined with most existing models, such as the recent state-of-the-art model VADA.
MiCo: Mixup Co-Training for Semi-Supervised Domain Adaptation
A new approach for SSDA is proposed, which explicitly decomposes SSDA into two sub-problems: a semi-supervised learning problem in the target domain and an unsupervised domain adaptation (UDA) problem across domains.
Learning Semantic Representations for Unsupervised Domain Adaptation
A moving semantic transfer network is presented, which learns semantic representations for unlabeled target samples by aligning labeled source centroids and pseudo-labeled target centroids, resulting in improved target classification accuracy.
Opposite Structure Learning for Semi-supervised Domain Adaptation
A novel framework for semi-supervised domain adaptation by unifying the learning of opposite structures (UODA) is proposed, which progressively updates the measurement of distance and the feature representation on both domains via an adversarial training paradigm.
Semi-Supervised Domain Adaptation via Minimax Entropy
A novel Minimax Entropy (MME) approach is proposed that adversarially optimizes an adaptive few-shot model for the semi-supervised domain adaptation (SSDA) setting, setting a new state of the art for SSDA.