Connect, Not Collapse: Explaining Contrastive Learning for Unsupervised Domain Adaptation

@article{Shen2022ConnectNC,
  title={Connect, Not Collapse: Explaining Contrastive Learning for Unsupervised Domain Adaptation},
  author={Kendrick Shen and Robbie Jones and Ananya Kumar and Sang Michael Xie and Jeff Z. HaoChen and Tengyu Ma and Percy Liang},
  journal={ArXiv},
  year={2022},
  volume={abs/2204.00570}
}
We consider unsupervised domain adaptation (UDA), where labeled data from a source domain (e.g., photographs) and unlabeled data from a target domain (e.g., sketches) are used to learn a classifier for the target domain. Conventional UDA methods (e.g., domain adversarial training) learn domain-invariant features to improve generalization to the target domain. In this paper, we show that contrastive pre-training, which learns features on unlabeled source and target data and then fine-tunes on…
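The abstract is truncated, but the recipe it describes (contrastive pre-training on pooled unlabeled source and target data, then fine-tuning on labeled source data) can be sketched roughly as follows. This is a minimal sketch, not the paper's exact setup: the toy encoder, augmentation, temperature, and random stand-in data are all illustrative assumptions.

```python
# Minimal sketch: SimCLR-style contrastive pre-training on pooled unlabeled
# source + target images, then fine-tuning a classifier head on labeled
# source data only. Architecture and hyperparameters are illustrative.
import torch
import torch.nn.functional as F
from torch import nn

def info_nce(z1, z2, temperature=0.1):
    """NT-Xent loss over a batch of two augmented views (assumed temperature)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                        # (2N, d)
    sim = z @ z.t() / temperature                         # (2N, 2N) similarity logits
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))            # drop self-similarity
    # the positive for view i is its other view: i + n (mod 2n)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512), nn.ReLU(),
                        nn.Linear(512, 128))              # stand-in for a ResNet
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

def augment(x):
    # hypothetical stochastic augmentation (crops / color jitter in practice)
    return x + 0.05 * torch.randn_like(x)

# Phase 1: contrastive pre-training on unlabeled source + target images.
for _ in range(10):                                       # toy loop with random data
    x = torch.randn(64, 3, 32, 32)                        # pooled source + target batch
    loss = info_nce(encoder(augment(x)), encoder(augment(x)))
    opt.zero_grad(); loss.backward(); opt.step()

# Phase 2: fine-tune on labeled source data only (a linear head on frozen
# features here for brevity; full fine-tuning is also possible).
head = nn.Linear(128, 10)
clf_opt = torch.optim.Adam(head.parameters(), lr=1e-3)
for _ in range(10):
    xs, ys = torch.randn(64, 3, 32, 32), torch.randint(0, 10, (64,))
    loss = F.cross_entropy(head(encoder(xs).detach()), ys)
    clf_opt.zero_grad(); loss.backward(); clf_opt.step()
```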
Beyond Separability: Analyzing the Linear Transferability of Contrastive Representations to Related Subpopulations
TLDR
It is proved that linear transferability can occur when data from the same class in different domains (e.g., photo dogs and cartoon dogs) are more related to each other than data from different classes in different domains are.
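"Linear transferability" here means that a linear classifier fit on source-domain representations also separates classes in the target domain. A minimal way to measure it is a linear probe, sketched below; the random feature arrays are stand-ins for representations extracted by a contrastive encoder.

```python
# Sketch of measuring linear transferability: fit a linear probe on
# source-domain features, evaluate it on target-domain features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feats_src, labels_src = rng.normal(size=(500, 128)), rng.integers(0, 10, 500)
feats_tgt, labels_tgt = rng.normal(size=(500, 128)), rng.integers(0, 10, 500)

probe = LogisticRegression(max_iter=1000).fit(feats_src, labels_src)
print("source accuracy:", probe.score(feats_src, labels_src))
print("target accuracy:", probe.score(feats_tgt, labels_tgt))  # linear transferability
```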
Adapting Self-Supervised Vision Transformers by Probing Attention-Conditioned Masking Consistency
Visual domain adaptation (DA) seeks to transfer trained models to unseen, unlabeled domains across distribution shift, but approaches typically focus on adapting convolutional neural network architectures.
On the duality between contrastive and non-contrastive self-supervised learning
TLDR
The theoretical and quantitative results suggest that the numerical gaps between contrastive and non-contrastive methods in certain regimes can be significantly reduced given better network design choices and hyperparameter tuning.
Local Spatiotemporal Representation Learning for Longitudinally-consistent Neuroimage Analysis
TLDR
The proposed framework exploits the spatiotemporal self-similarity of learned multi-scale intra-subject features for pretraining, develops several feature-wise regularizations that avoid collapsed identity representations, and proposes a surprisingly simple self-supervised segmentation consistency regularization to exploit intra-subject correlation.

References

SHOWING 1-10 OF 55 REFERENCES
Cross-domain Contrastive Learning for Unsupervised Domain Adaptation
TLDR
This work builds upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets, and introduces a simple yet effective framework, CDCL, for domain alignment.
A DIRT-T Approach to Unsupervised Domain Adaptation
TLDR
Two novel and related models are proposed: the Virtual Adversarial Domain Adaptation (VADA) model, which combines domain adversarial training with a penalty term that punishes violation of the cluster assumption, and the Decision-boundary Iterative Refinement Training with a Teacher (DIRT-T) model, which takes the VADA model as initialization and employs natural gradient steps to further minimize the cluster assumption violation.
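The cluster-assumption penalty in VADA/DIRT-T amounts to minimizing the conditional entropy of the classifier's predictions on unlabeled target data, which pushes decision boundaries away from dense regions. A hedged sketch of that single term is below; the VAT component and the exact loss weighting are omitted, and the logits are random stand-ins.

```python
# Sketch of a conditional-entropy (cluster assumption) penalty on unlabeled
# target predictions; in practice it is added to the supervised source loss
# with a tuned weight.
import torch
import torch.nn.functional as F

def conditional_entropy(logits):
    """Mean entropy of p(y|x) over a batch of target logits."""
    p = F.softmax(logits, dim=1)
    return -(p * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

target_logits = torch.randn(32, 10, requires_grad=True)   # stand-in classifier outputs
loss = conditional_entropy(target_logits)
loss.backward()
```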
Domain-Adversarial Training of Neural Networks
TLDR
A new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions; the approach can be implemented in almost any feed-forward model by augmenting it with a few standard layers and a new gradient reversal layer.
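The gradient reversal layer is the core mechanism of domain-adversarial training: it acts as the identity in the forward pass and flips (and scales) the gradient in the backward pass, so the feature extractor learns to fool a domain discriminator. A minimal sketch, with illustrative layer sizes and random stand-in data:

```python
# Gradient reversal layer plus a domain discriminator head (DANN-style sketch).
import torch
from torch import nn
from torch.autograd import Function

class GradReverse(Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)          # identity in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None   # reversed gradient for x, none for lambd

features = nn.Sequential(nn.Linear(784, 256), nn.ReLU())
label_head = nn.Linear(256, 10)      # trained on labeled source data (not shown)
domain_head = nn.Linear(256, 2)      # source-vs-target discriminator

x = torch.randn(16, 784)             # mixed source/target batch (stand-in)
domain = torch.randint(0, 2, (16,))  # domain labels
f = features(x)
domain_logits = domain_head(GradReverse.apply(f, 1.0))
loss = nn.functional.cross_entropy(domain_logits, domain)
loss.backward()                      # gradients flowing into `features` are reversed
```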
On Learning Invariant Representation for Domain Adaptation
TLDR
This paper constructs a simple counterexample showing that, contrary to common belief, learning domain-invariant representations with low source error is not sufficient to guarantee successful domain adaptation, and proposes a natural and interpretable generalization upper bound that explicitly accounts for the shift between label distributions.
Enhanced Transport Distance for Unsupervised Domain Adaptation
TLDR
This work proposes an enhanced transport distance (ETD) for UDA: an attention-aware transport distance, which can be viewed as prediction feedback from the iteratively learned classifier, is used to measure the domain discrepancy.
Contrastive Adaptation Network for Unsupervised Domain Adaptation
TLDR
This paper proposes the Contrastive Adaptation Network (CAN), which optimizes a new metric that explicitly models the intra-class and inter-class domain discrepancies, and designs an alternating update strategy for training CAN in an end-to-end manner.
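The idea of a class-aware discrepancy can be sketched as follows: same-class source/target discrepancy is minimized while different-class discrepancy is maximized. This is a hedged illustration, not CAN's exact objective: a linear-kernel MMD stands in for its kernel-based contrastive domain discrepancy, and target class assignments would in practice come from clustering or pseudo-labels.

```python
# Sketch of an intra-class vs. inter-class domain discrepancy with a
# linear-kernel MMD estimate (illustrative, not CAN's exact formulation).
import torch

def mmd(x, y):
    """Squared distance between feature means (linear-kernel MMD estimate)."""
    return ((x.mean(dim=0) - y.mean(dim=0)) ** 2).sum()

def class_aware_discrepancy(fs, ys, ft, yt, num_classes):
    intra, inter = [], []
    for c_s in range(num_classes):
        for c_t in range(num_classes):
            s, t = fs[ys == c_s], ft[yt == c_t]
            if len(s) == 0 or len(t) == 0:
                continue
            (intra if c_s == c_t else inter).append(mmd(s, t))
    # minimize same-class discrepancy, maximize different-class discrepancy
    return torch.stack(intra).mean() - torch.stack(inter).mean()

fs, ys = torch.randn(64, 128), torch.randint(0, 5, (64,))   # source features / labels
ft, yt = torch.randn(64, 128), torch.randint(0, 5, (64,))   # target features / pseudo-labels
loss = class_aware_discrepancy(fs, ys, ft, yt, num_classes=5)
```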
Adversarial Discriminative Domain Adaptation
TLDR
It is shown that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and the promise of the approach is demonstrated by exceeding state-of-the-art unsupervised adaptation results on standard domain adaptation tasks as well as a difficult cross-modality object classification task.
Maximum Density Divergence for Domain Adaptation
TLDR
This paper proposes a new domain adaptation method named Adversarial Tight Match (ATM), which enjoys the benefits of both adversarial training and metric learning, and proposes a novel distance loss, Maximum Density Divergence (MDD), to quantify distribution divergence.
Unsupervised Domain Adaptation through Self-Supervision
TLDR
This paper addresses unsupervised domain adaptation, the setting where labeled training data is available for a source domain but the goal is good performance on a target domain with only unlabeled data, by learning to perform auxiliary self-supervised tasks on both domains simultaneously.
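One common auxiliary self-supervised task of this kind is rotation prediction applied to images from both domains alongside the supervised source objective. The sketch below is a hedged illustration of that pattern under assumed toy sizes and random stand-in data, not the paper's exact training setup.

```python
# Sketch: shared backbone with a supervised head (labeled source only) and an
# auxiliary 4-way rotation-prediction head (source + target images).
import torch
import torch.nn.functional as F
from torch import nn

backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
cls_head = nn.Linear(256, 10)   # main classifier (source labels only)
rot_head = nn.Linear(256, 4)    # rotation predictor (both domains)

def rotate_batch(x):
    """Rotate each image by a random multiple of 90 degrees; return rotation labels."""
    k = torch.randint(0, 4, (x.size(0),))
    rotated = torch.stack([torch.rot90(img, int(r), dims=(1, 2)) for img, r in zip(x, k)])
    return rotated, k

xs, ys = torch.randn(32, 3, 32, 32), torch.randint(0, 10, (32,))   # labeled source batch
xt = torch.randn(32, 3, 32, 32)                                    # unlabeled target batch

sup_loss = F.cross_entropy(cls_head(backbone(xs)), ys)
x_rot, rot_labels = rotate_batch(torch.cat([xs, xt]))
ssl_loss = F.cross_entropy(rot_head(backbone(x_rot)), rot_labels)
(sup_loss + ssl_loss).backward()
```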
Dynamic Weighted Learning for Unsupervised Domain Adaptation
  • Ning Xiao, Lei Zhang · 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
TLDR
Dynamic Weighted Learning is proposed to avoid the discriminability vanishing problem caused by excessive alignment learning and the domain misalignment problem caused by excessive discriminant learning, and it achieves excellent performance on several benchmark datasets.