Attracting and Dispersing: A Simple Approach for Source-free Domain Adaptation

@article{Yang2022AttractingAD,
  title={Attracting and Dispersing: A Simple Approach for Source-free Domain Adaptation},
  author={Shiqi Yang and Yaxing Wang and Kai Wang and Shangling Jui and Joost van de Weijer},
  journal={arXiv preprint arXiv:2205.04183},
  year={2022}
}
We propose a simple but effective source-free domain adaptation (SFDA) method. Treating SFDA as an unsupervised clustering problem and following the intuition that local neighbors in feature space should have more similar predictions than other features, we propose to optimize an objective of prediction consistency. This objective encourages local neighborhood features in feature space to have similar predictions while features farther away in feature space have dissimilar predictions, leading… 
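The attract-and-disperse objective described above can be sketched in a few lines: pull each sample's prediction toward those of its nearest neighbours in feature space, and push it away from all other samples. This is a minimal illustrative sketch, not the paper's exact formulation; the function name `attract_disperse_loss`, the Euclidean nearest-neighbour choice, and the dispersion weight `lam` are assumptions for the example.

```python
import numpy as np

def attract_disperse_loss(preds, feats, k=2, lam=1.0):
    """Sketch of a prediction-consistency objective: attract each sample's
    prediction toward its k nearest neighbours in feature space, and
    disperse it from the predictions of all other samples in the batch.

    preds: (n, c) array of class-probability vectors
    feats: (n, d) array of feature vectors
    """
    n = preds.shape[0]
    # Pairwise feature distances, used only to find local neighbours.
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # exclude self from neighbours
    nbrs = np.argsort(d, axis=1)[:, :k]    # indices of k nearest neighbours
    sim = preds @ preds.T                  # prediction similarity (dot products)
    # Attraction: mean similarity to local neighbours (to be maximised).
    attract = np.mean([sim[i, nbrs[i]].sum() for i in range(n)])
    # Dispersion: mean similarity to all other samples (to be minimised).
    disperse = (sim.sum() - np.trace(sim)) / n
    return -attract + lam * disperse
```

With two well-separated feature clusters, predictions that agree within each cluster yield a lower loss than predictions that disagree with their neighbours, which is the behaviour the objective is meant to encourage.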


References

Showing 1-10 of 69 references
Exploiting the Intrinsic Neighborhood Structure for Source-free Domain Adaptation
This paper addresses the challenging source-free domain adaptation (SFDA) problem, where the source pretrained model is adapted to the target domain in the absence of source data, based on the observation that target data still forms clear clusters.
Universal Domain Adaptation through Self Supervision
This work proposes a more universally applicable domain adaptation approach that can handle arbitrary category shift, called Domain Adaptative Neighborhood Clustering via Entropy optimization (DANCE), and uses entropy-based feature alignment and rejection to align target features with the source, or reject them as unknown categories based on their entropy.
Generalized Source-free Domain Adaptation
This paper proposes a new domain adaptation paradigm called Generalized Source-free Domain Adaptation (G-SFDA), where the learned model needs to perform well on both the target and source domains, with only access to current unlabeled target data during adaptation.
Tune it the Right Way: Unsupervised Validation of Domain Adaptation via Soft Neighborhood Density
A novel unsupervised validation criterion that measures the density of soft neighborhoods by computing the entropy of the similarity distribution between points, which can tune hyper-parameters and the number of training iterations in both image classification and semantic segmentation models.
Unsupervised Domain Adaptation via Structurally Regularized Deep Clustering
Hui Tang, Ke Chen, K. Jia. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
This work describes the proposed method as Structurally Regularized Deep Clustering (SRDC), which enhances target discrimination with clustering of intermediate network features and enhances structural regularization with soft selection of less divergent source examples.
Model Adaptation: Unsupervised Domain Adaptation Without Source Data
This paper proposes a new framework, referred to as a collaborative class conditional generative adversarial net, to bypass the dependence on source data; it achieves superior performance on multiple adaptation tasks with only unlabeled target data, verifying its effectiveness in this challenging setting.
Universal Source-Free Domain Adaptation
A novel two-stage learning process is proposed that achieves superior DA performance even over state-of-the-art source-dependent approaches, utilizing a novel instance-level weighting mechanism named the Source Similarity Metric (SSM).
Larger Norm More Transferable: An Adaptive Feature Norm Approach for Unsupervised Domain Adaptation
It is demonstrated that progressively adapting the feature norms of the two domains to a large range of values can result in significant transfer gains, implying that those task-specific features with larger norms are more transferable.
A DIRT-T Approach to Unsupervised Domain Adaptation
Two novel and related models are proposed: the Virtual Adversarial Domain Adaptation (VADA) model, which combines domain adversarial training with a penalty term that punishes violations of the cluster assumption, and the Decision-boundary Iterative Refinement Training with a Teacher (DIRT-T) model, which takes the VADA model as initialization and employs natural gradient steps to further minimize cluster assumption violations.
Stochastic Classifiers for Unsupervised Domain Adaptation
This paper introduces a novel method called STochastic clAssifieRs (STAR) to address the problem of misaligned local regions between the source and target domains. It finds that using more classifiers leads to better performance but also introduces more model parameters, risking overfitting.