Combating Label Distribution Shift for Active Domain Adaptation

@article{Hwang2022CombatingLD,
  title={Combating Label Distribution Shift for Active Domain Adaptation},
  author={Se Myung Hwang and Sohyun Lee and Sungyeon Kim and Jungseul Ok and Suha Kwak},
  journal={ArXiv},
  year={2022},
  volume={abs/2208.06604}
}
We consider the problem of active domain adaptation (ADA) to unlabeled target data, of which a subset is actively selected and labeled under a budget constraint. Inspired by recent analysis of a critical issue caused by label distribution mismatch between source and target in domain adaptation, we devise a method that addresses this issue for the first time in ADA. At its heart lies a novel sampling strategy, which seeks target data that best approximate the entire target distribution as well as…
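As a point of reference for the problem setting, a minimal sketch of one budget-constrained selection round might look as follows; the `acquisition_score` function is a placeholder, not the paper's actual sampling criterion.

```python
import numpy as np

def select_for_labeling(unlabeled_idx, acquisition_score, budget):
    """Generic ADA round: rank unlabeled target samples by an acquisition
    score and return the top-`budget` indices to send for labeling.
    `acquisition_score` is a placeholder for any criterion (uncertainty,
    diversity, label-distribution awareness, ...)."""
    scores = np.asarray([acquisition_score(i) for i in unlabeled_idx])
    order = np.argsort(-scores)              # highest score first
    return [unlabeled_idx[i] for i in order[:budget]]

# toy usage: random scores as a stand-in criterion, budget of 5 out of 100
rng = np.random.default_rng(0)
print(select_for_labeling(list(range(100)), lambda i: rng.random(), budget=5))
```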

Local Context-Aware Active Domain Adaptation

A local context-aware ADA framework, named LADA, is proposed to address the issue of locally inconsistent model predictions; it chooses more informative target samples than existing active selection strategies.
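A rough sketch of scoring targets by local prediction inconsistency, assuming a k-nearest-neighbour notion of locality; this illustrates the idea only and is not LADA's actual criterion.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def local_inconsistency(features, pred_labels, k=5):
    """Fraction of each sample's k nearest neighbours (in feature space)
    whose predicted label differs from the sample's own prediction;
    higher values indicate locally inconsistent predictions."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(features)
    _, idx = nn.kneighbors(features)          # idx[:, 0] is the sample itself
    neighbor_preds = pred_labels[idx[:, 1:]]  # shape (N, k)
    return (neighbor_preds != pred_labels[:, None]).mean(axis=1)
```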

References

Active Domain Adaptation via Clustering Uncertainty-weighted Embeddings

Clustering Uncertainty-weighted Embeddings (CLUE) is proposed, a novel label acquisition strategy for Active DA that performs uncertainty-weighted clustering to identify target instances for labeling that are both uncertain under the model and diverse in feature space.
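The summary suggests weighted clustering with uncertainty as the weight; a minimal sketch under that reading, using predictive entropy and k-means (an approximation of the idea, not the authors' implementation), could be:

```python
import numpy as np
from sklearn.cluster import KMeans

def clue_style_select(features, probs, budget, eps=1e-12):
    """Weighted k-means over target features with predictive entropy as the
    sample weight; query the sample nearest to each cluster centre."""
    entropy = -(probs * np.log(probs + eps)).sum(axis=1)     # per-sample uncertainty
    km = KMeans(n_clusters=budget, n_init=10, random_state=0)
    km.fit(features, sample_weight=entropy)
    dists = np.linalg.norm(features[:, None, :] - km.cluster_centers_[None], axis=2)
    return dists.argmin(axis=0)               # one index per centroid (duplicates possible)
```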

Active Adversarial Domain Adaptation

This work shows that the two views of adversarial domain alignment and importance sampling can be unified in one framework for domain adaptation and transfer learning when the source domain has many labeled examples while the target domain does not.
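One common way to combine the two views is to weight predictive uncertainty by an importance weight derived from a domain discriminator; the sketch below uses that combination as an illustration, and the paper's exact score may differ.

```python
import numpy as np

def importance_weighted_uncertainty(p_source, class_probs, eps=1e-12):
    """Combine a domain discriminator's output with predictive entropy:
    w(x) = (1 - p_source) / p_source up-weights target-like samples, and
    the entropy term favours samples the classifier is unsure about."""
    w = (1.0 - p_source) / (p_source + eps)
    entropy = -(class_probs * np.log(class_probs + eps)).sum(axis=1)
    return w * entropy
```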

Transferable Query Selection for Active Domain Adaptation

Transferable Query Selection (TQS) is proposed, which selects the most informative samples under domain shift by an ensemble of three new criteria: transferable committee, transferable uncertainty, and transferable domainness.
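A schematic of combining several per-sample criteria into one query score; the min-max normalisation and equal weighting below are assumptions, not TQS's actual aggregation.

```python
import numpy as np

def combined_query_score(committee_disagreement, uncertainty, domainness):
    """Min-max normalise each criterion and sum them with equal weights to
    rank candidate queries (the weighting here is an assumption)."""
    def norm(s):
        s = np.asarray(s, dtype=float)
        return (s - s.min()) / (s.max() - s.min() + 1e-12)
    return norm(committee_disagreement) + norm(uncertainty) + norm(domainness)
```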

Cross-Domain Adaptive Clustering for Semi-Supervised Domain Adaptation

To achieve both inter-domain and intra-domain adaptation, an adversarial adaptive clustering loss is introduced to group features of unlabeled target data into clusters and perform cluster-wise feature alignment across the source and target domains.
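A schematic of a pairwise prediction-agreement clustering loss, where the binary `same_cluster` targets are assumed to come from some pseudo-labeling rule; this illustrates the clustering idea only, not the paper's adversarial formulation.

```python
import numpy as np

def pairwise_clustering_loss(probs, same_cluster, eps=1e-12):
    """Binary cross-entropy on pairwise prediction agreement: the dot product
    p_i . p_j of softmax outputs should be high for pairs marked as belonging
    to the same cluster and low otherwise."""
    agreement = np.clip(probs @ probs.T, eps, 1.0 - eps)   # (N, N) agreement matrix
    return float(-np.mean(same_cluster * np.log(agreement)
                          + (1 - same_cluster) * np.log(1.0 - agreement)))
```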

S3VAADA: Submodular Subset Selection for Virtual Adversarial Active Domain Adaptation

This work proposes S3VAADA, which introduces a novel submodular criterion to select a maximally informative subset to label, and enhances a cluster-based DA procedure to effectively utilize all available data for improving generalization on the target domain.
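Submodular subset selection is typically optimised with the standard greedy algorithm; the sketch below shows that loop with a placeholder gain function, not S3VAADA's specific criterion.

```python
def greedy_submodular_select(gain_fn, candidates, budget):
    """Standard greedy maximisation of a submodular set function: repeatedly
    add the candidate with the largest marginal gain.  `gain_fn(S, c)` stands
    in for the paper's informativeness criterion."""
    selected, remaining = [], list(candidates)
    for _ in range(min(budget, len(remaining))):
        best = max(remaining, key=lambda c: gain_fn(selected, c))
        selected.append(best)
        remaining.remove(best)
    return selected

# toy usage: a modular (hence submodular) gain that just prefers larger values
print(greedy_submodular_select(lambda S, c: c, candidates=[3, 1, 4, 1, 5], budget=3))
```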

Class-Imbalanced Domain Adaptation: An Empirical Odyssey

This work constructed the first benchmark with 22 cross-domain tasks from 6 real-image datasets, and proposed a feature and label distribution CO-ALignment (COAL) model with a novel combination of existing ideas that is empirically shown to outperform the most recent domain adaptation methods on these benchmarks.

Learning Distinctive Margin toward Active Domain Adaptation

  • Ming Xie, Yuxi Li, Pei Wang
  • Computer Science
    2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2022
This work proposes a concise but effective ADA method called Select-by-Distinctive-Margin (SDM), which consists of a maximum margin loss and a margin sampling algorithm for data selection; theoretical analysis shows that SDM works like a support vector machine.
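The margin-sampling half of the method corresponds to the classic smallest-top-2-gap rule; a minimal sketch of that rule (the maximum margin loss is omitted):

```python
import numpy as np

def margin_sampling(probs, budget):
    """Classic margin sampling: query the samples with the smallest gap
    between the top-2 class probabilities, i.e. the most ambiguous ones."""
    sorted_probs = np.sort(probs, axis=1)
    margins = sorted_probs[:, -1] - sorted_probs[:, -2]   # top-1 minus top-2
    return np.argsort(margins)[:budget]                   # smallest margins first
```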

Active Learning for Domain Adaptation: An Energy-based Approach

This paper presents a novel energy-based active learning strategy to assist knowledge transfer to the target domain, a setting dubbed active domain adaptation; the method surpasses state-of-the-art methods on well-known challenging benchmarks by substantial margins, making it a useful option in open-world scenarios.
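Energy-based selection is usually built on the free energy of the class logits; a minimal sketch of that score (the paper's full criterion may combine it with other terms):

```python
import numpy as np
from scipy.special import logsumexp

def free_energy(logits, temperature=1.0):
    """Free energy E(x) = -T * logsumexp(f(x) / T) over class logits; higher
    energy indicates samples the source-trained model fits poorly, which an
    energy-based selection strategy can prioritise for annotation."""
    return -temperature * logsumexp(np.asarray(logits) / temperature, axis=1)
```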

A DIRT-T Approach to Unsupervised Domain Adaptation

Two novel and related models are proposed: the Virtual Adversarial Domain Adaptation (VADA) model, which combines domain adversarial training with a penalty term that punishes violations of the cluster assumption, and the Decision-boundary Iterative Refinement Training with a Teacher (DIRT-T) model, which takes the VADA model as initialization and employs natural gradient steps to further minimize the cluster assumption violation.
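The cluster-assumption penalty in VADA/DIRT-T-style training includes a conditional entropy term on unlabeled data; a minimal sketch of that term (the virtual adversarial training component is omitted):

```python
import numpy as np

def conditional_entropy_penalty(probs, eps=1e-12):
    """Average predictive entropy on unlabeled data; minimising it pushes
    decision boundaries away from dense regions, enforcing the cluster
    assumption that VADA/DIRT-T build on."""
    return float(-np.mean(np.sum(probs * np.log(probs + eps), axis=1)))
```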

Domain Adaptation meets Active Learning

An algorithm is presented that harnesses the source domain data to learn the best possible initializer hypothesis for doing active learning in the target domain, resulting in improved label complexity.
...