Active Domain Adaptation via Clustering Uncertainty-weighted Embeddings

@inproceedings{Prabhu2021ActiveDA,
  title={Active Domain Adaptation via Clustering Uncertainty-weighted Embeddings},
  author={Viraj Prabhu and Arjun Chandrasekaran and Kate Saenko and Judy Hoffman},
  booktitle={2021 IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2021},
  pages={8485--8494}
}
Generalizing deep neural networks to new target domains is critical to their real-world utility. In practice, it may be feasible to get some target data labeled, but to be cost-effective it is desirable to select a maximally-informative subset via active learning (AL). We study the problem of AL under a domain shift, called Active Domain Adaptation (Active DA). We demonstrate how existing AL approaches based solely on model uncertainty or diversity sampling are less effective for Active DA. We… 
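The selection strategy the title describes, clustering uncertainty-weighted embeddings, can be sketched as follows: weight each unlabeled target embedding by its predictive entropy, run weighted k-means with as many clusters as the labeling budget, and query the point nearest each centroid. This is a minimal illustration under assumed inputs (precomputed embeddings and softmax probabilities), not the authors' exact implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def entropy(probs, eps=1e-12):
    """Predictive entropy of softmax outputs, shape (n, num_classes)."""
    return -np.sum(probs * np.log(probs + eps), axis=1)

def clue_select(embeddings, probs, budget, seed=0):
    """Select `budget` target samples by clustering embeddings weighted
    by predictive uncertainty, then querying the point nearest each centroid."""
    weights = entropy(probs)
    km = KMeans(n_clusters=budget, random_state=seed, n_init=10)
    km.fit(embeddings, sample_weight=weights)
    dists = km.transform(embeddings)  # (n, budget) distances to centroids
    selected = []
    for k in range(budget):
        order = np.argsort(dists[:, k])
        # take the closest point not already chosen for another cluster
        pick = next(i for i in order if i not in selected)
        selected.append(int(pick))
    return selected
```

The entropy weighting pulls cluster centroids toward uncertain regions, so the queried set is both diverse (one sample per cluster) and informative (clusters form where the model is unsure).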
Active Learning for Domain Adaptation: An Energy-based Approach
TLDR
This paper presents a novel active learning strategy to assist knowledge transfer in the target domain, in the setting dubbed active domain adaptation; it surpasses state-of-the-art methods on well-known challenging benchmarks with substantial improvements, making it a useful option in the open world.
Learning Distinctive Margin toward Active Domain Adaptation
TLDR
This work proposes a concise but effective ADA method called Select-by-Distinctive-Margin (SDM), which consists of a maximum margin loss and a margin sampling algorithm for data selection and provides theoretical analysis to show that SDM works like a Support Vector Machine.
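The margin sampling component described above can be illustrated with a short sketch: score each unlabeled sample by the gap between its top two class scores and query the smallest gaps, i.e. the points closest to the decision boundary. This is a generic margin-sampling sketch, not the full SDM method (which also trains with a maximum-margin loss).

```python
import numpy as np

def margin_select(logits, budget):
    """Query the samples with the smallest top-1 / top-2 score gap.

    logits: array of shape (n_samples, n_classes); budget: number to query.
    """
    sorted_scores = np.sort(logits, axis=1)
    margins = sorted_scores[:, -1] - sorted_scores[:, -2]  # small = near boundary
    return np.argsort(margins)[:budget].tolist()
```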
Loss-based Sequential Learning for Active Domain Adaptation
Active domain adaptation (ADA) studies have mainly addressed query selection while following existing domain adaptation strategies. However, we argue that it is critical to consider not only query…
ADeADA: Adaptive Density-aware Active Domain Adaptation for Semantic Segmentation
TLDR
ADeADA is presented, a general active domain adaptation framework for semantic segmentation and an adaptive budget allocation policy is designed, which dynamically balances the labeling budgets among different categories as well as between density-aware and uncertainty-based methods.
Active Source Free Domain Adaptation
Source free domain adaptation (SFDA) aims to transfer a trained source model to the unlabeled target domain without accessing the source data. However, the SFDA setting faces an effect bottleneck due…
D2ADA: Dynamic Density-aware Active Domain Adaptation for Semantic Segmentation
TLDR
D2ADA is presented, a general active domain adaptation framework for semantic segmentation, and a dynamic scheduling policy is designed to adjust the labeling budgets between domain exploration and model uncertainty over time to facilitate labeling efficiency.
Online Continual Adaptation with Active Self-Training
Cost-effective Framework for Gradual Domain Adaptation with Multifidelity
TLDR
A framework that combines multifidelity and active domain adaptation, which is evaluated by experiments with both artificial and real-world datasets to solve the trade-off between cost and accuracy.
Attentive Prototypes for Source-free Unsupervised Domain Adaptive 3D Object Detection
TLDR
This work proposes a single-frame approach for source-free, unsupervised domain adaptation of lidar-based 3D object detectors that uses class prototypes to mitigate the effect of pseudo-label noise and demonstrates its effectiveness on two recent object detectors.
Burn After Reading: Online Adaptation for Cross-domain Streaming Data
TLDR
This paper proposes an online framework that “burns after reading”, i.e. each online sample is immediately deleted after it is processed, and proposes a novel algorithm that aims at the most fundamental challenge of the online adaptation setting–the lack of diverse source-target data pairs.

References

Showing 1–10 of 68 references
Active Adversarial Domain Adaptation
TLDR
This work shows that the two views of adversarial domain alignment and importance sampling can be unified in one framework for domain adaptation and transfer learning when the source domain has many labeled examples while the target domain does not.
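The combination summarized above can be sketched as an acquisition score: a density-ratio estimate from the domain discriminator's output (how target-like a sample looks) multiplied by predictive entropy. Names and the exact weighting are illustrative assumptions, not the paper's precise formulation.

```python
import numpy as np

def importance_entropy_scores(disc_source_prob, pred_probs, eps=1e-12):
    """Acquisition score = density-ratio weight * predictive entropy.

    disc_source_prob: discriminator's probability each sample is from the
    source domain, shape (n,). (1 - d) / d approximates a target/source
    density ratio, so target-like samples are up-weighted.
    pred_probs: classifier softmax outputs, shape (n, n_classes).
    """
    w = (1.0 - disc_source_prob) / (disc_source_prob + eps)
    h = -np.sum(pred_probs * np.log(pred_probs + eps), axis=1)
    return w * h
```

Samples that look strongly target-like and on which the classifier is uncertain score highest, which is the intuition behind unifying adversarial alignment with importance sampling.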
Semi-Supervised Domain Adaptation via Minimax Entropy
TLDR
A novel Minimax Entropy (MME) approach that adversarially optimizes an adaptive few-shot model for semi-supervised domain adaptation (SSDA) setting, setting a new state of the art for SSDA.
A DIRT-T Approach to Unsupervised Domain Adaptation
TLDR
Two novel and related models are proposed: the Virtual Adversarial Domain Adaptation (VADA) model, which combines domain adversarial training with a penalty term that punishes violations of the cluster assumption, and the Decision-boundary Iterative Refinement Training with a Teacher (DIRT-T) model, which takes the VADA model as initialization and employs natural gradient steps to further minimize cluster assumption violation.
Adversarial Active Learning for Deep Networks: a Margin Based Approach
TLDR
It is demonstrated empirically that adversarial active queries yield faster convergence of CNNs trained on the MNIST, Shoe-Bag, and Quick-Draw datasets.
Semi-supervised Domain Adaptation with Subspace Learning for visual recognition
TLDR
A novel domain adaptation framework, named Semi-supervised Domain Adaptation with Subspace Learning (SDASL), which jointly explores invariant low-dimensional structures across domains to correct data distribution mismatch and leverages available unlabeled target examples to exploit the underlying intrinsic information in the target domain.
Adversarial Discriminative Domain Adaptation
TLDR
It is shown that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and the promise of the approach is demonstrated by exceeding state-of-the-art unsupervised adaptation results on standard domain adaptation tasks as well as a difficult cross-modality object classification task.
Domain-Adversarial Training of Neural Networks
TLDR
A new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions, which can be achieved in almost any feed-forward model by augmenting it with a few standard layers and a new gradient reversal layer.
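The gradient reversal layer mentioned above is simple enough to sketch without a framework: it is the identity on the forward pass and multiplies incoming gradients by a negative factor on the backward pass, so the feature extractor is trained to *confuse* the domain classifier. A minimal stand-alone illustration (in practice this is written as a custom autograd function inside a deep learning framework):

```python
import numpy as np

class GradReverse:
    """Gradient reversal layer: identity forward, -lambda * grad backward."""

    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x  # pass features through unchanged

    def backward(self, grad_output):
        # flip and scale the gradient flowing back to the feature extractor
        return -self.lam * grad_output
```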
A new active labeling method for deep learning
  • Dan Wang, Yi Shang
  • Computer Science
    2014 International Joint Conference on Neural Networks (IJCNN)
  • 2014
TLDR
A new active labeling method, AL-DL, is proposed for cost-effective selection of data to be labeled; applied to deep networks based on stacked restricted Boltzmann machines as well as stacked autoencoders, it consistently outperforms random labeling.
Joint Transfer and Batch-mode Active Learning
TLDR
This work presents an integrated framework that performs transfer and active learning simultaneously by solving a single convex optimization problem, minimizing a common objective of reducing the distribution difference between the set of re-weighted source and queried target-domain data and the set of unlabeled target-domain data.
Learning Transferable Features with Deep Adaptation Networks
TLDR
A new Deep Adaptation Network (DAN) architecture is proposed, which generalizes deep convolutional neural network to the domain adaptation scenario and can learn transferable features with statistical guarantees, and can scale linearly by unbiased estimate of kernel embedding.