Corpus ID: 233210463

On Universal Black-Box Domain Adaptation

@article{Deng2021OnUB,
  title={On Universal Black-Box Domain Adaptation},
  author={Bin Deng and Yabin Zhang and Hui Tang and Changxing Ding and Kui Jia},
  journal={ArXiv},
  year={2021},
  volume={abs/2104.04665}
}
In this paper, we study an arguably least restrictive setting of domain adaptation in the sense of practical deployment, where only the interface of the source model is available to the target domain, and where the label-space relations between the two domains are allowed to be different and unknown. We term such a setting Universal Black-Box Domain Adaptation (UBDA). The great promise that UBDA makes, however, brings significant learning challenges, since domain adaptation can only rely on the… 

Black-box Probe for Unsupervised Domain Adaptation without Model Transferring

Black-box Probe Domain Adaptation (BPDA) adopts a query mechanism to probe and refine information from the source model using a third-party dataset, and Distributionally Adversarial Training (DAT) to align the distribution of the third-party data with that of the target data.

Domain Adaptation without Model Transferring

This paper proposes Domain Adaptation without Source Model, which refines information from the source model, and proposes Distributionally Adversarial Training (DAT) to align the distribution of source data with that of the target data.

References

SHOWING 1-10 OF 65 REFERENCES

Learning to Detect Open Classes for Universal Domain Adaptation

It is claimed that confidence has high discriminability for extremely confident and uncertain predictions, meaning that the contour line for extremely high and low confidence should be short.

Theoretical Analysis of Self-Training with Deep Networks on Unlabeled Data

This work provides a unified theoretical analysis of self-training with deep networks for semi-supervised learning, unsupervised domain adaptation, and unsupervised learning, and proves that under these assumptions, the minimizers of population objectives based on self-training and input-consistency regularization will achieve high accuracy with respect to ground-truth labels.

Universal Domain Adaptation through Self Supervision

This work proposes a more universally applicable domain adaptation approach that can handle arbitrary category shift, called Domain Adaptive Neighborhood Clustering via Entropy optimization (DANCE), and uses entropy-based feature alignment and rejection to align target features with the source, or reject them as unknown categories based on their entropy.
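The entropy-based rejection step can be sketched as follows. This is a simplified illustration, not DANCE's exact procedure: the threshold heuristic (half the maximum entropy `log(n_classes)`) and the function interface are assumptions.

```python
import numpy as np

def entropy_reject(probs, threshold=None):
    """Flag target samples as "unknown" when their prediction entropy is high.

    probs: (n_samples, n_classes) softmax outputs for target samples.
    Returns a boolean mask: True means reject as an unknown category.
    """
    eps = 1e-12
    ent = -(probs * np.log(probs + eps)).sum(axis=1)
    if threshold is None:
        # Half of the maximum possible entropy, a common heuristic (assumption).
        threshold = 0.5 * np.log(probs.shape[1])
    return ent > threshold
```

A confident prediction such as `[0.97, 0.01, 0.01, 0.01]` falls well below the threshold and is kept, while a near-uniform prediction is rejected.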

Universal Domain Adaptation

This paper introduces Universal Domain Adaptation (UDA), a setting that requires no prior knowledge of the label sets, along with a model that outperforms state-of-the-art closed-set, partial, and open-set domain adaptation methods in the novel UDA setting.

Moment Matching for Multi-Source Domain Adaptation

A new deep learning approach, Moment Matching for Multi-Source Domain Adaptation (M3SDA), which aims to transfer knowledge learned from multiple labeled source domains to an unlabeled target domain by dynamically aligning moments of their feature distributions.

Adapting Visual Category Models to New Domains

This paper introduces a method that adapts object models acquired in a particular visual domain to new imaging conditions by learning a transformation that minimizes the effect of domain-induced changes in the feature distribution.

Universal Source-Free Domain Adaptation

A novel two-stage learning process is proposed with superior DA performance even over state-of-the-art source-dependent approaches, utilizing a novel instance-level weighting mechanism named the Source Similarity Metric (SSM).

Unsupervised Domain Adaptation of Black-Box Source Models

This work proposes a simple yet effective method, termed Iterative Noisy Label Learning (IterNLL), which starts with getting noisy labels of the unlabeled target data from the black-box source model, and alternates between learning improved target models from the target subset with more reliable labels and updating the noisy target labels.
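The alternating loop described above can be sketched as below. The interface is assumed: `black_box_predict` stands for the only access to the source model (probabilities for a batch of inputs), `train_model` fits a target model and returns a prediction callable, and the confidence-ranked subset selection is an illustrative choice of "more reliable labels".

```python
import numpy as np

def iternll(black_box_predict, target_x, train_model, n_rounds=5, keep_ratio=0.5):
    """Sketch of iterative noisy-label learning against a black-box source model.

    black_box_predict(x) -> (n, n_classes) probabilities (the black box).
    train_model(x, y)    -> callable giving probabilities for new inputs.
    """
    probs = black_box_predict(target_x)   # initial noisy labels from the black box
    labels = probs.argmax(axis=1)
    for _ in range(n_rounds):
        conf = probs.max(axis=1)
        # Keep the most confident fraction as the "more reliable" subset (assumption).
        idx = np.argsort(-conf)[: int(keep_ratio * len(target_x))]
        predict_proba = train_model(target_x[idx], labels[idx])
        probs = predict_proba(target_x)   # update the noisy target labels
        labels = probs.argmax(axis=1)
    return labels
```

The key design point is that the source model is queried once for initial labels; all later refinement happens on the target side.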

Hypothesis Disparity Regularized Mutual Information Maximization

HDMI incorporates a hypothesis disparity regularization that coordinates the target hypotheses to jointly learn better target representations while preserving more transferable source knowledge with better-calibrated prediction uncertainty.

Label Propagation with Augmented Anchors: A Simple Semi-Supervised Learning baseline for Unsupervised Domain Adaptation

This work suggests a new algorithm of Label Propagation with Augmented Anchors (A$^2$LP), which could potentially improve LP via generation of unlabeled virtual instances (i.e., the augmented anchors) with high-confidence label predictions, and tackles the domain-shift challenge of UDA by alternating between pseudo labeling via A$^2$LP and domain-invariant feature learning.
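The underlying label-propagation step can be sketched as a generic graph-based LP iteration (in the standard Zhou et al. form), not the paper's exact A$^2$LP formulation; the anchor augmentation is omitted and the cosine-similarity affinity is an assumption.

```python
import numpy as np

def label_propagation(features, labels, mask, n_iters=20, alpha=0.99):
    """Propagate labels from the points marked in `mask` to the rest.

    features: (n, d) array; labels: (n,) int array (unlabeled rows ignored);
    mask: (n,) boolean array marking the labeled (anchor) points.
    """
    n, n_classes = len(features), int(labels.max()) + 1
    # Row-normalized cosine-similarity affinity, self-similarity removed.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    W = np.clip(f @ f.T, 0, None)
    np.fill_diagonal(W, 0)
    S = W / W.sum(axis=1, keepdims=True)
    Y = np.zeros((n, n_classes))
    Y[mask, labels[mask]] = 1.0
    F = Y.copy()
    for _ in range(n_iters):
        # Propagate over the graph, then pull back toward the labeled seeds.
        F = alpha * (S @ F) + (1 - alpha) * Y
    return F.argmax(axis=1)
```

With one labeled point per cluster, propagation over the similarity graph assigns the rest of each cluster the seed's label, which is the mechanism the augmented anchors are meant to strengthen.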
...