Corpus ID: 235436157

KL Guided Domain Adaptation

@article{Nguyen2021KLGD,
  title={KL Guided Domain Adaptation},
  author={A. Nguyen and Toan Tran and Yarin Gal and Philip H. S. Torr and Atılım Güneş Baydin},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.07780}
}
Domain adaptation is an important problem that is often needed for real-world applications. In this problem, instead of i.i.d. datapoints, we assume that the source (training) data and the target (testing) data have different distributions. In this setting, standard empirical risk minimization training often performs poorly, since it does not account for the shift in distribution. A common approach in the domain adaptation literature is to learn a representation of the input that… 
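
To make the idea concrete, here is a minimal PyTorch-style sketch (not the authors' implementation) of the kind of objective the abstract points to: a probabilistic encoder, a standard classification loss on labelled source data, and a minibatch Monte Carlo estimate of the KL divergence between the source and target representation distributions. The encoder architecture, the diagonal-Gaussian parameterisation, and the trade-off weight beta are assumptions made for illustration.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianEncoder(nn.Module):
    # Illustrative probabilistic encoder mapping x to a diagonal Gaussian over z.
    def __init__(self, in_dim, z_dim):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, z_dim)
        self.log_var = nn.Linear(256, z_dim)

    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.log_var(h)

def mixture_log_prob(z, mu, log_var):
    # Log of the minibatch mixture density (1/n) * sum_i N(z; mu_i, diag(exp(log_var_i))).
    z, mu, log_var = z.unsqueeze(1), mu.unsqueeze(0), log_var.unsqueeze(0)
    log_comp = -0.5 * (log_var + (z - mu) ** 2 / log_var.exp() + math.log(2 * math.pi))
    return torch.logsumexp(log_comp.sum(-1), dim=1) - math.log(mu.shape[1])

def kl_guided_loss(encoder, classifier, x_s, y_s, x_t, beta=0.1):
    mu_s, lv_s = encoder(x_s)
    mu_t, lv_t = encoder(x_t)
    # Reparameterised samples from the source representation distribution.
    z_s = mu_s + torch.randn_like(mu_s) * torch.exp(0.5 * lv_s)
    task_loss = F.cross_entropy(classifier(z_s), y_s)
    # Monte Carlo estimate of KL(p_source(z) || p_target(z)) on the minibatch.
    kl_est = (mixture_log_prob(z_s, mu_s, lv_s)
              - mixture_log_prob(z_s, mu_t, lv_t)).mean()
    return task_loss + beta * kl_est

In a training loop one would draw a labelled source batch and an unlabelled target batch and take a gradient step on this combined loss.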


IT-RUDA: Information Theory Assisted Robust Unsupervised Domain Adaptation

Distribution shift between train (source) and test (target) datasets is a common problem encountered in machine learning applications. One approach to resolving this issue is unsupervised domain adaptation (UDA).

D2ADA: Dynamic Density-aware Active Domain Adaptation for Semantic Segmentation

D2ADA, a general active domain adaptation framework for semantic segmentation, is presented, together with a dynamic scheduling policy that adjusts the labeling budget between domain exploration and model uncertainty over time to improve labeling efficiency.

Information-Theoretic Analysis of Unsupervised Domain Adaptation

This paper uses information-theoretic tools to analyze the generalization error in unsupervised domain adaptation (UDA) and presents novel upper bounds for two notions of generalization error; the bounds are algorithm-dependent and provide insights into algorithm design.

References

Showing 1-10 of 52 references

f-Domain-Adversarial Learning: Theory and Algorithms

A novel generalization bound for domain adaptation is derived that exploits a new measure of discrepancy between distributions based on a variational characterization of f-divergences, and a new algorithmic framework is derived that introduces a key correction to the original adversarial training method of Ganin et al. (2016).
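
For context, the variational characterization referred to here is the standard Fenchel-dual form of an f-divergence (the specific discrepancy measure and bound used in the paper may differ):

D_f(P \,\|\, Q) \;=\; \sup_{T} \; \mathbb{E}_{x \sim P}\big[T(x)\big] - \mathbb{E}_{x \sim Q}\big[f^{*}(T(x))\big],

where f^{*} is the convex conjugate of f; restricting the supremum to a class of neural-network discriminators, as in adversarial training, yields a lower bound that can be optimized from samples.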

Domain Invariant Representation Learning with Domain Density Transformations

This paper proposes a theoretically grounded method to learn a domain-invariant representation by enforcing the representation network to be invariant under all transformation functions among domains, and introduces the use of generative adversarial networks to learn such domain transformations.

d-SNE: Domain Adaptation Using Stochastic Neighborhood Embedding

d-SNE, a new domain adaptation technique, is proposed; it uses stochastic neighborhood embedding together with a novel modified Hausdorff distance, is learnable end-to-end, and is well suited to training neural networks.

Domain Adaptation with Conditional Distribution Matching and Generalized Label Shift

A recent upper bound on the performance of adversarial domain adaptation is extended to multi-class classification and more general discriminators, and generalized label shift (GLS) is proposed as a way to improve robustness against mismatched label distributions.

Wasserstein Distance Guided Representation Learning for Domain Adaptation

This paper proposes a novel approach to learning domain-invariant feature representations, namely Wasserstein Distance Guided Representation Learning (WDGRL), which uses a neural network to estimate the empirical Wasserstein distance between source and target samples and optimizes the feature extractor network to minimize the estimated distance in an adversarial manner.
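
As a rough sketch of this recipe (not WDGRL's exact formulation), a critic network can be trained to maximise a dual-form Wasserstein estimate on the two domains' features while the feature extractor is trained to minimise it; the gradient penalty below is one common way to keep the critic approximately 1-Lipschitz and is an assumption of this sketch.

import torch

def wasserstein_estimate(critic, h_s, h_t):
    # Kantorovich-Rubinstein style estimate of W1 between source and target features:
    # maximised with respect to the critic, minimised with respect to the feature extractor.
    return critic(h_s).mean() - critic(h_t).mean()

def gradient_penalty(critic, h_s, h_t):
    # Penalise deviations of the critic's gradient norm from 1 on interpolated features.
    alpha = torch.rand(h_s.size(0), 1, device=h_s.device)
    h_hat = (alpha * h_s + (1 - alpha) * h_t).requires_grad_(True)
    grad, = torch.autograd.grad(critic(h_hat).sum(), h_hat, create_graph=True)
    return ((grad.norm(2, dim=1) - 1) ** 2).mean()

Training would alternate several critic steps that ascend wasserstein_estimate minus a weighted gradient_penalty with feature-extractor and classifier steps that descend the task loss plus the estimate.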

DIRL: Domain-Invariant Representation Learning for Sim-to-Real Transfer

A domain-invariant representation learning (DIRL) algorithm is proposed to adapt deep models to the physical environment with a small amount of real data, combined with a triplet distribution loss to make the conditional distributions disjoint in the shared feature space.

Bridging Theory and Algorithm for Domain Adaptation

Margin Disparity Discrepancy is introduced, a novel measurement with rigorous generalization bounds, tailored to distribution comparison with an asymmetric margin loss and to minimax optimization for easier training.

Regularized Learning for Domain Adaptation under Label Shifts

We propose Regularized Learning under Label Shifts (RLLS), a principled and practical domain-adaptation algorithm to correct for shifts in the label distribution between a source and a target domain.
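
As an illustration of the general recipe behind label-shift correction (the specific regularizer and guarantees of RLLS are not reproduced here), importance weights w(y) = q_target(y) / p_source(y) can be estimated from a source confusion matrix and the classifier's predictions on the target, then used to reweight the per-example source loss:

import numpy as np

def estimate_label_shift_weights(y_true_s, y_pred_s, y_pred_t, num_classes, reg=1e-3):
    # C[i, j] = p_source(pred = i, true = j); under label shift, C @ w equals the
    # target predicted-label marginal q, so w is recovered by a regularised solve.
    C = np.zeros((num_classes, num_classes))
    for yt, yp in zip(y_true_s, y_pred_s):
        C[yp, yt] += 1.0 / len(y_true_s)
    q = np.bincount(y_pred_t, minlength=num_classes) / len(y_pred_t)
    w = np.linalg.solve(C.T @ C + reg * np.eye(num_classes), C.T @ q)
    return np.clip(w, 0.0, None)  # w[y] approximates q_target(y) / p_source(y)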

Contrastive Adaptation Network for Unsupervised Domain Adaptation

This paper proposes Contrastive Adaptation Network (CAN), which optimizes a new metric that explicitly models the intra-class and inter-class domain discrepancy, and designs an alternating update strategy to train CAN in an end-to-end manner.
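
A much-simplified sketch of a class-aware discrepancy in this spirit (CAN itself uses a kernel-based contrastive domain discrepancy with clustering-based target pseudo-labels; the linear mean-embedding version below is only illustrative):

import torch

def class_aware_discrepancy(h_s, y_s, h_t, y_t_pseudo, num_classes):
    # Intra-class term pulls same-class source/target feature means together;
    # inter-class term pushes differently labelled source/target means apart.
    intra, inter, pairs = 0.0, 0.0, 0
    for c in range(num_classes):
        s_c, t_c = h_s[y_s == c], h_t[y_t_pseudo == c]
        if len(s_c) == 0 or len(t_c) == 0:
            continue
        intra = intra + ((s_c.mean(0) - t_c.mean(0)) ** 2).sum()
        for c2 in range(num_classes):
            t_c2 = h_t[y_t_pseudo == c2]
            if c2 == c or len(t_c2) == 0:
                continue
            inter = inter + ((s_c.mean(0) - t_c2.mean(0)) ** 2).sum()
            pairs += 1
    return intra - inter / max(pairs, 1)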

In Search of Lost Domain Generalization

This paper implements DomainBed, a testbed for domain generalization including seven multi-domain datasets, nine baseline algorithms, and three model selection criteria, and finds that, when carefully implemented, empirical risk minimization shows state-of-the-art performance across all datasets.
...