Corpus ID: 67876975

Domain Adaptation with Asymmetrically-Relaxed Distribution Alignment

@inproceedings{Wu2019DomainAW,
  title={Domain Adaptation with Asymmetrically-Relaxed Distribution Alignment},
  author={Yifan Wu and Ezra Winston and Divyansh Kaushik and Zachary Chase Lipton},
  booktitle={ICML},
  year={2019}
}
Domain adaptation addresses the common problem in which the target distribution generating our test data drifts from the source (training) distribution. While domain adaptation is impossible absent assumptions, strict conditions, e.g. covariate or label shift, enable principled algorithms. Recently-proposed domain-adversarial approaches consist of aligning source and target encodings, often motivating this approach as minimizing two (of three) terms in a theoretical bound on target error… 
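
For context, the theoretical bound alluded to above is the classic three-term target-error bound of Ben-David et al. (see "Analysis of Representations for Domain Adaptation" in the references). Schematically, in standard notation (assumed here, not quoted from this page):

$\epsilon_T(h) \;\le\; \epsilon_S(h) \;+\; d(\mathcal{D}_S, \mathcal{D}_T) \;+\; \lambda$

where $\epsilon_S(h)$ and $\epsilon_T(h)$ are the source and target risks of hypothesis $h$, $d$ is a hypothesis-class-dependent discrepancy between the two (encoded) distributions (e.g. the $\mathcal{H}\Delta\mathcal{H}$-divergence), and $\lambda$ is the best joint risk achievable by any single hypothesis. Domain-adversarial methods minimize the first two terms.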

Citations

Mapping conditional distributions for domain adaptation under generalized target shift

TLDR
A novel and general approach to aligning pretrained representations that circumvents existing drawbacks and learns an optimal transport map, implemented as a neural network, which maps source representations onto target ones.

Implicit Class-Conditioned Domain Alignment for Unsupervised Domain Adaptation

TLDR
This work proposes a sampling-based implicit alignment approach, where the sample selection procedure is implicitly guided by the pseudo-labels, and reveals the existence of a domain-discriminator shortcut in misaligned classes, which is addressed by the proposed implicit alignment approach to facilitate domain-adversarial learning.

FRuDA: Framework for Distributed Adversarial Domain Adaptation

TLDR
Evaluation of FRuDA with five image and speech datasets shows that it can boost target domain accuracy by up to 50% and improve the training efficiency of adversarial UDA by at least 50%.

Pairwise Adversarial Training for Unsupervised Class-imbalanced Domain Adaptation

TLDR
Pairwise adversarial training (PAT) is a novel data-augmentation method which can be integrated into existing UDA models to tackle the class-imbalanced domain adaptation (CDA) problem, and it achieves considerable improvements on benchmarks over the original models as well as state-of-the-art CDA methods.

Adversarial Unsupervised Domain Adaptation with Conditional and Label Shift: Infer, Align and Iterate

TLDR
An adversarial unsupervised domain adaptation (UDA) method is proposed for inherent conditional and label shifts, which infers the marginal p(y) and aligns p(x|y) iteratively at the training stage, and precisely aligns the posterior p(y|x) at the testing stage.

An Adversarial Perturbation Oriented Domain Adaptation Approach for Semantic Segmentation

TLDR
This work proposes to explicitly train a domain-invariant classifier by generating and defending against pointwise feature-space adversarial perturbations, and achieves state-of-the-art performance on two challenging domain adaptation tasks for semantic segmentation.

f-Domain-Adversarial Learning: Theory and Algorithms

TLDR
A novel generalization bound for domain adaptation is derived that exploits a new measure of discrepancy between distributions based on a variational characterization of f-divergences, and a new algorithmic framework is derived that introduces a key correction to the original adversarial training method of Ganin et al. (2016).
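
For reference, the variational characterization of f-divergences used in this line of work is the Nguyen–Wainwright–Jordan lower bound (standard notation, assumed rather than quoted from the paper):

$D_f(P \,\|\, Q) \;=\; \sup_{T} \; \mathbb{E}_{x \sim P}[T(x)] - \mathbb{E}_{x \sim Q}[f^{*}(T(x))]$

where $f^{*}$ is the convex conjugate of $f$ and the supremum ranges over measurable functions $T$; parameterizing $T$ as a discriminator network turns this bound into a trainable adversarial objective.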

DIRL: Domain-Invariant Representation Learning for Sim-to-Real Transfer

TLDR
A domain-invariant representation learning (DIRL) algorithm is proposed to adapt deep models to the physical environment with a small amount of real data, combined with a triplet distribution loss that makes the conditional distributions disjoint in the shared feature space.

Robust Local Preserving and Global Aligning Network for Adversarial Domain Adaptation

TLDR
This paper addresses the problem of learning UDA models with access only to noisy labels, and proposes a novel method called the robust local preserving and global aligning network (RLPGA), which improves robustness to label noise from two aspects.

Adversarial Support Alignment

TLDR
This work proposes the symmetric support difference as a divergence measure to quantify the mismatch between supports, and shows that select discriminators are able to map support differences in the input space to support differences in their one-dimensional output space.
...

References

SHOWING 1-10 OF 30 REFERENCES

Partial Adversarial Domain Adaptation

TLDR
This paper presents Partial Adversarial Domain Adaptation (PADA), which simultaneously alleviates negative transfer by down-weighting the data of outlier source classes for training both the source classifier and the domain adversary, and promotes positive transfer by matching the feature distributions in the shared label space.
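
The down-weighting step admits a compact sketch. A minimal PyTorch rendering, assuming (as is common for PADA-style methods, not verified against this paper's exact formulation) that class weights are the normalized average of the classifier's predictions on unlabeled target data:

import torch

def outlier_class_weights(target_probs: torch.Tensor) -> torch.Tensor:
    # target_probs: (n_target, num_source_classes) softmax outputs.
    # Classes absent from the target receive low average probability
    # and are therefore down-weighted in both the source classifier
    # loss and the domain adversary.
    gamma = target_probs.mean(dim=0)
    return gamma / gamma.max()  # normalize so the largest weight is 1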

A DIRT-T Approach to Unsupervised Domain Adaptation

TLDR
Two novel and related models are proposed: the Virtual Adversarial Domain Adaptation (VADA) model, which combines domain-adversarial training with a penalty term that punishes violations of the cluster assumption, and the Decision-boundary Iterative Refinement Training with a Teacher (DIRT-T) model, which takes the VADA model as initialization and employs natural-gradient steps to further minimize cluster-assumption violation.
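
The cluster-assumption penalty is easy to sketch in isolation. A minimal PyTorch version of the conditional-entropy term (the virtual adversarial component and the paper's loss weighting are omitted):

import torch
import torch.nn.functional as F

def conditional_entropy(logits: torch.Tensor) -> torch.Tensor:
    # Mean prediction entropy on a batch; driving it down pushes
    # decision boundaries away from dense regions of the unlabeled
    # target data, per the cluster assumption.
    p = F.softmax(logits, dim=1)
    log_p = F.log_softmax(logits, dim=1)
    return -(p * log_p).sum(dim=1).mean()

A typical use adds lambda_t * conditional_entropy(model(x_target)) to the supervised source loss, where lambda_t is a tuning knob.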

CyCADA: Cycle-Consistent Adversarial Domain Adaptation

TLDR
A novel discriminatively-trained Cycle-Consistent Adversarial Domain Adaptation model that adapts representations at both the pixel-level and feature-level, enforces cycle-consistency while leveraging a task loss, and does not require aligned pairs is proposed.
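
The cycle-consistency constraint mentioned here is the CycleGAN-style reconstruction penalty; with source-to-target and target-to-source generators $G_{S\to T}$ and $G_{T\to S}$ (notation assumed):

$\mathcal{L}_{cyc} = \mathbb{E}_{x \sim S}\big[\lVert G_{T\to S}(G_{S\to T}(x)) - x \rVert_1\big] + \mathbb{E}_{x \sim T}\big[\lVert G_{S\to T}(G_{T\to S}(x)) - x \rVert_1\big]$

so that translated images can be mapped back to their originals, preserving content while changing style.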

Partial Transfer Learning with Selective Adversarial Networks

TLDR
Selective Adversarial Network (SAN) is presented, which simultaneously circumvents negative transfer by selecting out the outlier source classes and promotes positive transfer by maximally matching the data distributions in the shared label space.

Adversarial Discriminative Domain Adaptation

TLDR
It is shown that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and the promise of the approach is demonstrated by exceeding state-of-the-art unsupervised adaptation results on standard domain adaptation tasks as well as a difficult cross-modality object classification task.

Domain-Adversarial Training of Neural Networks

TLDR
A new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions, which can be achieved in almost any feed-forward model by augmenting it with a few standard layers and a new gradient reversal layer.
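
The gradient reversal layer named in this summary is small enough to sketch. A common PyTorch rendering (the paper's schedule for the scale is omitted; lambd here is a fixed constant):

import torch

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass; multiplies gradients by -lambd in
    # the backward pass, so the feature extractor learns to fool the
    # domain classifier stacked on top of it.
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)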

Improved Training of Wasserstein GANs

TLDR
This work proposes an alternative to clipping weights: penalize the norm of the gradient of the critic with respect to its input, which performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
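
The proposed penalty is compact enough to sketch. A minimal PyTorch version (the critic signature is an assumption; the penalty is added to the critic loss with a weight, canonically 10):

import torch

def gradient_penalty(critic, real, fake):
    # Random interpolates between real and generated samples.
    eps = torch.rand(real.size(0), *([1] * (real.dim() - 1)),
                     device=real.device)
    interp = eps * real.detach() + (1 - eps) * fake.detach()
    interp.requires_grad_(True)
    # Gradient of the critic's output w.r.t. its input.
    scores = critic(interp)
    grads = torch.autograd.grad(scores.sum(), interp,
                                create_graph=True)[0]
    grad_norm = grads.flatten(1).norm(2, dim=1)
    # Push the per-sample gradient norm toward 1.
    return ((grad_norm - 1) ** 2).mean()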

Domain Adaptation under Target and Conditional Shift

TLDR
This work considers domain adaptation under three possible scenarios, using kernel embeddings of conditional as well as marginal distributions, and proposes to estimate the weights or transformations by reweighting or transforming training data to reproduce the covariate distribution on the test domain.
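
The reweighting idea can be illustrated with a far simpler estimator than the paper's kernel-embedding machinery. A minimal NumPy sketch, assuming the target class marginal is already known or estimated and labels are integers 0..K-1:

import numpy as np

def label_shift_weights(y_source, p_target):
    # Per-example weights w(y) = p_target(y) / p_source(y), valid under
    # label (target) shift, i.e. when p(x|y) is shared across domains.
    classes, counts = np.unique(y_source, return_counts=True)
    p_source = counts / counts.sum()
    w_per_class = p_target[classes] / p_source
    return w_per_class[np.searchsorted(classes, y_source)]

The resulting vector can be passed as a per-sample weight to any weighted empirical-risk minimizer.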

Impossibility Theorems for Domain Adaptation

The domain adaptation problem in machine learning occurs when the test data generating distribution differs from the one that generates the training data. It is clear that the success of learning under such circumstances depends on similarities between the two data distributions.

Analysis of Representations for Domain Adaptation

TLDR
The theory illustrates the tradeoffs inherent in designing a representation for domain adaptation and gives a new justification for a recently proposed model which explicitly minimizes the difference between the source and target domains, while at the same time maximizing the margin of the training set.