Corpus ID: 208910752

Less Confusion More Transferable: Minimum Class Confusion for Versatile Domain Adaptation

Ying Jin, Ximei Wang, Mingsheng Long, Jianmin Wang
Domain Adaptation (DA) transfers a learning model from a labeled source domain to an unlabeled target domain that follows a different distribution. There are a variety of DA scenarios subject to label sets and domain configurations, including closed-set and partial-set DA, as well as multi-source and multi-target DA. Notably, existing DA methods are generally designed only for a specific scenario and may underperform in scenarios they are not tailored to. Towards a versatile DA…
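The class-confusion idea at the heart of this paper can be sketched in a few lines of NumPy. This is an illustrative simplification, not the paper's implementation: the function name is my own, and the full MCC loss additionally applies uncertainty (entropy) weighting over examples, which is omitted here.

```python
import numpy as np

def minimum_class_confusion(probs):
    # probs: [batch, classes] softmax predictions on target data.
    # Pairwise class confusion: how strongly two classes are
    # predicted together for the same examples.
    confusion = probs.T @ probs
    # Normalize each row so per-class confusions sum to one.
    confusion /= confusion.sum(axis=1, keepdims=True)
    # The loss keeps only cross-class (off-diagonal) confusion.
    off_diagonal = confusion - np.diag(np.diag(confusion))
    return off_diagonal.sum() / probs.shape[1]

# Confident, well-separated predictions yield low confusion;
# uniform predictions yield high confusion.
sharp = np.array([[0.98, 0.01, 0.01],
                  [0.01, 0.98, 0.01],
                  [0.01, 0.01, 0.98]])
flat = np.full((3, 3), 1 / 3)
assert minimum_class_confusion(sharp) < minimum_class_confusion(flat)
```

Minimizing this quantity on target predictions discourages the classifier from confusing pairs of classes, without requiring target labels.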

Learning transferable and discriminative features for unsupervised domain adaptation

A novel method called learning TransFerable and Discriminative Features for unsupervised domain adaptation (TFDF) is proposed to optimize the two objectives of transferability and discriminability simultaneously, integrate them into the Structural Risk Minimization (SRM) framework, and learn a domain-invariant classifier.

Unsupervised domain adaptation with exploring more statistics and discriminative information

This paper adopts the recently proposed MMCD statistic to measure domain discrepancy and proposes to learn more discriminative features to avoid class confusion, where the inner product of the classifier predictions with their transposes is used to reflect the confusion relationship between different classes.

Cross-Domain Palmprint Recognition via Regularized Adversarial Domain Adaptive Hashing

A novel Regularized Adversarial Domain Adaptive Hashing method (R-ADAH) for cross-domain palmprint recognition based on the Deep Hashing Network (DHN) is proposed, which shows a promising increase in recognition performance.

Learning to Transfer Examples for Partial Domain Adaptation

A unified approach to partial domain adaptation (PDA) is proposed, the Example Transfer Network (ETN), which jointly learns domain-invariant representations across domains and a progressive weighting scheme to quantify the transferability of source examples.

Larger Norm More Transferable: An Adaptive Feature Norm Approach for Unsupervised Domain Adaptation

It is demonstrated that progressively adapting the feature norms of the two domains to a large range of values can result in significant transfer gains, implying that those task-specific features with larger norms are more transferable.
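A minimal sketch of the stepwise norm-enlargement idea, assuming a squared penalty that pulls each feature's norm toward its current norm plus a fixed residual delta_r (treated as a constant target). The bare-NumPy gradient step and all names are illustrative, not the paper's code:

```python
import numpy as np

def norm_enlargement_step(features, delta_r=1.0, lr=0.1):
    # features: [batch, d] task-specific features.
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    targets = norms + delta_r          # detached, per-example norm targets
    # Gradient of (||f|| - target)^2 w.r.t. f is 2 (||f|| - target) f / ||f||.
    grad = 2 * (norms - targets) * features / norms
    return features - lr * grad        # one descent step enlarges the norm

f = np.array([[3.0, 4.0]])             # norm 5
f_next = norm_enlargement_step(f)
assert np.linalg.norm(f_next) > np.linalg.norm(f)
```

Repeating this step progressively pushes feature norms toward larger values, which is the claimed source of the transfer gains.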

A DIRT-T Approach to Unsupervised Domain Adaptation

Two novel and related models are proposed: the Virtual Adversarial Domain Adaptation (VADA) model, which combines domain-adversarial training with a penalty term that punishes violations of the cluster assumption, and the Decision-boundary Iterative Refinement Training with a Teacher (DIRT-T) model, which takes the VADA model as initialization and employs natural gradient steps to further minimize cluster assumption violations.
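The cluster-assumption penalty used here is, in essence, the conditional entropy of the target predictions; a minimal NumPy sketch (function name and setup are my own):

```python
import numpy as np

def conditional_entropy(probs, eps=1e-8):
    # probs: [batch, classes] softmax outputs on target data.
    # Low average entropy means decision boundaries avoid
    # high-density regions of the target distribution.
    return -np.mean(np.sum(probs * np.log(probs + eps), axis=1))

# Uniform predictions sit on a decision boundary (high entropy);
# confident predictions respect the cluster assumption (low entropy).
assert conditional_entropy(np.array([[0.99, 0.01]])) < conditional_entropy(np.full((1, 2), 0.5))
```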

Maximum Classifier Discrepancy for Unsupervised Domain Adaptation

A new approach that aligns the source and target distributions by utilizing task-specific decision boundaries: the discrepancy between two classifiers' outputs is maximized to detect target samples that are far from the support of the source.
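The discrepancy in question can be measured as the mean absolute difference between the two classifiers' probability outputs. A simple NumPy sketch of that quantity (training then alternates between maximizing it with respect to the classifiers and minimizing it with respect to the feature extractor):

```python
import numpy as np

def classifier_discrepancy(p1, p2):
    # p1, p2: [batch, classes] softmax outputs of the two classifiers.
    # Target samples on which the classifiers disagree lie far from
    # the support of the source distribution.
    return np.mean(np.abs(p1 - p2))

agree = np.array([[0.9, 0.1]])
disagree = np.array([[0.1, 0.9]])
assert classifier_discrepancy(agree, disagree) > classifier_discrepancy(agree, agree)
```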

Contrastive Adaptation Network for Unsupervised Domain Adaptation

This paper proposes the Contrastive Adaptation Network (CAN), which optimizes a new metric that explicitly models the intra-class domain discrepancy and the inter-class domain discrepancy, and designs an alternating update strategy for training CAN in an end-to-end manner.

Moment Matching for Multi-Source Domain Adaptation

A new deep learning approach, Moment Matching for Multi-Source Domain Adaptation (M3SDA), aims to transfer knowledge learned from multiple labeled source domains to an unlabeled target domain by dynamically aligning the moments of their feature distributions.
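Moment matching can be sketched as summing pairwise distances between the first few raw moments of each domain's features. This simplified NumPy version (names my own) omits the classifier-alignment component that the full method pairs with it:

```python
import numpy as np

def moment_distance(domains, k=2):
    # domains: list of feature arrays, one per domain, each [n_i, d].
    # Sum pairwise distances between the first k raw moments of
    # every pair of domains (source-source and source-target alike).
    dist = 0.0
    for order in range(1, k + 1):
        moments = [np.mean(d ** order, axis=0) for d in domains]
        for i in range(len(moments)):
            for j in range(i + 1, len(moments)):
                dist += np.linalg.norm(moments[i] - moments[j])
    return dist

# Identical domains have zero moment distance; shifted ones do not.
d0, d1 = np.zeros((4, 2)), np.ones((4, 2))
assert moment_distance([d0, d0]) == 0.0
assert moment_distance([d0, d1]) > 0.0
```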

Transferable Adversarial Training: A General Approach to Adapting Deep Classifiers

Transferable Adversarial Training (TAT) is proposed to enable the adaptation of deep classifiers; it advances the state of the art on a variety of domain adaptation tasks in vision and NLP, including object recognition, learning from synthetic to real data, and sentiment classification.

Domain-Symmetric Networks for Adversarial Domain Adaptation

This paper proposes a new domain adaptation method called Domain-Symmetric Networks (SymNets), based on a symmetric design of source and target task classifiers, on top of which an additional classifier is constructed that shares its layer neurons with both.

Partial Adversarial Domain Adaptation

This paper presents Partial Adversarial Domain Adaptation (PADA), which simultaneously alleviates negative transfer by down-weighting the data of outlier source classes when training both the source classifier and the domain adversary, and promotes positive transfer by matching the feature distributions in the shared label space.
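The down-weighting scheme can be sketched by averaging the predictions a source-trained classifier makes on target data: outlier source classes, which the target rarely predicts, receive near-zero weight. A simplified NumPy illustration (names my own):

```python
import numpy as np

def class_weights(target_probs):
    # target_probs: [batch, source_classes] predictions on target data.
    # Classes the target never predicts get weights close to zero,
    # down-weighting them in the source classifier and domain adversary.
    w = target_probs.mean(axis=0)
    return w / w.max()

# Class 2 never appears in the target predictions, so it is an
# outlier source class and gets zero weight.
w = class_weights(np.array([[0.5, 0.5, 0.0],
                            [0.6, 0.4, 0.0]]))
assert w[2] < w[0]
```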

Domain-Adversarial Training of Neural Networks

A new representation learning approach for domain adaptation, in which training and test data come from similar but different distributions; the adaptation can be achieved in almost any feed-forward model by augmenting it with a few standard layers and a new gradient reversal layer.
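A gradient reversal layer is the identity in the forward pass and multiplies gradients by a negative factor in the backward pass, so the feature extractor learns to maximize the domain classifier's loss. A framework-free NumPy sketch of the two passes (real implementations hook into a framework's autograd; names my own):

```python
import numpy as np

def grl_forward(x):
    # Forward pass: identity, features flow through unchanged.
    return x

def grl_backward(grad_output, lam=1.0):
    # Backward pass: flip and scale the incoming gradient, so
    # upstream layers are trained adversarially against the
    # domain classifier.
    return -lam * grad_output

x = np.array([1.0, -2.0])
assert np.allclose(grl_forward(x), x)
assert np.allclose(grl_backward(np.array([0.5, 0.5])), [-0.5, -0.5])
```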