Maximum Classifier Discrepancy for Unsupervised Domain Adaptation

@inproceedings{Saito2018MaximumCD,
  title={Maximum Classifier Discrepancy for Unsupervised Domain Adaptation},
  author={Kuniaki Saito and Kohei Watanabe and Yoshitaka Ushiku and Tatsuya Harada},
  booktitle={2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2018},
  pages={3723--3732}
}
In this work, we present a method for unsupervised domain adaptation. [...] Key Method: A feature generator learns to generate target features near the support to minimize the discrepancy. Our method outperforms other methods on several datasets of image classification and semantic segmentation. The code is available at https://github.com/mil-tokyo/MCD_DA
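The discrepancy that the generator minimizes (and the two classifiers maximize) is, in the MCD paper, the mean absolute difference between the two classifiers' class-probability outputs on target samples. A minimal NumPy sketch of that loss, assuming plain softmax classifiers (function names are mine, not from the released code):

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax with max-subtraction for numerical stability."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def mcd_discrepancy(logits1, logits2):
    """Mean L1 distance between two classifiers' probability outputs
    on the same batch of target samples (the MCD discrepancy loss)."""
    p1, p2 = softmax(logits1), softmax(logits2)
    return np.abs(p1 - p2).mean()

# Toy check on a batch of 4 target samples, 3 classes:
# identical classifiers give zero discrepancy, disagreeing ones a positive value.
rng = np.random.default_rng(0)
a = rng.normal(size=(4, 3))
b = rng.normal(size=(4, 3))
print(mcd_discrepancy(a, a))      # 0.0
print(mcd_discrepancy(a, b) > 0)  # True
```

In training, the classifiers' weights are updated to maximize this quantity on target data (after a supervised source step), while the generator is updated to minimize it, pushing target features inside the source support.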
Unsupervised Domain Adaptation: An Adaptive Feature Norm Approach
TLDR
This paper empirically reveals that the erratic discrimination of the target domain mainly stems from its much lower feature norms relative to the source domain, and demonstrates that adapting the feature norms of the source and target domains toward equilibrium over a large range of values yields significant domain transfer gains.
Minimizing Outputs’ Differences of Classifiers with Different Responsibilities for Unsupervised Adversarial Domain Adaptation
  • Weiran Zhang, Xi Chen, Wei Li
  • 2021 6th International Conference on Intelligent Computing and Signal Processing (ICSP)
  • 2021
Adversarial domain adaptation has achieved much in the field of Unsupervised Domain Adaptation (UDA). However, initial adversarial domain adaptation methods do not consider the [...]
Opposite Structure Learning for Semi-supervised Domain Adaptation
TLDR
A novel framework for semi-supervised domain adaptation that unifies the learning of opposite structures (UODA) is proposed, which progressively updates the distance measurement and the feature representation on both domains via an adversarial training paradigm.
Unsupervised Domain Adaptation via Discriminative Classes-Center Feature Learning in Adversarial Network
TLDR
A novel approach to unsupervised domain adaptation via discriminative classes-center feature learning in an adversarial network (C2FAN), which learns domain-invariant representations while paying close attention to the classification decision boundary, improving the transfer of knowledge across domains.
Adversarial Dual Distinct Classifiers for Unsupervised Domain Adaptation
TLDR
A novel Adversarial Dual Distinct Classifiers Network (AD2CN) is proposed to align the source and target data distributions while matching task-specific category boundaries, under the guidance of discriminative cross-domain alignment.
Class Consistency Driven Unsupervised Deep Adversarial Domain Adaptation
TLDR
This work proposes a novel adversarial approach that judiciously refines the space learned by the domain classifier by incorporating class-level information, and finds that the proposed model produces a compact and better-aligned feature space.
Cross-Domain Gradient Discrepancy Minimization for Unsupervised Domain Adaptation
TLDR
A cross-domain gradient discrepancy minimization (CGDM) method that explicitly minimizes the discrepancy between the gradients generated by source samples and target samples, so that the gradient signal serves as effective supervision for improving accuracy on target samples.
Pairwise Similarity Regularization for Adversarial Domain Adaptation
TLDR
A Pairwise Similarity Regularization (PSR) approach that exploits the cluster structure of the target-domain data and minimizes the divergence between the pairwise similarity of the clustering partition and that of the pseudo predictions, eliminating the negative effect of unreliable pseudo labels.
Multiple Classifiers Based Maximum Classifier Discrepancy for Unsupervised Domain Adaptation
TLDR
This paper employs the principle that the classifiers should differ from one another to construct a discrepancy loss for multiple classifiers, and demonstrates that, on average, a three-classifier structure yields the best trade-off between accuracy and efficiency.
Bi-Classifier Determinacy Maximization for Unsupervised Domain Adaptation
TLDR
This paper designs a novel classifier determinacy disparity (CDD) metric, which formulates classifier discrepancy as the class relevance of distinct target predictions and implicitly constrains target feature discriminability.

References

Showing 1-10 of 47 references
Analysis of Representations for Domain Adaptation
TLDR
The theory illustrates the trade-offs inherent in designing a representation for domain adaptation and gives a new justification for a recently proposed model that explicitly minimizes the difference between the source and target domains while maximizing the margin of the training set.
Unsupervised Domain Adaptation with Residual Transfer Networks
TLDR
Empirical evidence shows that this approach to domain adaptation in deep networks, which jointly learns adaptive classifiers and transferable features from labeled source-domain data and unlabeled target-domain data, outperforms state-of-the-art methods on standard domain adaptation benchmarks.
A theory of learning from different domains
TLDR
A classifier-induced divergence measure that can be estimated from finite, unlabeled samples from the two domains, together with an analysis of how to choose the optimal combination of source and target error as a function of the divergence, the sample sizes of both domains, and the complexity of the hypothesis class.
Asymmetric Tri-training for Unsupervised Domain Adaptation
TLDR
This work proposes an asymmetric tri-training method for unsupervised domain adaptation, in which two networks label unlabeled target samples and a third network is trained on the pseudo-labeled samples to obtain target-discriminative representations.
Learning Transferrable Representations for Unsupervised Domain Adaptation
TLDR
A unified deep learning framework is proposed in which the representation, the cross-domain transformation, and the target label inference are jointly optimized end-to-end for unsupervised domain adaptation.
Associative Domain Adaptation
We propose associative domain adaptation, a novel technique for end-to-end domain adaptation with neural networks for the task of inferring class labels for an unlabeled target domain based on the [...]
Unsupervised Pixel-Level Domain Adaptation with Generative Adversarial Networks
TLDR
This generative adversarial network (GAN)-based method adapts source-domain images to appear as if drawn from the target domain, and outperforms the state-of-the-art on a number of unsupervised domain adaptation scenarios by large margins.
Adversarial Discriminative Domain Adaptation
TLDR
It is shown that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and the promise of the approach is demonstrated by exceeding state-of-the-art unsupervised adaptation results on standard domain adaptation tasks as well as a difficult cross-modality object classification task.
Return of Frustratingly Easy Domain Adaptation
TLDR
This work proposes a simple, effective, and efficient method for unsupervised domain adaptation called CORrelation ALignment (CORAL), which minimizes domain shift by aligning the second-order statistics of the source and target distributions, without requiring any target labels.
Transfer learning from multiple source domains via consensus regularization
TLDR
This work proposes a consensus regularization framework for transfer learning from multiple source domains to a target domain, in which a local classifier is trained by considering both the local data available in a source domain and the prediction consensus with the classifiers from the other source domains.