Corpus ID: 221293232

Multiple-Source Adaptation with Domain Classifiers

@article{Cortes2020MultipleSourceAW,
  title={Multiple-Source Adaptation with Domain Classifiers},
  author={Corinna Cortes and Mehryar Mohri and Ananda Theertha Suresh and Ningshan Zhang},
  journal={ArXiv},
  year={2020},
  volume={abs/2008.11036}
}
We consider the multiple-source adaptation (MSA) problem and improve a previously proposed MSA solution, where accurate density estimation per domain is required to obtain favorable learning guarantees. In this work, we replace the difficult task of density estimation per domain with a much easier task of domain classification, and show that the two solutions are equivalent given the true densities and domain classifier, yet the newer approach benefits from more favorable guarantees when… 
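The combination rule behind this idea can be sketched in a few lines. In the earlier MSA solution, the predictor is a distribution-weighted combination of per-domain hypotheses, which requires the per-domain densities D_k(x); if a domain classifier is trained on an equal-sized mixture of the sources, its posterior P(k | x) is proportional to D_k(x), so the same weights can be computed from classifier outputs alone. The sketch below is illustrative and simplified (the function names, the equal-prior assumption, and the toy inputs are ours, not from the paper):

```python
import numpy as np

def distribution_weighted_predict(x, predictors, domain_posterior, z):
    """Combine per-domain predictors using a domain classifier.

    predictors: list of p functions h_k(x) -> prediction (e.g. class scores).
    domain_posterior: function x -> array of P(domain = k | x); assumed to be
        trained on an equal-sized mixture of the p sources, so P(k | x) is
        proportional to the domain density D_k(x) and can replace it.
    z: mixture weights over the p domains (nonnegative, summing to 1).
    """
    post = domain_posterior(x)                     # shape (p,)
    w = z * post                                   # z_k * P(k | x)
    w = w / w.sum()                                # normalized combination weights
    preds = np.stack([h(x) for h in predictors])   # shape (p, ...)
    return np.tensordot(w, preds, axes=1)          # distribution-weighted prediction

# Toy usage with two domains and constant hypothetical predictors:
z = np.array([0.5, 0.5])
predictors = [lambda x: np.array([1.0, 0.0]),
              lambda x: np.array([0.0, 1.0])]
domain_posterior = lambda x: np.array([0.8, 0.2])  # stand-in domain classifier
out = distribution_weighted_predict(None, predictors, domain_posterior, z)
# out is 0.8 * [1, 0] + 0.2 * [0, 1] = [0.8, 0.2]
```

Given the true densities and a perfect domain classifier, this classifier-based weighting coincides with the original density-based combination, which is the equivalence the abstract refers to.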
1 Citation

Tables from this paper

Advances and Open Problems in Federated Learning
TLDR
Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.

References

Showing 1-10 of 64 references
Domain adaptation from multiple sources via auxiliary classifiers
TLDR
A new data-dependent regularizer based on a smoothness assumption is incorporated into Least-Squares SVM (LS-SVM); it enforces that the target classifier shares similar decision values with the auxiliary classifiers from relevant source domains on the unlabeled patterns of the target domain.
Multiple source domain adaptation: A sharper bound using weighted Rademacher complexity
TLDR
A novel complexity measure (weighted Rademacher complexity) is introduced to restrict the complexity of the hypothesis class in multiple-source domain adaptation; its self-bounding properties are explored, and new generalization bounds for multiple-source domain adaptation are given.
Discovering Latent Domains for Multisource Domain Adaptation
TLDR
This paper presents both a novel domain transform mixture model which outperforms a single transform model when multiple domains are present, and a novel constrained clustering method that successfully discovers latent domains.
Moment Matching for Multi-Source Domain Adaptation
TLDR
A new deep learning approach, Moment Matching for Multi-Source Domain Adaptation (M3SDA), aims to transfer knowledge learned from multiple labeled source domains to an unlabeled target domain by dynamically aligning moments of their feature distributions.
Adversarial Multiple Source Domain Adaptation
TLDR
This paper proposes multisource domain adversarial networks (MDAN) that approach domain adaptation by optimizing task-adaptive generalization bounds and conducts extensive experiments showing superior adaptation performance on both classification and regression problems: sentiment analysis, digit classification, and vehicle counting.
Algorithms and Theory for Multiple-Source Adaptation
TLDR
The theory, algorithms, and empirical results provide a full solution for the multiple-source adaptation problem with very practical benefits and derive new normalized solutions with strong theoretical guarantees for the cross-entropy loss and other similar losses.
A Two-Stage Weighting Framework for Multi-Source Domain Adaptation
TLDR
A two-stage domain adaptation methodology is proposed that combines weighted data from multiple sources with the target domain data, based on both marginal and conditional probability differences, using the weighted Rademacher complexity measure.
Domain Adaptation From Multiple Sources: A Domain-Dependent Regularization Approach
TLDR
A new framework called domain adaptation machine (DAM) is proposed for the multiple-source domain adaptation problem, along with a new domain-dependent regularizer based on a smoothness assumption, which enforces that the target classifier shares similar decision values with the relevant base classifiers on the unlabeled instances from the target domain.
Analysis of Representations for Domain Adaptation
TLDR
The theory illustrates the tradeoffs inherent in designing a representation for domain adaptation and gives a new justification for a recently proposed model which explicitly minimizes the difference between the source and target domains, while at the same time maximizing the margin of the training set.
Robust visual domain adaptation with low-rank reconstruction
TLDR
This paper transforms the visual samples in the source domain into an intermediate representation such that each transformed source sample can be linearly reconstructed by the samples of the target domain, making it more robust than previous methods.