Optimal Transport for Domain Adaptation

@article{Courty2017OptimalTF,
  title={Optimal Transport for Domain Adaptation},
  author={Nicolas Courty and R{\'e}mi Flamary and Devis Tuia and Alain Rakotomamonjy},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2017},
  volume={39},
  pages={1853-1865}
}
Domain adaptation is one of the most challenging tasks of modern data analytics. We learn a transportation plan matching both PDFs, which constrains labeled samples of the same class in the source domain to remain close during transport. This way, we exploit at the same time the labeled samples in the source and the distributions observed in both domains. Experiments on toy and challenging real visual adaptation examples show the interest of the method, which consistently outperforms state of…
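The transport-then-classify idea described above can be sketched with the POT (Python Optimal Transport) library. The snippet below is a minimal sketch, not the paper's exact algorithm: it uses plain entropic (Sinkhorn) transport and a barycentric mapping on synthetic toy data, and all shapes and regularization values are illustrative. The class-regularized variant proposed in the paper is, to our knowledge, exposed in POT as `ot.da.SinkhornLpl1Transport` (assumed API).

```python
# Minimal sketch of OT-based domain adaptation, assuming the POT library
# is installed (pip install pot). Entropic (Sinkhorn) transport stands in
# for the paper's class-regularized coupling.
import numpy as np
import ot

rng = np.random.RandomState(0)
# Toy data: target is a shifted/sheared version of the source distribution.
Xs = rng.randn(60, 2)                                   # source samples
ys = (Xs[:, 0] > 0).astype(int)                         # source labels
Xt = rng.randn(80, 2) @ np.array([[1.0, 0.5],
                                  [0.0, 1.0]]) + np.array([3.0, 1.0])

# Uniform empirical marginals and squared-Euclidean ground cost.
a, b = ot.unif(Xs.shape[0]), ot.unif(Xt.shape[0])
M = ot.dist(Xs, Xt)                                     # cost matrix

# Entropy-regularized transport plan (reg value is illustrative).
G = ot.sinkhorn(a, b, M, reg=1.0)

# Barycentric mapping: push source samples into the target domain.
Xs_mapped = (G / G.sum(axis=1, keepdims=True)) @ Xt

# A classifier trained on (Xs_mapped, ys) can then be applied to Xt.
```

In the paper, an additional group-sparse regularizer discourages mass from one source class from spreading over samples associated with several target clusters, which is what keeps same-class source samples close during transport.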


DeepJDOT: Deep Joint distribution optimal transport for unsupervised domain adaptation
TLDR
Through a measure of discrepancy on joint deep representations/labels based on optimal transport, this work not only learns new data representations aligned between the source and target domains, but also simultaneously preserves the discriminative information used by the classifier.
Metric Learning in Optimal Transport for Domain Adaptation
TLDR
This paper derives a generalization bound on the target error involving several Wasserstein distances and designs an algorithm (MLOT) which optimizes a Mahalanobis distance leading to a transportation plan that adapts better.
Theoretical Analysis of Domain Adaptation with Optimal Transport
TLDR
A theoretical study on the advantages that concepts borrowed from optimal transportation theory can bring to multi-source domain adaptation is provided, showing that the Wasserstein metric can be used as a divergence measure between distributions to obtain generalization guarantees for three different learning settings.
Enhanced Transport Distance for Unsupervised Domain Adaptation
TLDR
This work proposes an enhanced transport distance (ETD) for UDA: an attention-aware transport distance, which can be viewed as prediction feedback from the iteratively learned classifier, is built to measure the domain discrepancy.
Target to Source Coordinate-Wise Adaptation of Pre-trained Models
TLDR
This paper introduces an original scenario where the previously trained source model is directly reused on target data, requiring only a transformation from the target domain to the source domain, found via a greedy coordinate-wise transformation leveraging optimal transport.
Transfer metric learning for unsupervised domain adaptation
TLDR
A transfer metric learning method which simultaneously decreases intra-class distance and increases inter-class distance, even when the target data are unlabelled, so that the model is more robust on target data.
Optimal Transport for Multi-source Domain Adaptation under Target Shift
TLDR
This paper proposes a method based on optimal transport that performs multi-source adaptation and target-shift correction simultaneously by learning the class probabilities of the unlabeled target samples and the coupling that aligns two (or more) probability distributions.
Unsupervised Domain Adaptation through Self-Supervision
TLDR
This paper addresses unsupervised domain adaptation, the setting where labeled training data is available on a source domain but the goal is to have good performance on a target domain with only unlabeled data, by learning to perform auxiliary self-supervised tasks on both domains simultaneously.

References

Showing 1-10 of 72 references
Analysis of Representations for Domain Adaptation
TLDR
The theory illustrates the tradeoffs inherent in designing a representation for domain adaptation and gives a new justification for a recently proposed model which explicitly minimizes the difference between the source and target domains, while at the same time maximizing the margin of the training set.
Domain Adaptation via Transfer Component Analysis
TLDR
This work proposes a novel dimensionality reduction framework for reducing the distance between domains in a latent space for domain adaptation, and proposes both unsupervised and semi-supervised feature extraction approaches, which can dramatically reduce the distance between domain distributions by projecting data onto the learned transfer components.
Domain adaptation for object recognition: An unsupervised approach
TLDR
This paper presents one of the first studies on unsupervised domain adaptation in the context of object recognition, where data have been labeled only in the source domain (and therefore there are no correspondences between object categories across domains).
What you saw is not what you get: Domain adaptation using asymmetric kernel transforms
TLDR
This paper introduces ARC-t, a flexible model for supervised learning of non-linear transformations between domains, based on a novel theoretical result demonstrating that such transformations can be learned in kernel space.
Geodesic flow kernel for unsupervised domain adaptation
TLDR
This paper proposes a new kernel-based method that takes advantage of low-dimensional structures that are intrinsic to many vision datasets, and introduces a metric that reliably measures the adaptability between a pair of source and target domains.
Kernel Manifold Alignment for Domain Adaptation
TLDR
This paper introduces a kernel method for manifold alignment (KEMA) that can match an arbitrary number of data sources without needing corresponding pairs, just few labeled examples in all domains, and presents a reduced-rank version of KEMA for computational efficiency.
Impossibility Theorems for Domain Adaptation
The domain adaptation problem in machine learning occurs when the test data generating distribution differs from the one that generates the training data. It is clear that the success of learning…
Robust visual domain adaptation with low-rank reconstruction
TLDR
This paper transforms the visual samples in the source domain into an intermediate representation such that each transformed source sample can be linearly reconstructed by the samples of the target domain, making it more robust than previous methods.
Domain Adaptation with Regularized Optimal Transport
TLDR
A new optimal transport algorithm is proposed that incorporates label information in the optimization: this is achieved by combining an efficient matrix-scaling technique with a majorization of a non-convex regularization term.
Domain Adaptation: Learning Bounds and Algorithms
TLDR
A novel distance between distributions, discrepancy distance, is introduced that is tailored to adaptation problems with arbitrary loss functions, and Rademacher complexity bounds are given for estimating the discrepancy distance from finite samples for different loss functions.