• Corpus ID: 68220930

Regularized Learning for Domain Adaptation under Label Shifts

@article{Azizzadenesheli2019RegularizedLF,
  title={Regularized Learning for Domain Adaptation under Label Shifts},
  author={Kamyar Azizzadenesheli and Anqi Liu and Fanny Yang and Anima Anandkumar},
  journal={ArXiv},
  year={2019},
  volume={abs/1903.09734}
}
We propose Regularized Learning under Label Shifts (RLLS), a principled and practical domain-adaptation algorithm to correct for shifts in the label distribution between a source and a target domain. We first estimate importance weights using labeled source data and unlabeled target data, and then train a classifier on the weighted source samples. We derive a generalization bound for the classifier on the target domain which is independent of the (ambient) data dimensions, and instead only…
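
The two-stage recipe in the abstract (estimate per-class importance weights, then train on re-weighted source samples) can be sketched as follows. This is a minimal illustration, not the authors' code: the classifier interface, the least-squares solve, and the shrinkage constant `lam` are our assumptions standing in for the regularized estimator analyzed in the paper.

```python
# Minimal sketch of the two-stage label-shift recipe described above (not the
# authors' implementation). Assumes class labels 0..k-1, a pre-trained
# "black-box" classifier f0 with a scikit-learn-style .predict(), labeled
# source data (Xs, ys) as numpy arrays, and unlabeled target data Xt.
import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_label_shift_weights(f0, Xs, ys, Xt, k, lam=0.5):
    """Estimate per-class importance weights w[y] ~ p_target(y) / p_source(y)."""
    # Joint confusion matrix on source data: C[i, j] = P(f0 predicts i, true label j).
    preds_s = f0.predict(Xs)
    C = np.zeros((k, k))
    for p, y in zip(preds_s, ys):
        C[p, y] += 1.0 / len(ys)
    # Distribution of f0's predictions on the unlabeled target data.
    mu_t = np.bincount(f0.predict(Xt), minlength=k) / len(Xt)
    # Solve C w = mu_t (least squares for stability), then shrink the estimated
    # correction towards 1 -- a stand-in for the paper's regularization, whose
    # strength should grow with the amount and quality of the source data.
    w_hat, *_ = np.linalg.lstsq(C, mu_t, rcond=None)
    w = 1.0 + lam * (w_hat - 1.0)
    return np.clip(w, 0.0, None)

def train_weighted_classifier(Xs, ys, w):
    """Stage two: fit a classifier on source samples re-weighted by w[y]."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(Xs, ys, sample_weight=w[ys])
    return clf
```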

Optimal transport for conditional domain matching and label shift

TLDR
This work theoretically shows that, for good generalization, it is necessary to learn a latent representation in which both the marginal and class-conditional distributions are aligned across domains, and it proposes a method that minimizes an importance-weighted loss in the source domain together with a Wasserstein distance between the weighted marginals.
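
Read literally, the objective described in this TLDR combines an importance-weighted source loss with a Wasserstein term between weighted marginals in the learned latent space. A schematic form (our notation, not necessarily the paper's) is:

```latex
% Schematic objective for the TLDR above (our notation): g is the latent
% representation, h the classifier, w_y ~ p_T(y)/p_S(y) the class importance
% weights, and W_1 a Wasserstein distance in latent space.
\min_{g,\,h}\;
  \mathbb{E}_{(x,y)\sim p_S}\big[\, w_y\,\ell\big(h(g(x)),\, y\big)\big]
  \;+\; \lambda\, W_1\!\Big(\sum_{y} w_y\, p_S(y)\, p_S\big(g(x)\mid y\big),\; p_T\big(g(x)\big)\Big)
```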

Label-Noise Robust Domain Adaptation

TLDR
This paper is the first to comprehensively investigate how label noise could adversely affect existing domain adaptation methods in various scenarios and theoretically prove that there exists a method that can essentially reduce the side-effect of noisy source labels in domain adaptation.

A Review of Domain Adaptation without Target Labels

  • Wouter M. Kouw, M. Loog
  • Computer Science
    IEEE Transactions on Pattern Analysis and Machine Intelligence
  • 2021
Domain adaptation has become a prominent problem setting in machine learning and related fields. This review asks the question: How can a classifier learn from a source domain and generalize to a target domain?

Distributionally Robust Learning for Unsupervised Domain Adaptation

TLDR
A distributionally robust learning method for unsupervised domain adaptation (UDA) is presented that scales to modern computer vision benchmarks, and it is demonstrated that the proposed DRST approach captures shape features more effectively and reduces the extent of distributional shift during self-training.

Coping with Label Shift via Distributionally Robust Optimisation

TLDR
This paper proposes a model that minimises an objective based on distributionally robust optimisation (DRO), designs and analyses a gradient descent-proximal mirror ascent algorithm tailored for large-scale problems to optimise the proposed objective, and establishes its convergence.
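
The DRO objective described here can be written schematically as a min-max problem in which an adversary re-weights the label proportions within a divergence ball around the empirical source distribution (our notation; the paper's exact formulation may differ):

```latex
% theta: model parameters; \hat{p}: empirical source label distribution;
% \Delta_K: probability simplex over K classes; D: a divergence with radius r.
\min_{\theta}\;\max_{\,q \in \Delta_K,\; D(q \,\|\, \hat{p}) \le r}\;
  \sum_{y=1}^{K} q_y\, \mathbb{E}_{x \sim p(\cdot \mid y)}\big[\ell\big(f_\theta(x),\, y\big)\big]
```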

Domain Adaptation under Open Set Label Shift

We introduce the problem of domain adaptation under Open Set Label Shift (OSLS), where the label distribution can change arbitrarily and a new class may arrive during deployment, but the class-conditional distributions p(x|y) are domain-invariant.

Deep Distributionally Robust Learning for Calibrated Uncertainties under Domain Shift

TLDR
The framework is demonstrated to generate calibrated uncertainties that benefit many downstream tasks, including unsupervised domain adaptation (UDA) and semi-supervised learning (SSL) where methods such as self-training and FixMatch use uncertainties to select confident pseudo-labels.

Adapting to Online Label Shift with Provable Guarantees

TLDR
This paper formulates and investigates the problem of online label shift (OLaS): the learner trains an initial model on labeled data and then deploys it in an unlabeled online environment where the underlying label distribution changes over time but the label-conditional density does not.

LTF: A Label Transformation Framework for Correcting Target Shift

TLDR
An end-to-end Label Transformation Framework (LTF) for correcting target shift is proposed, which implicitly models the shift of P(Y) and the conditional distribution P(X|Y) using neural networks, can handle continuous, discrete, and even multidimensional labels in a unified way, and is scalable to large data.

A Label Proportions Estimation Technique for Adversarial Domain Adaptation in Text Classification

TLDR
This study focuses on unsupervised domain adaptation for text classification under label shift and introduces a domain adversarial network with label proportions estimation (DAN-LPE) framework.
...

References

Showing 1-10 of 48 references

A theory of learning from different domains

TLDR
A classifier-induced divergence measure is introduced that can be estimated from finite, unlabeled samples from the domains, and it is shown how to choose the optimal combination of source and target error as a function of the divergence, the sample sizes of both domains, and the complexity of the hypothesis class.
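
The bound underlying this line of work is typically stated as follows (schematic form; notation ours):

```latex
% For a hypothesis h in class H: target error is bounded by source error, the
% (classifier-induced) H-Delta-H divergence between the domains, and the error
% of the best joint hypothesis, lambda*.
\varepsilon_T(h) \;\le\; \varepsilon_S(h)
  \;+\; \tfrac{1}{2}\, d_{H \Delta H}\big(\mathcal{D}_S, \mathcal{D}_T\big)
  \;+\; \lambda^{*},
\qquad
\lambda^{*} \;=\; \min_{h' \in H}\;\big[\varepsilon_S(h') + \varepsilon_T(h')\big]
```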

Domain Adaptation under Target and Conditional Shift

TLDR
This work considers domain adaptation under three possible scenarios, uses kernel embeddings of conditional as well as marginal distributions, and proposes to estimate the weights or transformations by reweighting or transforming training data so as to reproduce the covariate distribution on the test domain.

Maximum Mean Discrepancy for Class Ratio Estimation: Convergence Bounds and Kernel Selection

TLDR
This paper investigates the use of the maximum mean discrepancy (MMD) in a reproducing kernel Hilbert space (RKHS) for estimating class ratios in an unlabeled instance collection, and proposes a novel convex formulation that automatically learns the kernel to be employed in the MMD-based estimation.
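
The MMD-based class-ratio estimator described here can be summarized by a single convex program (our notation): find simplex weights whose mixture of per-class kernel mean embeddings best matches the embedding of the unlabeled collection.

```latex
% \hat{\mu}_y: empirical kernel mean embedding of class y (from labeled data);
% \hat{\mu}_U: embedding of the unlabeled collection; \Delta_K: the simplex.
\hat{\theta} \;=\; \arg\min_{\theta \in \Delta_K}\;
  \Big\| \sum_{y=1}^{K} \theta_y\, \hat{\mu}_y \;-\; \hat{\mu}_U \Big\|_{\mathcal{H}}^{2}
```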

Class Proportion Estimation with Application to Multiclass Anomaly Rejection

TLDR
This work addresses two classification problems that fall under the heading of domain adaptation, wherein the distributions of training and testing examples differ, and designs a classifier that has the option of assigning a "reject" label, indicating that the instance did not arise from a class present in the training data.

Domain adaptation and sample bias correction theory and algorithm for regression

Detecting and Correcting for Label Shift with Black Box Predictors

TLDR
Black Box Shift Estimation (BBSE) is proposed to estimate the test-set label distribution p(y), and it is proved that BBSE works even when predictors are biased, inaccurate, or uncalibrated, so long as their confusion matrices are invertible.
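
At its core, BBSE solves the same linear system as the sketch near the top of this page: with Ĉ the black-box predictor's confusion matrix on labeled source data and μ̂_ŷ the distribution of its predictions on unlabeled target data (our notation),

```latex
% Invertibility of \hat{C} is the key requirement mentioned in the TLDR above.
\hat{w} \;=\; \hat{C}^{-1}\, \hat{\mu}_{\hat{y}},
\qquad
\hat{w}_y \;\approx\; \frac{p_{\mathrm{target}}(y)}{p_{\mathrm{source}}(y)}
```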

Robust Classification Under Sample Selection Bias

TLDR
This work develops a framework for learning a robust bias-aware (RBA) probabilistic classifier that adapts to different sample selection biases using a minimax estimation formulation and demonstrates the behavior and effectiveness of the approach on binary classification tasks.

Classification with Asymmetric Label Noise: Consistency and Maximal Denoising

TLDR
This work gives conditions that are necessary and sufficient for the true class-conditional distributions to be identifiable, and argues that this pair corresponds in a certain sense to maximal denoising of the observed distributions.

Active Learning for Cost-Sensitive Classification

TLDR
It is proved that COAL can be efficiently implemented for any regression family that admits squared loss optimization; it also enjoys strong guarantees with respect to predictive performance and labeling effort.

Semi-Supervised Novelty Detection

TLDR
It is argued that novelty detection in this semi-supervised setting is naturally solved by a reduction to a binary classification problem, which also provides a general solution to the two-sample problem, that is, the problem of determining whether two random samples arise from the same distribution.