Class Prior Estimation under Covariate Shift - no Problem?

@article{Tasche2022ClassPE,
  title={Class Prior Estimation under Covariate Shift - no Problem?},
  author={Dirk Tasche},
  journal={ArXiv},
  year={2022},
  volume={abs/2206.02449}
}
  • Dirk Tasche
  • Published 6 June 2022
  • Computer Science, Mathematics
  • ArXiv
We show that in the context of classification the property of source and target distributions to be related by covariate shift may break down when the information content captured in the covariates is reduced, for instance by discretization of the covariates, dropping some of them, or by any transformation of the covariates even if it is domain-invariant. The consequences of this observation for class prior estimation under covariate shift are discussed. A probing algorithm as an alternative…
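
To make the abstract's claim concrete, here is a minimal numerical sketch (my illustration, not an example from the paper): the posterior P(Y=1 | X1, X2) is identical in source and target, so covariate shift holds for the full covariate vector, yet after dropping X2 the posteriors given X1 alone disagree, so covariate shift fails for the reduced covariate.

# Shared posterior P(Y=1 | x1, x2): identical in source and target,
# so covariate shift holds for the full covariate vector (X1, X2).
post = {(0, 0): 0.1, (0, 1): 0.8, (1, 0): 0.3, (1, 1): 0.9}

# The conditional law of X2 given X1 differs across domains; this is
# allowed under covariate shift, which constrains P(Y | X) but not P(X).
p_x2_given_x1 = {
    "source": {0: 0.5, 1: 0.5},  # X2 ~ Bernoulli(0.5) regardless of X1
    "target": {0: 0.9, 1: 0.1},  # X2 depends strongly on X1 in the target
}

for domain, p_x2 in p_x2_given_x1.items():
    for x1 in (0, 1):
        p2 = p_x2[x1]
        # P(Y=1 | X1=x1) after marginalising out the dropped covariate X2
        p_y = (1 - p2) * post[(x1, 0)] + p2 * post[(x1, 1)]
        print(f"{domain}: P(Y=1 | X1={x1}) = {p_y:.3f}")

The printed posteriors given X1 alone differ between source and target (e.g. 0.45 versus 0.73 for X1=0), so the reduced covariate no longer satisfies the covariate shift assumption.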
1 Citation

Factorizable Joint Shift in Multinomial Classification

  • Dirk Tasche
  • Computer Science, Mathematics
    Machine Learning and Knowledge Extraction
  • 2022
For the multinomial (multiclass) classification setting, a representation of factorizable joint shift is derived in terms of the source (training) distribution, the target (test) prior class probabilities and the target marginal distribution of the features.
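
For orientation, the factorizable joint shift (FJS) assumption underlying this line of work can be sketched as follows (a paraphrase from the FJS literature, not the cited paper's exact statement): the density ratio between the target and source joint distributions factorizes into a feature part and a label part,
\[
\frac{dP_T}{dP_S}(x, y) \;=\; f(x)\, g(y),
\]
so that, for classes $i = 1, \dots, k$,
\[
P_T(X \in dx,\, Y = i) \;=\; g(i)\, f(x)\, P_S(X \in dx,\, Y = i),
\]
subject to the normalisation $\sum_{i=1}^{k} g(i)\, \mathbb{E}_S\!\left[f(X)\, \mathbf{1}\{Y = i\}\right] = 1$. The cited paper's representation pins down $f$ and $g$ in terms of the source distribution, the target prior class probabilities $P_T(Y = i)$ and the target feature marginal.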

References

Showing 1-10 of 31 references

Probability theory - a comprehensive course

Convergence theorems are applied to the interpretation of Brownian motion and the law of the iterated logarithm, as well as to martingales and exchangeability.

Support and Invertibility in Domain-Invariant Representations

This work gives generalization bounds for unsupervised domain adaptation that hold for any representation function by acknowledging the cost of non-invertibility and proposes a bound based on measuring the extent to which the support of the source domain covers the target domain.

When Training and Test Sets Are Different: Characterizing Learning Transfer

This chapter contains sections titled: Introduction, Conditional and Generative Models, Real-Life Reasons for Dataset Shift, Simple Covariate Shift, Prior Probability Shift, Sample Selection Bias, Imbalanced Data, Domain Shift, Source Component Shift and Gaussian Process Methods.

A Generalized Neyman-Pearson Criterion for Optimal Domain Adaptation

This work studies a class of domain adaptation problems that generalizes both the covariate shift assumption and a model for feature-dependent label noise, and establishes optimal classification on the target domain despite not having access to labelled data from this domain.

The Importance of Calibration for Estimating Proportions from Annotations

This paper identifies and differentiates between two relevant data generating scenarios (intrinsic vs. extrinsic labels), introduces a simple but novel method which emphasizes the importance of calibration, and analyzes and experimentally validates the appropriateness of various methods for each of the two scenarios.
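
As a minimal sketch of why calibration matters here (my illustration, not the paper's method): if scores are calibrated, the class proportion can be estimated by averaging them, while a sharpened (miscalibrated) scorer with the same ranking gives a biased estimate.

import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Construct perfectly calibrated scores by design: draw s, then Y ~ Bernoulli(s),
# so P(Y=1 | s) = s and the true prevalence equals E[s].
s = rng.beta(2, 5, n)          # calibrated scores, prevalence = 2/7 ≈ 0.286
y = rng.random(n) < s

# A miscalibrated (sharpened) scorer that preserves the ranking of examples.
logit = np.log(s / (1 - s))
s_bad = 1 / (1 + np.exp(-2 * logit))

print("true prevalence        :", y.mean())
print("mean calibrated score  :", s.mean())          # ≈ true prevalence
print("mean sharpened score   :", s_bad.mean())      # biased
print("hard classify-and-count:", (s > 0.5).mean())  # also biased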

Minimising quantifier variance under prior probability shift

It is found that the asymptotic variance of the maximum likelihood estimator is a function of the Brier score for the regression of the class label against the features under the test data set distribution, which suggests that optimising the accuracy of a base classifier on the training data set helps to reduce the variance of the related quantifiers on the test data set.
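
In the binary case, the link between estimator variance and the Brier score can be sketched as follows (my reconstruction of a standard Fisher-information calculation, not necessarily the paper's exact statement). Under prior probability shift the test feature density is the mixture $p_\pi(x) = \pi f_1(x) + (1 - \pi) f_0(x)$ with invariant class-conditional densities; writing $\eta(x) = P(Y = 1 \mid X = x)$ for the test posterior,
\[
\frac{\partial}{\partial \pi} \log p_\pi(x) \;=\; \frac{f_1(x) - f_0(x)}{p_\pi(x)} \;=\; \frac{\eta(x) - \pi}{\pi(1 - \pi)},
\]
and since $\mathbb{E}[\eta(X)] = \pi$, the Fisher information is
\[
I(\pi) \;=\; \frac{\operatorname{Var}(\eta(X))}{\pi^2 (1 - \pi)^2} \;=\; \frac{\pi(1 - \pi) - B}{\pi^2 (1 - \pi)^2},
\]
where $B = \mathbb{E}\big[(Y - \eta(X))^2\big] = \pi(1 - \pi) - \operatorname{Var}(\eta(X))$ is the Brier score of the true posterior under the test distribution. The asymptotic variance of the maximum likelihood estimator is $1/(n\, I(\pi))$, so a smaller test-set Brier score directly lowers the quantifier's variance.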

Exact Fit of Simple Finite Mixture Models

It is pointed out that the maximum-likelihood (ML) approach to fitting the mixture distribution not only gives an optimum but even an exact fit if the mixture components are allowed to vary while their density ratio is kept fixed.

Domain Adaptation with Conditional Transferable Components

This paper aims to extract conditional transferable components whose conditional distribution is invariant after proper location-scale (LS) transformations, and simultaneously identifies how P(Y) changes between domains.

Calibrating sufficiently

The probing reduction approach is revisited; it is found to produce estimators of probabilistic classifiers with reduced grouping loss, and comonotonicity is identified as a useful criterion for sufficiency.

Domain Adaptation with Factorizable Joint Shift

This paper proposes a new assumption, Factorizable Joint Shift (FJS), to handle the co-existence of sampling bias in covariates and labels, and provides theoretical and empirical understandings about when FJS degenerates to prior assumptions and when it is necessary.