• Corpus ID: 15472933

Learning under Non-stationarity: Covariate Shift Adaptation by Importance Weighting
Masashi Sugiyama

@inproceedings{Sugiyama2013LearningUN,
  title={Learning under Non-stationarity: Covariate Shift Adaptation by Importance Weighting},
  author={Masashi Sugiyama},
  year={2013}
}
The goal of supervised learning is to estimate an underlying input-output function from input-output training samples so that output values at unseen test input points can be predicted. A common assumption in supervised learning is that the training input points follow the same probability distribution as the test input points. However, this assumption is violated, for example, when extrapolating beyond the training region. The situation where the training and test input points…
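A common remedy, and the subject of this paper, is importance weighting: each training loss is reweighted by the ratio w(x) = p_test(x) / p_train(x) so that averages over training samples approximate test-domain expectations. Below is a minimal sketch of importance-weighted least squares on a toy 1-D problem; the Gaussian densities, and hence the exact weights, are assumed known purely for illustration (in practice the ratio must be estimated):

    import numpy as np

    def iw_least_squares(Phi, y, w):
        """Minimize sum_i w_i * (y_i - phi(x_i) @ theta)^2 in closed form."""
        # Weighted normal equations: (Phi^T W Phi) theta = Phi^T W y
        A = Phi.T @ (w[:, None] * Phi)
        b = Phi.T @ (w * y)
        return np.linalg.solve(A, b)

    rng = np.random.default_rng(0)
    x_tr = rng.normal(1.0, 1.0, 200)                  # training inputs ~ p_train
    y_tr = np.sinc(x_tr) + 0.1 * rng.normal(size=200)

    # Toy Gaussian densities; the shifted test distribution is hypothetical.
    p_train = lambda x: np.exp(-0.5 * (x - 1.0) ** 2) / np.sqrt(2 * np.pi)
    p_test  = lambda x: np.exp(-0.5 * (x - 2.0) ** 2) / np.sqrt(2 * np.pi)
    w = p_test(x_tr) / p_train(x_tr)                  # importance weights

    Phi = np.column_stack([np.ones_like(x_tr), x_tr])  # linear basis
    theta = iw_least_squares(Phi, y_tr, w)
    print(theta)  # fit is biased toward the region where test inputs concentrate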


References

Showing 1-10 of 95 references
Covariate Shift Adaptation by Importance Weighted Cross Validation
TLDR: This paper proposes a new method called importance-weighted cross-validation (IWCV), whose unbiasedness even under covariate shift is proved, and shows that the IWCV procedure is the only one that can be applied for unbiased classification under covariate shift.
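The idea can be sketched directly: run ordinary k-fold cross-validation, but multiply each held-out loss by the importance weight of its input, so the score estimates the risk under the test distribution. A rough sketch for squared loss, where fit, predict, and the precomputed weights w are hypothetical placeholders rather than the paper's implementation:

    import numpy as np

    def iwcv_score(x, y, w, fit, predict, k=5, seed=0):
        """Importance-weighted k-fold CV estimate of the test-domain risk."""
        idx = np.random.default_rng(seed).permutation(len(x))
        folds = np.array_split(idx, k)
        losses = []
        for i in range(k):
            val = folds[i]
            trn = np.concatenate([folds[j] for j in range(k) if j != i])
            model = fit(x[trn], y[trn])
            # Weight each held-out squared loss by w(x) = p_test(x)/p_train(x).
            losses.append(np.mean(w[val] * (y[val] - predict(model, x[val])) ** 2))
        return float(np.mean(losses))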
Direct Density Ratio Estimation for Large-scale Covariate Shift Adaptation
TLDR: This work proposes a novel method that allows us to directly estimate the importance from samples without going through the hard task of density estimation, and demonstrates that the proposed method is computationally more efficient than existing approaches with comparable accuracy.
Input-dependent estimation of generalization error under covariate shift
TLDR: This paper proposes an alternative estimator of the generalization error for the squared loss when training and test distributions differ; the estimator is exactly unbiased for finite samples if the learning target function is realizable, and asymptotically unbiased in general.
Direct importance estimation for covariate shift adaptation
TLDR: This paper proposes a direct importance estimation method that does not involve density estimation and is equipped with a natural cross-validation procedure, so tuning parameters such as the kernel width can be objectively optimized.
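To convey the flavor of direct importance estimation, the following rough sketch fits the ratio w(x) = p_test(x) / p_train(x) as a Gaussian-kernel expansion by regularized least squares (an illustrative least-squares variant, not the exact algorithm of this paper); the kernel width sigma and regularizer lam are assumptions that would in practice be tuned, e.g. by the cross-validation procedure mentioned above:

    import numpy as np

    def fit_importance(x_tr, x_te, sigma=0.5, lam=1e-3, n_centers=100):
        """Fit w(x) ~ sum_l alpha_l * K(x, c_l) by regularized least squares (1-D inputs)."""
        centers = x_te[:n_centers]              # kernel centers placed on test samples
        K = lambda a, b: np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * sigma ** 2))
        Phi_tr, Phi_te = K(x_tr, centers), K(x_te, centers)
        H = Phi_tr.T @ Phi_tr / len(x_tr)       # empirical E_train[phi phi^T]
        h = Phi_te.mean(axis=0)                 # empirical E_test[phi]
        alpha = np.linalg.solve(H + lam * np.eye(len(centers)), h)
        return lambda x: np.maximum(K(x, centers) @ alpha, 0.0)  # ratios are nonnegative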
Machine Learning in Non-Stationary Environments - Introduction to Covariate Shift Adaptation
TLDR: This book focuses on a specific non-stationary environment known as covariate shift, in which the distributions of inputs (and hence outputs) change but the conditional distribution of outputs given inputs is unchanged, and presents machine learning theory, algorithms, and applications to overcome this type of non-stationarity.
Dataset Shift in Machine Learning
TLDR: This volume offers an overview of current efforts to deal with dataset and covariate shift, and places dataset shift in relationship to transfer learning, transduction, local learning, active learning, and semi-supervised learning.
Dependence Minimizing Regression with Model Selection for Non-Linear Causal Inference under Non-Gaussian Noise
TLDR: This paper proposes a novel causal inference algorithm called least-squares independence regression (LSIR), which learns the additive noise model through minimization of an estimator of the squared-loss mutual information between inputs and residuals.
Active Learning in Approximately Linear Regression Based on Conditional Expectation of Generalization Error
TLDR: This paper proposes a new active learning method, also based on weighted least-squares learning, and proves that the proposed active learning criterion is a more accurate predictor of the single-trial generalization error than the existing criterion.
Subspace Information Criterion for Model Selection
TLDR: A new criterion for model selection, the subspace information criterion (SIC), is proposed; SIC is a generalization of Mallows's C_L and gives an unbiased estimate of the generalization error of the learning target function.