Targeted Cross-Validation

@article{Zhang2021TargetedC,
  title={Targeted Cross-Validation},
  author={Jiawei Zhang and Jie Ding and Yuhong Yang},
  journal={ArXiv},
  year={2021},
  volume={abs/2109.06949}
}
In many applications, we have access to the complete dataset but are only interested in prediction over a particular region of the predictor variables. A standard approach is to find the globally best modeling method from a set of candidate methods. However, it is perhaps rare in reality that one candidate method is uniformly better than the others. A natural approach for this scenario is to apply a weighted L2 loss in performance assessment to reflect the region-specific interest. We propose a…
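As a minimal illustration of that weighted L2 idea (a sketch only, not the paper's TCV procedure: the helper weighted_l2_cv, the candidate set, and the weight function region_weight below are invented for this example), one can score candidate regression methods by K-fold cross-validation with squared prediction errors weighted toward the region of interest:

import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

def weighted_l2_cv(candidates, X, y, weight_fn, n_splits=5, seed=0):
    """Average weighted squared prediction error of each candidate over K folds."""
    scores = {name: 0.0 for name in candidates}
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train_idx, test_idx in kf.split(X):
        X_tr, X_te = X[train_idx], X[test_idx]
        y_tr, y_te = y[train_idx], y[test_idx]
        w = weight_fn(X_te)  # region-specific weights w(x) >= 0
        for name, make_model in candidates.items():
            model = make_model().fit(X_tr, y_tr)
            resid = y_te - model.predict(X_te)
            scores[name] += np.sum(w * resid**2) / np.sum(w)
    return {name: s / n_splits for name, s in scores.items()}

# Toy usage (hypothetical data and region): target the region x1 > 1, where the
# candidates may rank differently than they do globally.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.3 * rng.normal(size=500)
candidates = {
    "linear": LinearRegression,
    "forest": lambda: RandomForestRegressor(n_estimators=100, random_state=0),
}
region_weight = lambda Xq: (Xq[:, 0] > 1.0).astype(float) + 1e-8  # soft indicator of the target region
print(weighted_l2_cv(candidates, X, y, region_weight))

The weighting lets a method that is mediocre globally but accurate inside the target region win the comparison, which is precisely the situation the abstract describes when no candidate is uniformly best.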

References

Showing 1-10 of 36 references.
Cross-Validation With Confidence
Jing Lei. Journal of the American Statistical Association, 2019.
This work develops a novel, statistically principled inference tool based on cross-validation that takes into account the uncertainty in the testing sample and outputs a set of highly competitive candidate models containing the optimal one with guaranteed probability.
Linear Model Selection by Cross-validation
We consider the problem of selecting a model having the best predictive ability among a class of linear models. The popular leave-one-out cross-validation method, which is asymptotically…
Cross-validation for selecting a model selection procedure
Results are provided on how to apply CV to consistently choose the best method, yielding new insights and guidance for a potentially vast range of applications.
Consistency of Cross Validation for Comparing Regression Procedures
Theoretical developments on cross validation (CV) have mainly focused on selecting one among a list of finite-dimensional models (e.g., subset or order selection in linear regression) or selecting a…
Choice of V for V-Fold Cross-Validation in Least-Squares Density Estimation
A non-asymptotic oracle inequality is proved for V-fold cross-validation and its bias-corrected version (V-fold penalization), implying that V-fold penalization is asymptotically optimal in the nonparametric case.
Model Selection Via Multifold Cross Validation
In model selection, it is known that the simple leave-one-out cross-validation method is apt to select overfitted models. In an attempt to remedy this problem, we consider two notions of multi-fold…
The restricted consistency property of leave-nv-out cross-validation for high-dimensional variable selection
Cross-validation (CV) methods are popular for selecting the tuning parameter in the high-dimensional variable selection problem. We show that the misalignment of the CV is one possible reason for its…
Optimal cross-validation in density estimation with the $L^{2}$-loss
We analyze the performance of cross-validation (CV) in the density estimation framework with two purposes: (i) risk estimation and (ii) model selection. The main focus is given to the so-called…
A survey of cross-validation procedures for model selection
This survey relates the model selection performance of cross-validation procedures to the most recent advances in model selection theory, with a particular emphasis on distinguishing empirical statements from rigorous theoretical results.
Localized Model Selection for Regression
Research on model/procedure selection has focused on selecting a single model globally. In many applications, especially for high-dimensional or complex data, however, the relative performance of the…