# A comparative study of ordinary cross-validation, v-fold cross-validation and the repeated learning-testing methods

@article{Burman1989ACS, title={A comparative study of ordinary cross-validation, v-fold cross-validation and the repeated learning-testing methods}, author={Prabir Burman}, journal={Biometrika}, year={1989}, volume={76}, pages={503-514} }

Summary: The concepts of v-fold cross-validation and repeated learning-testing methods are introduced. In many problems these methods are computationally much less expensive than ordinary cross-validation and can be used in its place. A detailed comparative study of the three methods is carried out.
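As a rough illustration of the two schemes the paper studies (a sketch on synthetic data, not code from the paper), the following contrasts v-fold cross-validation with the repeated learning-testing method for a simple no-intercept linear model; the model, data, and fold counts are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=100)
y = 2.0 * X + rng.normal(size=100)  # true slope 2, unit noise variance

def fit(X, y):
    # Least-squares slope for a no-intercept linear model.
    return np.dot(X, y) / np.dot(X, X)

def v_fold_cv(X, y, v=10):
    # v-fold CV: partition the data into v folds; each fold
    # serves exactly once as the test set.
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, v)
    errs = []
    for f in folds:
        train = np.setdiff1d(idx, f)
        b = fit(X[train], y[train])
        errs.append(np.mean((y[f] - b * X[f]) ** 2))
    return np.mean(errs)

def repeated_learning_testing(X, y, n_repeats=10, test_frac=0.1):
    # Repeated learning-testing: draw independent random test sets,
    # so the same point may be tested more than once.
    errs = []
    for _ in range(n_repeats):
        idx = rng.permutation(len(X))
        n_test = int(test_frac * len(X))
        test, train = idx[:n_test], idx[n_test:]
        b = fit(X[train], y[train])
        errs.append(np.mean((y[test] - b * X[test]) ** 2))
    return np.mean(errs)
```

Both estimators require only v (or n_repeats) model fits, versus n fits for ordinary leave-one-out cross-validation, which is the computational saving the summary refers to.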


## 548 Citations

Some Issues in Cross-Validation

- Computer Science
- 1991

A new type of cross-validation is proposed here for model selection problems when the data are generated by a stationary process; it is an offshoot of both ordinary cross-validation and v-fold cross-validation.

Cross-validation as the objective function for variable-selection techniques

- Computer Science
- 2003

It is shown that the commonly applied leave-one-out cross-validation has a strong tendency to overfitting, underestimates the true prediction error, and should not be used without further constraints or further validation.

An assessment of ten-fold and Monte Carlo cross validations for time series forecasting

- Computer Science · 2013 10th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE)
- 2013

Experimental results, using time series of the NN3 tournament, found that Monte Carlo cross validation is more stable than ten-fold cross validation for selecting the best forecasting model.
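The Monte Carlo cross-validation compared in that study can be sketched as follows (an illustration under assumed details, not the authors' code): random train/test cut points are drawn repeatedly, with training always preceding testing so temporal order is respected; the toy AR(1) series and forecaster are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
# A toy AR(1) series: y_t = 0.8 * y_{t-1} + noise.
y = np.zeros(200)
for t in range(1, 200):
    y[t] = 0.8 * y[t - 1] + rng.normal()

def mc_cv_error(y, n_splits=20, min_train=50, horizon=10):
    # Monte Carlo CV for a forecaster: draw random cut points,
    # train on everything before the cut, test on the next `horizon` points.
    errs = []
    for _ in range(n_splits):
        cut = rng.integers(min_train, len(y) - horizon)
        train, test = y[:cut], y[cut:cut + horizon]
        # Fit an AR(1) coefficient by least squares on the training span.
        phi = np.dot(train[:-1], train[1:]) / np.dot(train[:-1], train[:-1])
        # One-step-ahead forecasts over the test span.
        prev = np.concatenate(([train[-1]], test[:-1]))
        errs.append(np.mean((test - phi * prev) ** 2))
    return np.mean(errs)
```

Averaging over many random cut points, rather than one fixed partition into ten folds, is what the cited experiments found to give a more stable model-selection criterion.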

Multiple predicting K-fold cross-validation for model selection

- Mathematics
- 2018

Abstract: K-fold cross-validation (CV) is widely adopted as a model selection criterion. In K-fold CV, K−1 folds are used for model construction and the hold-out fold is allocated to model validation.…

On the Use of K-Fold Cross-Validation to Choose Cutoff Values and Assess the Performance of Predictive Models in Stepwise Regression

- 2009

This paper addresses a methodological technique of leave-many-out cross-validation for choosing cutoff values in stepwise regression methods for simplifying the final regression model. A practical…

Estimation of prediction error by using K-fold cross-validation

- Computer Science · Stat. Comput.
- 2011

This paper investigates two families of estimators that connect the training error, which has a downward bias, with K-fold cross-validation, which has an upward bias.

On optimal data split for generalization estimation and model selection

- Computer Science · Neural Networks for Signal Processing IX: Proceedings of the 1999 IEEE Signal Processing Society Workshop (Cat. No.98TH8468)
- 1999

The theoretical basics of various cross-validation techniques are described, with the purpose of reliably estimating the generalization error and optimizing the model structure, illustrated by the problem of estimating a single location parameter.

A survey of cross-validation procedures for model selection

- Computer Science, Mathematics
- 2010

This survey intends to relate the model selection performances of cross-validation procedures to the most recent advances of model selection theory, with a particular emphasis on distinguishing empirical statements from rigorous theoretical results.

An empirical comparison of V-fold penalisation and cross-validation for model selection in distribution-free regression

- Computer Science, Mathematics · Pattern Analysis and Applications
- 2014

Cases in which VFCV and V-fold penalisation, respectively, provide poor estimates of the risk are highlighted, and a modified penalisation technique is introduced to reduce the estimation error.

Targeted Cross-Validation

- Computer Science, Mathematics · ArXiv
- 2021

This work proposes a targeted cross-validation (TCV) to select models or procedures based on a general weighted L2 loss, and shows that the TCV is consistent in selecting the best performing candidate under the weighted L2 loss.

## References

Showing 1–10 of 18 references.

An alternative method of cross-validation for the smoothing of density estimates

- Mathematics
- 1984

Cross-validation with Kullback-Leibler loss function has been applied to the choice of a smoothing parameter in the kernel method of density estimation. A framework for this problem is constructed…

The Predictive Sample Reuse Method with Applications

- Computer Science
- 1975

A recently devised method of prediction based on sample reuse techniques is presented; it is most useful in low-structure data paradigms that involve minimal assumptions.

Classification and Regression Trees

- Mathematics, Computer Science
- 1983

This chapter discusses tree classification in the context of medicine, where right Sized Trees and Honest Estimates are considered and Bayes Rules and Partitions are used as guides to optimal pruning.

Generalized $L$-, $M$-, and $R$-Statistics

- Mathematics
- 1984

Abstract: A class of statistics generalizing U-statistics and L-statistics, and containing other varieties of statistics as well, such as trimmed U-statistics, is studied. Using the differentiable…

Estimating Optimal Transformations for Multiple Regression and Correlation.

- Mathematics
- 1985

Abstract: In regression analysis the response variable Y and the predictor variables X1, …, Xp are often replaced by functions θ(Y) and φ1(X1), …, φp(Xp). We discuss a procedure for estimating…

Optimal Bandwidth Selection in Nonparametric Regression Function Estimation

- Mathematics
- 1985

Kernel estimators of a multivariate regression function are considered, together with a bandwidth selection rule formulated in terms of cross-validation.

Jackknife Approximations to Bootstrap Estimates

- Mathematics
- 1984

Let Tn be an estimate of the form Tn = T(Fn), where Fn is the sample cdf of n iid observations and T is a locally quadratic functional defined on cdf's. Then the normalized jackknife estimates…

Approximation Theorems of Mathematical Statistics

- Mathematics
- 1980

Preliminary Tools and Foundations. The Basic Sample Statistics. Transformations of Given Statistics. Asymptotic Theory in Parametric Inference. U-Statistics. Von Mises Differentiable Statistical…

Estimation of optimal transformations using D-fold cross validation and repeated learning-testing methods

- Sankhya A 51, to appear
- 1989