Consistent cross-validatory model-selection for dependent data: hv-block cross-validation

@article{racine2000consistent,
  title={Consistent cross-validatory model-selection for dependent data: hv-block cross-validation},
  author={Jeffrey S. Racine},
  journal={Journal of Econometrics},
  year={2000}
}

  • J. S. Racine
  • Published 1 November 2000
  • Journal of Econometrics
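To make the paper's central idea concrete, here is a minimal sketch of hv-block split construction under a simple reading of the scheme: for each centre point, the validation block is the 2v+1 observations around it, and the h observations flanking that block on each side are removed from the training set. The function name and exact block conventions are assumptions for illustration, not the paper's code.

```python
# Hypothetical sketch of hv-block index construction for a series of length n.
# For each centre i, the validation block holds the 2v+1 points around i, and
# the h points flanking that block on each side are dropped from training.
def hv_block_splits(n, h, v):
    for i in range(n):
        val = set(range(max(0, i - v), min(n, i + v + 1)))
        gap = set(range(max(0, i - v - h), min(n, i + v + h + 1))) - val
        train = [t for t in range(n) if t not in val and t not in gap]
        yield train, sorted(val)
```

For example, with n=10, h=2, v=1 and centre i=5, the validation block is {4, 5, 6}, the removed gap is {2, 3, 7, 8}, and only {0, 1, 9} remain for training, which shows how dependence near the test block is excluded.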


Markov cross-validation for time series model evaluations
Generalised correlated cross-validation
This work proposes an extension to GCV in the context of correlated errors, which is motivated by a natural definition for residual degrees of freedom and a potential maximum likelihood framework for Gaussian random processes.
Generalized Cross-Validation for Correlated Data (GCVc)
An extension to GCV is proposed in the context of correlated errors that has important implications for the definition of residual degrees of freedom, even in the independent case, and for a potential maximum likelihood framework.
On the usefulness of cross-validation for directional forecast evaluation
Far Casting Cross-Validation
FCCV withholds correlated neighbors in every aspect of the cross-validation procedure and is a technique that stresses a fitted model’s ability to extrapolate rather than interpolate, which generally leads to better model selection in correlated datasets.
A Note on the Validity of Cross-Validation for Evaluating Time Series Prediction
It is shown that the particular setup in which time series forecasting is usually performed using machine learning methods renders the use of standard K-fold CV possible, and it is demonstrated empirically that K-fold CV performs favourably compared to both OOS evaluation and other time-series-specific techniques such as non-dependent cross-validation.
A comparison of machine learning model validation schemes for non-stationary time series data
A study design that perturbs global stationarity by introducing a slow evolution of the underlying data-generating process is introduced and the practical significance in a replication study of a statistical arbitrage problem is demonstrated.
hv-Block Cross Validation is not a BIBD: a Note on the Paper by Jeff Racine (2000)
This note demonstrates that this is not the case, so the theoretical consistency of hv-block cross-validation remains an open question; a Python program counting the number of occurrences of each sample and each pair of samples is provided.
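The note's counting idea can be sketched as follows: a balanced incomplete block design requires every sample, and every pair of samples, to occur equally often across the blocks, so tallying both counts exposes any imbalance. This is an illustrative re-creation, not the note's actual program.

```python
# Hypothetical sketch of the note's counting idea: tally how often each
# sample, and each pair of samples, occurs across a collection of blocks.
from collections import Counter
from itertools import combinations

def count_occurrences(blocks):
    singles, pairs = Counter(), Counter()
    for block in blocks:
        singles.update(block)                          # per-sample counts
        pairs.update(combinations(sorted(block), 2))   # per-pair counts
    return singles, pairs
```

A design is balanced only if all single counts are equal and all pair counts are equal; for blocks [0,1,2], [1,2,3], [0,2,3], sample 2 appears three times while sample 0 appears twice, so this collection is not a BIBD.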


Feasible Cross-Validatory Model Selection for General Stationary Processes
It is shown that the h-block cross-validation function for least-squares based estimators can be expressed in a form which enormously reduces the amount of calculation required.
Linear Model Selection by Cross-validation
Abstract We consider the problem of selecting a model having the best predictive ability among a class of linear models. The popular leave-one-out cross-validation method, which is asymptotically equivalent to many other model selection methods, is asymptotically inconsistent in the sense that the probability of selecting the model with the best predictive ability does not converge to 1.
A cross-validatory method for dependent data
The technique of cross-validation is extended to the case where observations form a general stationary sequence; it is proposed to take h to be a fixed fraction of the sample size and to reduce the training set by removing the h observations preceding and following the observation in the test set.
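The removal scheme just described can be sketched directly: validate on a single observation while dropping it and its h neighbours on each side from the training set. The function name is an assumption for illustration.

```python
# A minimal sketch of the h-block scheme: hold out one observation for
# validation and also drop the h observations on either side of it.
def h_block_splits(n, h):
    for i in range(n):
        removed = set(range(max(0, i - h), min(n, i + h + 1)))
        train = [t for t in range(n) if t not in removed]
        yield train, i
```

With n=8 and h=2, validating on observation 3 removes observations 1 through 5, leaving {0, 6, 7} as the training set.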
It is argued that cross-validation works, unaltered, in this more general setting where the observations have martingale-like structure, and an estimate of the one-step prediction function of this process is selected from a collection of splines by minimizing the cross-validatory version of the prediction error.
Model Selection Via Multifold Cross Validation
Two multifold cross-validation criteria (MCV and MCV*) are considered, and it turns out that MCV indeed reduces the chance of overfitting.
A new look at the statistical model identification
The history of the development of statistical hypothesis testing in time series analysis is reviewed briefly, and it is pointed out that the hypothesis testing procedure is not adequately defined as the procedure for statistical model identification.
Bootstrap Model Selection
Abstract In a regression problem, typically there are p explanatory variables possibly related to a response variable, and we wish to select a subset of the p explanatory variables to fit a model.
A comparative study of ordinary cross-validation, v-fold cross-validation and the repeated learning-testing methods
Concepts of v-fold cross-validation and repeated learning-testing methods are introduced; these methods are computationally much less expensive than ordinary cross-validation and can be used in its place in many problems.
The Relationship Between Variable Selection and Data Augmentation and a Method for Prediction
It is shown that data augmentation provides a rather general formulation for the study of biased prediction techniques using multiple linear regression, and a way to obtain predictors given a credible criterion of good prediction is proposed.