Corpus ID: 15589295

A Note on the Validity of Cross-Validation for Evaluating Time Series Prediction

C. Bergmeir, Rob J. Hyndman, Bonsoo Koo
One of the most widely used standard procedures for model evaluation in classification and regression is K-fold cross-validation (CV). Furthermore, we present a simulation study where we show empirically that K-fold CV performs favourably compared to both OOS evaluation and other time-series-specific techniques such as non-dependent cross-validation.
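The two evaluation schemes the abstract contrasts can be sketched in a few lines. This is a minimal illustration, not code from the paper: the AR(1) data-generating process, the least-squares fit, and all parameter values are assumptions chosen for demonstration. The point is only the mechanics: OOS evaluation holds out a contiguous final segment, while K-fold CV assigns observations to folds at random, which the paper argues remains valid for purely autoregressive models with uncorrelated errors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stationary AR(1) series: y_t = 0.5 * y_{t-1} + e_t.
n = 200
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + rng.normal()

# One-step-ahead design: predict y[t] from y[t-1].
X, target = y[:-1], y[1:]

def fit_predict(X_tr, y_tr, X_te):
    """Least-squares AR(1) coefficient, then one-step predictions."""
    phi = np.dot(X_tr, y_tr) / np.dot(X_tr, X_tr)
    return phi * X_te

# (a) Out-of-sample (OOS) evaluation: train on the first 80%, test on the rest.
split = int(0.8 * len(X))
oos_mse = float(np.mean(
    (target[split:] - fit_predict(X[:split], target[:split], X[split:])) ** 2))

# (b) K-fold CV: random folds, ignoring temporal order.
K = 5
idx = rng.permutation(len(X))
cv_errs = []
for fold in np.array_split(idx, K):
    mask = np.ones(len(X), bool)
    mask[fold] = False  # hold out this fold
    pred = fit_predict(X[mask], target[mask], X[fold])
    cv_errs.append(np.mean((target[fold] - pred) ** 2))
cv_mse = float(np.mean(cv_errs))

print(f"OOS MSE: {oos_mse:.3f}, 5-fold CV MSE: {cv_mse:.3f}")
```

With a unit innovation variance, both estimates should land near 1; in the paper's simulations the CV estimate tends to use the data more efficiently than a single OOS split.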


A Comparative Study of Performance Estimation Methods for Time Series Forecasting

Empirical experiments suggest that cross-validation approaches can be applied to stationary synthetic time series, but that the most accurate estimates are produced by out-of-sample methods, which preserve the temporal order of observations.

Markov cross-validation for time series model evaluations

Optimal Out-of-Sample Forecast Evaluation under Stationarity

It is common practice to split a time series into in-sample and pseudo out-of-sample segments and to estimate the out-of-sample loss of a given statistical model by evaluating its forecasting performance on the pseudo out-of-sample segment.
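The in-sample / pseudo out-of-sample scheme described above is often run in rolling-origin form: at each origin the model is refit on all data up to that point and scored on the next observation. The sketch below is an illustration under assumed inputs (a synthetic stationary series and a historical-mean "model"), not any specific paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(loc=2.0, size=120)  # illustrative stationary series

def rolling_origin_loss(y, first_origin=60):
    """Pseudo out-of-sample loss: at each origin t, 'fit' on y[:t] and
    forecast y[t]. The model here is the historical mean (illustrative)."""
    errs = []
    for t in range(first_origin, len(y)):
        forecast = y[:t].mean()  # fit on the in-sample segment only
        errs.append((y[t] - forecast) ** 2)
    return float(np.mean(errs))

loss = rolling_origin_loss(y)
print(f"rolling-origin MSE: {loss:.3f}")
```

Because the training window only ever extends forward, the temporal order of observations is preserved at every evaluation step.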

Machine learning for time series forecasting - a simulation study

Assessment of popular machine learning algorithms for time series prediction tasks reveals that advanced machine learning models are capable of approximating the optimal forecast very closely in the base case, with nonlinear models in the lead across all DGPs, particularly the MLP.

An Evaluation of Equity Premium Prediction Using Multiple Kernel Learning with Financial Features

A forecasting procedure based on multivariate dynamic kernels is introduced to re-examine, under a non-linear kernel-methods framework, the experimental tests reported by Welch and Goyal, which showed that several variables proposed in the finance literature are of no use as exogenous information for predicting the equity premium under linear regressions.


This paper exploits statistical learning tools, namely group regularisation and cross-validation, to provide a robust framework for constructing discrete-time mortality models by automatically selecting the functions that best describe and forecast particular data sets.

Granger Causality Testing in High-Dimensional VARs: a Post-Double-Selection Procedure

An LM test for Granger causality in high-dimensional VAR models based on penalized least squares estimations is developed and a post-double-selection procedure is proposed to partial out the effects of the variables not of interest.

Predictive and Structural Analysis for High-Dimensional Vector

A new regularization model is proposed that can estimate high-dimensional VARs and is shown to produce credible impulse responses suitable for structural analysis.



On the use of cross-validation for time series predictor evaluation

On the usefulness of cross-validation for directional forecast evaluation

Cross Validation of Prediction Models for Seasonal Time Series by Parametric Bootstrapping

Out-of-sample prediction for the final portion of a sample is a popular tool for model selection in model-based forecasting. We suggest adding a simulation step to this exercise, in which pseudo-samples are generated by parametric bootstrapping.
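A parametric-bootstrap pseudo-sample step can be sketched as follows: fit a parametric model to the observed series, then simulate new series from the fitted model. This is a generic illustration under an assumed AR(1) model and least-squares fit, not the cited paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

# Observed series (illustrative DGP): AR(1) with phi = 0.6.
n = 300
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.6 * y[t - 1] + rng.normal()

# Fit the parametric model by least squares; estimate the residual scale.
phi_hat = float(np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1]))
sigma_hat = float((y[1:] - phi_hat * y[:-1]).std())

def pseudo_sample(length, phi, sigma, rng):
    """Draw one parametric-bootstrap pseudo-sample from the fitted AR(1)."""
    z = np.zeros(length)
    for t in range(1, length):
        z[t] = phi * z[t - 1] + sigma * rng.normal()
    return z

pseudo = [pseudo_sample(n, phi_hat, sigma_hat, rng) for _ in range(5)]
print(f"phi_hat = {phi_hat:.2f}, generated {len(pseudo)} pseudo-samples")
```

Each pseudo-sample can then be put through the same out-of-sample model-selection exercise as the original data, giving a distribution of selection outcomes rather than a single one.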


It is argued that cross-validation works, unaltered, in this more general setting where the observations have a martingale-like structure, and an estimate of the one-step prediction function of the process is selected from a collection of splines by minimizing the cross-validatory version of the prediction error.

A cross-validatory method for dependent data

The technique of cross-validation is extended to the case where the observations form a general stationary sequence: the training set is reduced by removing the h observations preceding and following each observation in the test set, with h taken to be a fixed fraction of the sample size.
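The deletion rule described above (often called h-block cross-validation) is easy to state as an index computation. This is a minimal sketch of the blocking logic only; the function name and parameters are illustrative, not from the cited paper.

```python
import numpy as np

def h_block_indices(n, test_idx, h):
    """Training indices for leave-one-out CV on dependent data: drop the
    test point plus the h observations immediately before and after it,
    so that serial dependence cannot leak between training and test sets."""
    keep = np.ones(n, bool)
    keep[max(0, test_idx - h): test_idx + h + 1] = False
    return np.flatnonzero(keep)

# With n = 10, h = 2 and test point 5, observations 3..7 are excluded.
tr = h_block_indices(10, 5, 2)
print(tr.tolist())  # [0, 1, 2, 8, 9]
```

Taking h as a fixed fraction of the sample size, as the paper proposes, means the excluded buffer grows with n, which is what makes the estimator consistent under dependence.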

Density-Preserving Sampling: Robust and Efficient Alternative to Cross-Validation for Error Estimation

  • M. Budka, B. Gabrys
  • IEEE Transactions on Neural Networks and Learning Systems
  • 2013
The correntropy-inspired density-preserving sampling (DPS) procedure is derived and its usability and performance is investigated using a set of public benchmark datasets and standard classifiers.

Study on the Impact of Partition-Induced Dataset Shift on $k$-Fold Cross-Validation

From the experimental results obtained, it is concluded that the degree of partition-induced covariate shift depends on the cross-validation scheme considered; worse schemes may harm the correctness of a single-classifier performance estimate and increase the number of cross-validation repetitions needed to reach a stable estimate.
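Partition-induced covariate shift can be made concrete with a crude proxy: the gap between each fold's feature mean and the overall mean. The snippet below is an illustration under assumed data and an assumed shift measure (not the paper's metric), comparing a well-mixed random partition against a degenerate partition that follows the sorted order of the feature.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=1000)
k = 5

def max_fold_mean_shift(x, fold_ids, k):
    """Max absolute gap between a fold's mean and the overall mean --
    a crude proxy for partition-induced covariate shift."""
    return max(abs(x[fold_ids == f].mean() - x.mean()) for f in range(k))

# Well-mixed partition: fold ids assigned by a random permutation.
random_folds = rng.permutation(len(x)) % k
# Degenerate partition: folds follow the sorted order of x (quantile blocks).
sorted_folds = np.argsort(np.argsort(x)) * k // len(x)

s_random = max_fold_mean_shift(x, random_folds, k)
s_sorted = max_fold_mean_shift(x, sorted_folds, k)
print(f"random partition shift: {s_random:.3f}, sorted partition shift: {s_sorted:.3f}")
```

The sorted partition exhibits far larger shift: each fold sees a different slice of the feature distribution, which is exactly the failure mode the study attributes to poor cross-validation schemes.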

A survey of cross-validation procedures for model selection

This survey intends to relate the model selection performances of cross-validation procedures to the most recent advances of model selection theory, with a particular emphasis on distinguishing empirical statements from rigorous theoretical results.