# A Note on the Validity of Cross-Validation for Evaluating Time Series Prediction

```bibtex
@inproceedings{Bergmeir2015ANO,
  title={A Note on the Validity of Cross-Validation for Evaluating Time Series Prediction},
  author={C. Bergmeir and Rob J Hyndman and Bonsoo Koo},
  year={2015}
}
```

One of the most widely used standard procedures for model evaluation in classification and regression is K-fold cross-validation (CV). Furthermore, we present a simulation study where we show empirically that K-fold CV performs favourably compared to both OOS evaluation and other time-series-specific techniques such as non-dependent cross-validation.
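The comparison the abstract describes can be sketched in code. The following is a minimal, hypothetical illustration (function names, the AR(1) toy data, and the least-squares fit are my own, not the paper's setup): standard K-fold CV shuffles observations into folds ignoring time order, while OOS evaluation uses a single chronological train/test split.

```python
import numpy as np

def kfold_cv_mse(X, y, fit, predict, k=5, seed=0):
    """Standard K-fold CV: folds are drawn at random, ignoring time order."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    errs = []
    for test in np.array_split(idx, k):
        train = np.setdiff1d(idx, test)
        model = fit(X[train], y[train])
        errs.append(np.mean((predict(model, X[test]) - y[test]) ** 2))
    return float(np.mean(errs))

def oos_mse(X, y, fit, predict, train_frac=0.8):
    """Out-of-sample (OOS) evaluation: one chronological split,
    training on the past and testing on the future."""
    split = int(train_frac * len(y))
    model = fit(X[:split], y[:split])
    return float(np.mean((predict(model, X[split:]) - y[split:]) ** 2))

# Toy AR(1) series cast as a regression: y_t = phi * y_{t-1} + noise.
rng = np.random.default_rng(1)
n, phi = 300, 0.7
series = np.zeros(n)
for t in range(1, n):
    series[t] = phi * series[t - 1] + rng.normal()
X, y = series[:-1].reshape(-1, 1), series[1:]

# A purely autoregressive model fitted by least squares, matching the
# setting in which the paper argues K-fold CV remains valid.
fit = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
predict = lambda coef, X: X @ coef

print("K-fold CV MSE:", kfold_cv_mse(X, y, fit, predict))
print("OOS MSE:     ", oos_mse(X, y, fit, predict))
```

Both estimators target the same one-step-ahead squared error; the paper's point is that for purely autoregressive models the randomly shuffled folds do not bias the K-fold estimate.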

## 54 Citations

### Optimal Out-of-Sample Forecast Evaluation under Stationarity

- Mathematics, SSRN Electronic Journal
- 2021

It is common practice to split time-series into in-sample and pseudo out-of-sample segments and to estimate the out-of-sample loss of a given statistical model by evaluating forecasting performance…

### Machine learning for time series forecasting - a simulation study

- Computer Science
- 2018

Assessment of popular machine learning algorithms for time series prediction tasks reveals that advanced machine learning models can approximate the optimal forecast very closely in the base case, with nonlinear models, particularly the MLP, in the lead across all DGPs.

### An Evaluation of Equity Premium Prediction Using Multiple Kernel Learning with Financial Features

- Computer Science, Neural Processing Letters
- 2019

A forecasting procedure based on multivariate dynamic kernels is introduced to re-examine, under a non-linear kernel-methods framework, the experimental tests reported by Welch and Goyal, which showed that several variables proposed in the finance literature are of no use as exogenous information for predicting the equity premium under linear regressions.

### A GROUP REGULARISATION APPROACH FOR CONSTRUCTING GENERALISED AGE-PERIOD-COHORT MORTALITY PROJECTION MODELS

- Computer Science, ASTIN Bulletin
- 2021

This paper exploits statistical learning tools, namely group regularisation and cross-validation, to provide a robust framework to construct discrete-time mortality models by automatically selecting the most appropriate functions to best describe and forecast particular data sets.

### Granger Causality Testing in High-Dimensional VARs: a Post-Double-Selection Procedure

- Computer Science, Economics
- 2019

An LM test for Granger causality in high-dimensional VAR models, based on penalized least-squares estimation, is developed, and a post-double-selection procedure is proposed to partial out the effects of the variables not of interest.

### Predictive and Structural Analysis for High-Dimensional Vector

- Computer Science
- 2020

A new regularization model is proposed that is able to estimate high-dimensional VARs and is shown to produce credible impulse responses and to be suitable for structural analysis.

### Macroeconomic forecasting for Australia using a large number of predictors

- Economics, International Journal of Forecasting
- 2019

### Penalized Estimation of Panel Vector Autoregressive Models : A Lasso Approach

- Computer Science, Economics
- 2017

Simulation results point towards advantages of using the lasso for PVARs over OLS, standard lasso techniques, and Bayesian estimators in terms of mean squared error and forecast accuracy.

### POINT AND INTERVAL FORECASTS OF DEATH RATES USING NEURAL NETWORKS

- Computer Science, Economics, ASTIN Bulletin
- 2021

A convolutional neural network architecture for mortality rate forecasting is proposed and empirically compared, along with other NN models, to the Lee–Carter model; lower forecast errors are found to be achievable for many countries in the Human Mortality Database.

## References

Showing 1-10 of 26 references

### On the usefulness of cross-validation for directional forecast evaluation

- Computer Science, Comput. Stat. Data Anal.
- 2014

### Cross Validation of Prediction Models for Seasonal Time Series by Parametric Bootstrapping

- Computer Science
- 2016

Out-of-sample prediction for the final portion of a sample is a popular tool for model selection in model-based forecasting. We suggest adding a simulation step to this exercise, where pseudo-samples…

### DATA‐DEPENDENT ESTIMATION OF PREDICTION FUNCTIONS

- Computer Science
- 1992

It is argued that cross-validation works, unaltered, in the more general setting where the observations have a martingale-like structure; an estimate of the one-step prediction function of the process is selected from a collection of splines by minimizing the cross-validatory version of the prediction error.

### Consistent cross-validatory model-selection for dependent data: hv-block cross-validation

- Computer Science
- 2000

### A cross-validatory method for dependent data

- Computer Science
- 1994

The technique of cross-validation is extended to the case where observations form a general stationary sequence: the training set is reduced by removing the h observations preceding and following the observation in the test set, and taking h to be a fixed fraction of the sample size is proposed.
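The blocking scheme described above can be sketched as follows. This is an illustrative leave-one-out variant only (the function name, the AR(1) toy data, and the least-squares fit are my own assumptions, not the paper's implementation): for each test point t, the h observations on either side of t are dropped from the training set so that serial dependence does not leak across the split.

```python
import numpy as np

def hblock_cv_mse(X, y, fit, predict, h):
    """Leave-one-out CV for dependent data: when observation t is the
    test point, the h observations on either side of t are removed
    from the training set to reduce serial dependence."""
    n = len(y)
    errs = []
    for t in range(n):
        # Keep everything except the window [t - h, t + h].
        keep = np.r_[0:max(0, t - h), min(n, t + h + 1):n]
        model = fit(X[keep], y[keep])
        errs.append(float(predict(model, X[t:t + 1])[0] - y[t]) ** 2)
    return float(np.mean(errs))

# Toy AR(1) series cast as a one-lag regression problem.
rng = np.random.default_rng(0)
n, phi = 200, 0.7
series = np.zeros(n)
for t in range(1, n):
    series[t] = phi * series[t - 1] + rng.normal()
X, y = series[:-1].reshape(-1, 1), series[1:]

fit = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
predict = lambda coef, X: X @ coef

print("h-block CV MSE (h=5):", hblock_cv_mse(X, y, fit, predict, h=5))
```

Taking h as a fixed fraction of the sample size, as the paper proposes, would correspond to e.g. `h = int(0.1 * len(y))` here.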

### Measuring the prediction error. A comparison of cross-validation, bootstrap and covariance penalty methods

- Computer Science, Comput. Stat. Data Anal.
- 2010

### Density-Preserving Sampling: Robust and Efficient Alternative to Cross-Validation for Error Estimation

- Computer Science, IEEE Transactions on Neural Networks and Learning Systems
- 2013

The correntropy-inspired density-preserving sampling (DPS) procedure is derived, and its usability and performance are investigated using a set of public benchmark datasets and standard classifiers.

### Study on the Impact of Partition-Induced Dataset Shift on $k$-Fold Cross-Validation

- Computer Science, IEEE Transactions on Neural Networks and Learning Systems
- 2012

From the experimental results obtained, it is concluded that the degree of partition-induced covariate shift depends on the cross-validation scheme considered; worse schemes may harm the correctness of a single-classifier performance estimate and increase the number of repetitions of cross-validation needed to reach a stable estimate.

### Nonparametric Regression with Correlated Errors

- Mathematics
- 2001

Nonparametric regression techniques are often sensitive to the presence of correlation in the errors. The practical consequences of this sensitivity are explained, including the breakdown of several…