Corpus ID: 232478806

Model Selection for Time Series Forecasting: Empirical Analysis of Different Estimators

@article{Cerqueira2021ModelSF,
  title={Model Selection for Time Series Forecasting: Empirical Analysis of Different Estimators},
  author={V{\'i}tor Cerqueira and Lu{\'i}s Torgo and Carlos Soares},
  journal={ArXiv},
  year={2021},
  volume={abs/2104.00584}
}
Evaluating predictive models is a crucial task in predictive analytics. This process is especially challenging with time series data where the observations show temporal dependencies. Several studies have analysed how different performance estimation methods compare with each other for approximating the true loss incurred by a given forecasting model. However, these studies do not address how the estimators behave for model selection: the ability to select the best solution among a set of… 
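
To make the model-selection task concrete, the following is a minimal, hypothetical sketch (not the authors' code) of one common estimator, a temporal holdout: each candidate forecaster is scored on the final portion of the series, and the model with the lowest estimated loss is selected. The forecasters and data are toy stand-ins.

import numpy as np

def holdout_select(series, models, val_frac=0.2):
    """Score each candidate on the final val_frac of the series
    (temporal order preserved) and pick the lowest-MAE model."""
    split = int(len(series) * (1 - val_frac))
    train, val = series[:split], series[split:]
    errors = {name: np.mean(np.abs(f(train, len(val)) - val))
              for name, f in models.items()}
    return min(errors, key=errors.get), errors

# Two toy forecasters: repeat the last observation vs. the training mean.
models = {
    "naive": lambda tr, h: np.repeat(tr[-1], h),
    "mean":  lambda tr, h: np.repeat(tr.mean(), h),
}

y = np.cumsum(np.random.default_rng(0).normal(size=500))  # a random walk
best, errors = holdout_select(y, models)
print(best, errors)  # on a random walk, "naive" should usually win

Whether such an estimate ranks candidate models reliably is precisely the question the paper studies empirically.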

Citations

AutoForecast: Automatic Time-Series Forecasting Model Selection

A forecasting meta-learning approach called AutoForecast that enables quick inference of the best time-series forecasting model for an unseen dataset, learning both how forecasting models perform over the forecast horizon within a dataset and how similar tasks are across different datasets.
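
As a rough illustration of that meta-learning idea (purely hypothetical; AutoForecast's actual meta-features and learner are considerably richer), one could describe each dataset with a few summary statistics and reuse the best-performing model from the most similar previously benchmarked dataset:

import numpy as np

def meta_features(y):
    """Toy dataset descriptors: scale, volatility, lag-1 autocorrelation."""
    return np.array([y.std(), np.diff(y).std(),
                     np.corrcoef(y[:-1], y[1:])[0, 1]])

def select_by_similarity(new_series, past_features, past_best_models):
    """Recommend the model that won on the most similar past dataset
    (1-nearest-neighbour in meta-feature space)."""
    dists = np.linalg.norm(past_features - meta_features(new_series), axis=1)
    return past_best_models[int(np.argmin(dists))]

# Hypothetical experience from two previously benchmarked datasets.
past_features = np.array([[1.0, 0.2, 0.95],   # smooth, persistent series
                          [0.3, 0.4, 0.05]])  # noisy, weakly dependent series
past_best_models = ["arima", "naive"]

y_new = np.cumsum(np.random.default_rng(1).normal(size=300))
print(select_by_similarity(y_new, past_features, past_best_models))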

References

Showing 1-10 of 35 references

A survey of cross-validation procedures for model selection

This survey relates the model selection performance of cross-validation procedures to the most recent advances in model selection theory, with particular emphasis on distinguishing empirical statements from rigorous theoretical results.

Evaluating time series forecasting models: an empirical study on performance estimation methods

This paper presents an extensive empirical study which compares different performance estimation methods for time series forecasting tasks, including variants of cross-validation, out-of-sample (holdout), and prequential approaches.
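
For orientation, here is a hypothetical sketch of the three families of splitting schemes compared there; each generator yields (train, test) index arrays over n time-ordered observations:

import numpy as np

def holdout_split(n, test_frac=0.3):
    """Out-of-sample holdout: train on the past, test on the future."""
    cut = int(n * (1 - test_frac))
    yield np.arange(cut), np.arange(cut, n)

def kfold_splits(n, k=5, seed=0):
    """Standard K-fold CV: random folds that ignore temporal order."""
    idx = np.random.default_rng(seed).permutation(n)
    for fold in np.array_split(idx, k):
        yield np.setdiff1d(idx, fold), np.sort(fold)

def prequential_splits(n, n_blocks=5):
    """Prequential (growing window): test each block after training on
    all blocks that precede it."""
    blocks = np.array_split(np.arange(n), n_blocks)
    for i in range(1, n_blocks):
        yield np.concatenate(blocks[:i]), blocks[i]

for train_idx, test_idx in prequential_splits(10, n_blocks=5):
    print(train_idx, test_idx)

The key contrast: K-fold lets future observations leak into training, while holdout and prequential schemes respect temporal order.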

On the use of cross-validation for time series predictor evaluation

A note on the validity of cross-validation for evaluating autoregressive time series prediction

Comparison of statistical and machine learning methods for daily SKU demand forecasting

A comparison of the forecasting performance of various ML methods, trained in both a series-by-series and a cross-learning fashion, against that of statistical methods on a large set of real daily SKU demand data indicates that some ML methods do provide better forecasts, in terms of both accuracy and bias.
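
A minimal sketch of the two training regimes (toy data; the study itself uses real SKU demand series and richer models): series-by-series fits one autoregression per series, while cross-learning pools lagged samples from all series into a single model.

import numpy as np

def make_lagged(y, p=3):
    """Embed a series into (lag-vector, target) pairs for autoregression."""
    X = np.column_stack([y[i:len(y) - p + i] for i in range(p)])
    return X, y[p:]

def fit_ar(X, t):
    """Least-squares autoregression coefficients (with intercept)."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, t, rcond=None)
    return coef

rng = np.random.default_rng(1)
series = [np.cumsum(rng.normal(size=200)) for _ in range(10)]  # 10 toy SKUs

# Series-by-series: one model per SKU.
local_models = [fit_ar(*make_lagged(y)) for y in series]

# Cross-learning: one model trained on pooled samples from all SKUs.
Xs, ts = zip(*(make_lagged(y) for y in series))
global_model = fit_ar(np.vstack(Xs), np.concatenate(ts))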

Machine Learning vs Statistical Methods for Time Series Forecasting: Size Matters

Results obtained with a learning-curve method suggest that machine learning methods improve their relative predictive performance as the sample size grows.
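
That idea can be illustrated with a hypothetical experiment: fit the same autoregressive model on progressively larger training samples and track its out-of-sample error, which should shrink as estimation error falls.

import numpy as np

def ar_test_mae(y, n_train, p=5, n_test=200):
    """Fit an AR(p) model by least squares on the first n_train points,
    then report one-step-ahead MAE on a fixed test tail."""
    def lagged(z):
        X = np.column_stack([z[i:len(z) - p + i] for i in range(p)])
        return np.column_stack([np.ones(len(X)), X]), z[p:]

    X_tr, t_tr = lagged(y[:n_train])
    coef, *_ = np.linalg.lstsq(X_tr, t_tr, rcond=None)
    X_te, t_te = lagged(y[-(n_test + p):])
    return np.mean(np.abs(X_te @ coef - t_te))

e = np.random.default_rng(3).normal(size=5000)
y = np.convolve(e, [1.0, 0.8, 0.5, 0.3], mode="valid")  # an MA(3) process

for n in (50, 200, 1000, 4000):  # error should shrink as n grows
    print(n, round(float(ar_test_mae(y, n)), 3))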

Out-of-sample tests of forecasting accuracy: an analysis and review

Answers to Your Forecasting Questions

[Q] What is the difference in a demand planner's role when working for a distributor as opposed to working for a manufacturer? Also, from a job perspective, which one is better? [A] There is not…

On Cross-Validation for Predictor Evaluation in Time Series

In the context of the prediction error method for one-step-ahead prediction in a single time series, a conventional and two cross-validatory procedures are proposed for prediction of squared…