Tests of Conditional Predictive Ability

@inproceedings{Giacomini2003TestsOC,
  title={Tests of Conditional Predictive Ability},
  author={Raffaella Giacomini and Halbert L. White},
  year={2003}
}
We argue that the current framework for predictive ability testing (e.g., West, 1996) is not necessarily useful for real-time forecast selection, i.e., for assessing which of two competing forecasting methods will perform better in the future. We capture important determinants of forecast performance that are neglected in the existing literature by evaluating what we call the forecasting method (the model and the parameter estimation procedure), rather than just the forecasting model. Compared…
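To make the abstract's idea concrete, below is a minimal Python sketch (our illustration, not the authors' code) of the one-step-ahead version of the test: under the null, the loss differential d_{t+1} is unpredictable from current information, which is tested with the common instrument choice h_t = (1, d_t, ..., d_{t-lags+1}). The function name gw_test and the instrument choice are assumptions of this sketch.

import numpy as np
from scipy import stats

def gw_test(loss1, loss2, lags=1):
    # Sketch of a Giacomini-White test of equal conditional predictive
    # ability for one-step-ahead forecasts. Under H0, E[d_{t+1} | F_t] = 0
    # for the loss differential d; instruments are a constant plus the
    # `lags` most recent loss differentials (one common choice).
    d = np.asarray(loss1, dtype=float) - np.asarray(loss2, dtype=float)
    q = lags
    rows, targets = [], []
    for t in range(q - 1, len(d) - 1):
        rows.append(np.r_[1.0, d[t - q + 1:t + 1][::-1]])  # h_t
        targets.append(d[t + 1])                           # d_{t+1}
    z = np.array(rows) * np.array(targets)[:, None]        # Z_t = h_t * d_{t+1}
    m, k = z.shape
    zbar = z.mean(axis=0)
    # For one-step forecasts {Z_t} is a martingale difference sequence
    # under H0, so the sample covariance consistently estimates Omega.
    omega = (z - zbar).T @ (z - zbar) / m
    stat = m * zbar @ np.linalg.solve(omega, zbar)
    return stat, stats.chi2.sf(stat, df=k)

For example, gw_test(loss1, loss2) applied to the two methods' out-of-sample squared-error series returns the chi-squared statistic and its p-value; a small p-value rejects equal conditional predictive ability.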
Nonparametric Bootstrap Procedures for Predictive Inference Based on Recursive Estimation Schemes
Our objectives in this paper are twofold. First, we introduce block bootstrap techniques that are (first order) valid in recursive estimation frameworks. Thereafter, we present two examples where…
Can Two Forecasts Have the Same Conditional Expected Accuracy?
The method for testing equal predictive accuracy for pairs of forecasting models proposed by Giacomini and White (2006) has found widespread use in empirical work. The procedure assumes that the…
A Bunch of Models, a Bunch of Nulls and Inference About Predictive Ability
A simple methodology to test the null hypothesis of equal predictive ability between two families of forecasting methods is presented, and it is shown that comparing families of models using the usual approach, based on pairwise comparisons of the best ex-post performing models in each family, may lead to conclusions at odds with those suggested by the proposed approach.
Tests of Equal Forecast Accuracy for Overlapping Models
This paper examines the asymptotic and finite-sample properties of tests of equal forecast accuracy when the models being compared are overlapping in the sense of Vuong (1989). Two models are…
Conditional Superior Predictive Ability
This article proposes a test for the conditional superior predictive ability (CSPA) of a family of forecasting methods with respect to a benchmark. The test is functional in nature: under the null…
How Far Can We Forecast? Statistical Tests of the Predictive Content
Forecasts are useless whenever the forecast error variance fails to be smaller than the unconditional variance of the target variable. This paper develops tests for the null hypothesis that forecasts…
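The criterion stated in this entry reduces to comparing the mean squared forecast error against the unconditional variance of the target. A hedged sketch of a Diebold-Mariano-style check of that comparison (illustrative only, not the cited paper's exact statistic; the function name, the full-sample-mean benchmark, and the Bartlett-kernel lag choice are assumptions):

import numpy as np

def predictive_content_tstat(y, f):
    # Compare squared forecast errors with squared deviations of y from
    # its sample mean (a simplification of the unconditional benchmark).
    # A significantly negative mean of d indicates predictive content.
    y = np.asarray(y, dtype=float)
    d = (y - np.asarray(f, dtype=float)) ** 2 - (y - y.mean()) ** 2
    n = len(d)
    dc = d - d.mean()
    L = int(n ** (1 / 3))                 # truncation lag (assumption)
    lrv = dc @ dc / n                     # Newey-West long-run variance
    for k in range(1, L + 1):
        lrv += 2 * (1 - k / (L + 1)) * (dc[k:] @ dc[:-k]) / n
    return d.mean() / np.sqrt(lrv / n)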
Testing Conditional Predictive Ability
This chapter discusses extensions of out-of-sample predictive ability testing to environments in which the predictive ability of a forecast model could depend on observables or be time-varying. The…
Comparing Forecast Accuracy: A Monte Carlo Investigation
The size and power properties of several tests of equal Mean Square Prediction Error (MSPE) and of Forecast Encompassing (FE) are evaluated, using Monte Carlo simulations, in the context of dynamic…
…

References

Showing 1-10 of 57 references
Comparing Density Forecasts via Weighted Likelihood Ratio Tests
We propose a test for comparing the out-of-sample accuracy of competing density forecasts of a variable. The test is valid under general conditions: the data can be heterogeneous and the forecasts…
An Out of Sample Test for Granger Causality
Granger (1980) summarizes his personal viewpoint on testing for causality, and outlines what he considers to be a useful operational version of his original definition of causality (Granger (1969))…
Tests of Equal Forecast Accuracy and Encompassing for Nested Models
We examine the asymptotic and finite-sample properties of tests for equal forecast accuracy and encompassing applied to 1-step ahead forecasts from nested parametric models. We first derive the…
Out-of-Sample Tests for Granger Causality
Clive W.J. Granger has summarized his personal viewpoint on testing for causality in numerous articles over the past 30 years and has outlined what he considers to be a useful operational version of…
Estimation, Inference and Specification Analysis
The underlying motivation for maximum-likelihood estimation is explored, the interpretation of the MLE for misspecified probability models is treated, and the conditions under which parameters of interest can be consistently estimated despite misspecification are given.
Robust Out-of-Sample Inference
Finite-Sample Properties of Tests for Equal Forecast Accuracy
This study examines the small-sample properties of some commonly used tests of equal forecast accuracy. The paper considers the size and power of different tests and the performance of different…
Progressive Modeling of Macroeconomic Time Series: The LSE Methodology
Econometric models, large and small, have played an increasingly important role in macroeconomic forecasting and policy analysis. However, there is a wide range of model types used for this purpose…
…