Proper Scoring Rules for Evaluating Asymmetry in Density Forecasting

@article{Iacopini2020ProperSR,
  title={Proper Scoring Rules for Evaluating Asymmetry in Density Forecasting},
  author={Matteo Iacopini and Francesco Ravazzolo and Luca Rossini},
  journal={DecisionSciRN: Decision-Making \& Forecasting (Topic)},
  year={2020}
}
This paper proposes a novel asymmetric continuous probabilistic score (ACPS) for evaluating and comparing density forecasts. The score is then extended to a weighted version that emphasizes regions of interest, such as the tails or the center of a variable's range, and a test is introduced to statistically compare the predictive ability of different forecasts. The ACPS is of general use in any situation where the decision maker has asymmetric preferences in the evaluation of the…
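The idea of weighting regions of interest has a well-known precedent in the threshold-weighted CRPS of Gneiting and Ranjan (2011). Below is a minimal numerical sketch of that standard weighted score, shown only to illustrate the weighting idea; it is not the ACPS of this paper, and the function name `tw_crps`, the integration grid, and the tail threshold of 1.5 are illustrative assumptions.

```python
# A minimal numerical sketch of a *weighted* CRPS in the spirit of
# Gneiting and Ranjan (2011), shown to illustrate emphasizing a region
# of interest -- it is NOT the ACPS proposed in the paper above.
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import norm

def tw_crps(cdf, y, weight, lo=-10.0, hi=10.0, n=10001):
    """Threshold-weighted CRPS: integral of w(z) * (F(z) - 1{y <= z})^2 dz,
    approximated on a finite grid (the grid bounds are an assumption)."""
    z = np.linspace(lo, hi, n)
    integrand = weight(z) * (cdf(z) - (y <= z)) ** 2
    return trapezoid(integrand, z)

forecast = norm(loc=0.0, scale=1.0).cdf            # predictive CDF: N(0, 1)
right_tail = lambda z: (z >= 1.5).astype(float)    # emphasize the right tail

print(tw_crps(forecast, y=2.3, weight=right_tail))                 # tail-only score
print(tw_crps(forecast, y=2.3, weight=lambda z: np.ones_like(z)))  # plain CRPS
```

With the indicator weight, only forecast errors beyond the threshold contribute to the score, which is the sense in which such weights "emphasize the tails."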

On a Class of Objective Priors from Scoring Rules (with Discussion)

This paper takes a novel look at the construction of objective prior distributions, removing the connection with a chosen sampling model and producing a class of priors that can be employed in scenarios where the usual model-based priors fail, such as mixture models and model selection via Bayes factors.

References

Showing 1–10 of 49 references.

Density Forecasting

Evaluating probabilities: asymmetric scoring rules

Proper scoring rules are evaluation measures that reward accurate probabilities. Specific rules encountered in the literature and used in practice are invariably symmetric, in the sense that the…
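As a concrete illustration of asymmetry in evaluation, consider the pinball (quantile) loss, a textbook asymmetric scoring function that penalizes under- and over-prediction unequally. The sketch below is a standard example, not a rule defined in the reference above; the name `pinball_loss` is an illustrative label.

```python
# A minimal sketch of a classic asymmetric scoring function: the pinball
# (quantile) loss. A standard textbook example, not a rule from the
# reference above.
def pinball_loss(q_forecast: float, y: float, alpha: float) -> float:
    """Loss for a level-alpha quantile forecast; its expectation is
    minimized by the true alpha-quantile of the outcome distribution."""
    if y >= q_forecast:
        return alpha * (y - q_forecast)          # under-prediction branch
    return (1.0 - alpha) * (q_forecast - y)      # over-prediction branch

# With alpha = 0.9, under-predicting by one unit costs nine times as much
# as over-predicting by one unit.
print(pinball_loss(1.0, 2.0, alpha=0.9))   # 0.9
print(pinball_loss(2.0, 1.0, alpha=0.9))   # ~0.1
```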

Scoring Rules for Continuous Probability Distributions

Personal, or subjective, probabilities are used as inputs to many inferential and decision-making models, and various procedures have been developed for the elicitation of such probabilities.

Elicitation of Personal Probabilities and Expectations

Proper scoring rules, i.e., devices of a certain class for eliciting a person's probabilities and other expectations, are studied, mainly theoretically but with some speculations about…

Making and Evaluating Point Forecasts

Typically, point forecasting methods are compared and assessed by means of an error measure or scoring function, with the absolute error and the squared error being key examples. The individual…

Measure and Integration Theory, Volume 26 (2011)

Diverging Tests of Equal Predictive Ability

We investigate claims made in Giacomini and White (2006) and Diebold (2015) regarding the asymptotic normality of a test of equal predictive ability. A counterexample is provided in which, instead,…
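For context, the test under scrutiny here is of the Diebold–Mariano type. The sketch below implements a standard DM-style statistic with a Bartlett-kernel HAC variance; the lag choice and the name `dm_test` are illustrative assumptions, and the details differ from the corrections debated in the cited papers.

```python
# A minimal sketch of a Diebold-Mariano-style test of equal predictive
# ability -- the family of tests whose asymptotics the cited papers debate.
# Bartlett-kernel HAC variance and lag choice are illustrative assumptions.
import numpy as np
from scipy.stats import norm

def dm_test(loss_a, loss_b, max_lag=0):
    """DM statistic and two-sided normal p-value for H0: E[d_t] = 0,
    where d_t = loss_a[t] - loss_b[t]."""
    d = np.asarray(loss_a, dtype=float) - np.asarray(loss_b, dtype=float)
    n, d_bar = d.size, d.mean()
    lrv = np.mean((d - d_bar) ** 2)              # gamma_0
    for k in range(1, max_lag + 1):              # Bartlett-weighted autocovariances
        gamma_k = np.mean((d[k:] - d_bar) * (d[:-k] - d_bar))
        lrv += 2.0 * (1.0 - k / (max_lag + 1)) * gamma_k
    stat = d_bar / np.sqrt(lrv / n)
    return stat, 2.0 * norm.sf(abs(stat))

rng = np.random.default_rng(0)
e_a, e_b = rng.normal(0.0, 1.0, 500), rng.normal(0.0, 1.2, 500)
print(dm_test(e_a ** 2, e_b ** 2, max_lag=4))    # compare squared-error losses
```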

Comparing Forecast Performance with State Dependence

We propose a novel forecast comparison methodology to evaluate models’ relative forecasting performance when the latter is a state-dependent function of economic variables. In our benchmark case,…

Large Time-Varying Volatility Models for Electricity Prices

We study the importance of time-varying volatility in modelling hourly electricity prices when fundamental drivers are included in the estimation. This allows us to contribute to the literature of…

Nowcasting Tail Risks to Economic Activity with Many Indicators

This paper focuses on tail risk nowcasts of economic activity, measured by GDP growth, with a potentially wide array of monthly and weekly information. We consider different models (Bayesian mixed…