Assessing the Calibration of High-Dimensional Ensemble Forecasts Using Rank Histograms

@article{Thorarinsdottir2013AssessingTC,
  title={Assessing the Calibration of High-Dimensional Ensemble Forecasts Using Rank Histograms},
  author={Thordis Linda Thorarinsdottir and Michael Scheuerer and Christoph Heinz},
  journal={Journal of Computational and Graphical Statistics},
  year={2013},
  volume={25},
  pages={105--122}
}
Any decision-making process that relies on a probabilistic forecast of future events necessarily requires a calibrated forecast. This article proposes new methods for empirically assessing forecast calibration in a multivariate setting where the probabilistic forecast is given by an ensemble of equally probable forecast scenarios. Multivariate properties are mapped to a single dimension through a prerank function and the calibration is subsequently assessed visually through a histogram of the… 
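As a concrete illustration of the recipe described in the abstract, the sketch below (Python; names and the specific prerank are illustrative) applies a prerank function, here the average of the univariate ranks across dimensions, which is one of several options discussed in this line of work, and tabulates the observation's rank over many forecast cases, with ties broken at random.

import numpy as np

def average_rank_prerank(obs, ens):
    """Prerank function: average of the univariate ranks across dimensions.

    obs : (d,) observation vector; ens : (m, d) ensemble of m members.
    Returns one prerank value per pooled point (observation first)."""
    pool = np.vstack([obs[None, :], ens])                   # (m + 1, d)
    col_ranks = pool.argsort(axis=0).argsort(axis=0) + 1    # ranks 1..m+1 per column
    return col_ranks.mean(axis=1)

def multivariate_rank(obs, ens, prerank=average_rank_prerank, rng=None):
    """Rank of the observation's prerank among all pooled preranks (random ties)."""
    if rng is None:
        rng = np.random.default_rng()
    rho = prerank(obs, ens)
    below = np.sum(rho[1:] < rho[0])
    ties = np.sum(rho[1:] == rho[0])
    return 1 + below + rng.integers(0, ties + 1)

# Tabulate the ranks over many (here: simulated, calibrated) forecast cases;
# a flat histogram of the counts indicates multivariate calibration.
rng = np.random.default_rng(0)
m, d, n_cases = 10, 5, 500
ranks = [multivariate_rank(rng.normal(size=d), rng.normal(size=(m, d)), rng=rng)
         for _ in range(n_cases)]
print(np.bincount(ranks, minlength=m + 2)[1:])              # counts for ranks 1..m+1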

On assessing calibration of multivariate ensemble forecasts

A necessary condition for the optimal use of probability forecasts in general, and ensemble forecasts in particular, in decision making is that they should be calibrated. Ensemble calibration implies

On Evaluation of Ensemble Forecast Calibration Using the Concept of Data Depth

TLDR
To generate correct multivariate rank histograms using the concept of data depth, the datatype of the ensemble should be taken into account to define a proper preranking function.
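As an illustration of a depth-based prerank, the hypothetical helper below uses a pairs-based band depth (one of several depth conventions) and can be plugged into the multivariate_rank routine sketched earlier; a low depth for the observation signals that it is an outlier relative to the ensemble.

import numpy as np
from itertools import combinations

def band_depth_prerank(obs, ens):
    """Band-depth-style prerank: for each pooled point, the average (over
    dimensions) proportion of point pairs whose componentwise band contains it."""
    pool = np.vstack([obs[None, :], ens])      # (n, d) with n = m + 1
    n, _ = pool.shape
    depth = np.zeros(n)
    for i, j in combinations(range(n), 2):     # all unordered pairs of pooled points
        lo = np.minimum(pool[i], pool[j])
        hi = np.maximum(pool[i], pool[j])
        inside = (pool >= lo) & (pool <= hi)   # (n, d) band membership per dimension
        depth += inside.mean(axis=1)
    return depth                               # higher value = deeper point

# Usage: multivariate_rank(obs, ens, prerank=band_depth_prerank)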

Generation of Scenarios from Calibrated Ensemble Forecasts with a Dual-Ensemble Copula-Coupling Approach

TLDR
The concept of ECC is combined with past-data statistics to account for the autocorrelation of the forecast error, and the new approach is applied to wind forecasts from the high-resolution Consortium for Small-Scale Modeling (COSMO) ensemble.

Beyond univariate calibration: verifying spatial structure in ensembles of forecast fields

TLDR
A new diagnostic tool for spatially indexed ensemble forecast fields is proposed, based on a level-crossing statistic, the fraction of threshold exceedance (FTE), which projects a possibly high-dimensional multivariate quantity onto a univariate quantity that can be studied with standard tools such as verification rank histograms.
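A minimal sketch of an FTE-based prerank, assuming gridded fields and a fixed threshold; the threshold choice and any further refinements follow the cited paper, not this sketch.

import numpy as np

def fte(field, threshold):
    """Fraction of threshold exceedance: share of grid points above the threshold."""
    return np.mean(field > threshold)

def fte_rank(obs_field, ens_fields, threshold, rng=None):
    """Rank of the observed FTE among the ensemble members' FTEs (random ties),
    i.e. a univariate projection fed into a standard verification rank histogram."""
    if rng is None:
        rng = np.random.default_rng()
    obs_val = fte(obs_field, threshold)
    ens_vals = np.array([fte(f, threshold) for f in ens_fields])
    below = np.sum(ens_vals < obs_val)
    ties = np.sum(ens_vals == obs_val)
    return 1 + below + rng.integers(0, ties + 1)

# Example: a 10-member ensemble of 50 x 50 precipitation-like fields, 1 mm threshold.
rng = np.random.default_rng(1)
obs = rng.gamma(shape=0.8, scale=2.0, size=(50, 50))
ens = rng.gamma(shape=0.8, scale=2.0, size=(10, 50, 50))
print(fte_rank(obs, ens, threshold=1.0, rng=rng))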

Variogram-Based Proper Scoring Rules for Probabilistic Forecasts of Multivariate Quantities*

Proper scoring rules provide a theoretically principled framework for the quantitative assessment of the predictive performance of probabilistic forecasts. While a wide selection of such
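For reference, a minimal ensemble version of the variogram score of order p; uniform pairwise weights are used here for simplicity, whereas the cited work discusses weight and order choices.

import numpy as np

def variogram_score(ens, obs, p=0.5, weights=None):
    """Variogram score of order p for an m-member ensemble (lower is better).

    ens : (m, d) ensemble; obs : (d,) observation. Compares observed pairwise
    differences |y_i - y_j|**p with their ensemble-mean counterparts."""
    obs_vario = np.abs(obs[:, None] - obs[None, :]) ** p                        # (d, d)
    ens_vario = np.mean(np.abs(ens[:, :, None] - ens[:, None, :]) ** p, axis=0)
    if weights is None:
        weights = np.ones_like(obs_vario)          # uniform weights for simplicity
    return np.sum(weights * (obs_vario - ens_vario) ** 2)

rng = np.random.default_rng(2)
print(variogram_score(rng.normal(size=(20, 5)), rng.normal(size=5)))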

New approaches to postprocessing of multi‐model ensemble forecasts

TLDR
A flexible multivariate Bayesian postprocessing framework, based on a directed acyclic graph representing the relationships between the ensembles and the observed weather, is proposed and applied to forecasts of surface temperature over the UK during the winters from 2007 to 2013.

Comparison of multivariate post-processing methods using global ECMWF ensemble forecasts

An influential step in weather forecasting was the operational introduction of ensemble forecasts, owing to their ability to account for uncertainty in the future state of the atmosphere.

Calibration tests for multivariate Gaussian forecasts

Forecasts should by nature take the form of probability distributions. Calibration, the statistical consistency between forecast distributions and observations, is a central property of good

On the number of bins in a rank histogram

  • Claudio Heinrich
  • Quarterly Journal of the Royal Meteorological Society, 2020
TLDR
The goal of the method is to select a number of bins such that the intuitive decision whether a histogram is uniform or not is as close as possible to a formal statistical test.
...

References

Showing 1-10 of 42 references

Assessing probabilistic forecasts of multivariate quantities, with an application to ensemble predictions of surface winds

We discuss methods for the evaluation of probabilistic predictions of vector-valued quantities, which can take the form of a discrete forecast ensemble or a density forecast. In particular, we propose
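One of the tools proposed there is the energy score; a minimal ensemble version is sketched below.

import numpy as np

def energy_score(ens, obs):
    """Energy score of an m-member ensemble for a d-dimensional observation
    (lower is better): mean distance to the observation minus half the mean
    pairwise distance between members."""
    term1 = np.mean(np.linalg.norm(ens - obs[None, :], axis=1))
    pairwise = np.linalg.norm(ens[:, None, :] - ens[None, :, :], axis=2)
    return term1 - 0.5 * np.mean(pairwise)

rng = np.random.default_rng(3)
print(energy_score(rng.normal(size=(50, 3)), rng.normal(size=3)))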

Interpretation of Rank Histograms for Verifying Ensemble Forecasts

Rank histograms are a tool for evaluating ensemble forecasts. They are useful for determining the reliability of ensemble forecasts and for diagnosing errors in their mean and spread. Rank

Probabilistic forecasts, calibration and sharpness

Probabilistic forecasts of continuous variables take the form of predictive densities or predictive cumulative distribution functions. We propose a diagnostic approach to the evaluation of
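The diagnostic approach in that paper centres on probability integral transform (PIT) histograms, the univariate analogue of the rank histogram; below is a small simulated illustration under an assumed setup, not taken from the paper.

import numpy as np
from scipy import stats

# PIT: evaluate each predictive CDF at the verifying observation. For a
# calibrated forecaster the PIT values are uniform on [0, 1], the continuous
# analogue of a flat verification rank histogram.
rng = np.random.default_rng(4)
n = 2000
signal = rng.normal(size=n)
obs = signal + rng.normal(size=n)                      # truth: N(signal, 1)
pit_calibrated = stats.norm.cdf(obs, loc=signal, scale=1.0)
pit_overdispersed = stats.norm.cdf(obs, loc=signal, scale=2.0)
print(np.histogram(pit_calibrated, bins=10, range=(0, 1))[0])     # roughly flat
print(np.histogram(pit_overdispersed, bins=10, range=(0, 1))[0])  # hump-shaped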

Using Bayesian Model Averaging to Calibrate Forecast Ensembles

Ensembles used for probabilistic weather forecasting often exhibit a spread-error correlation, but they tend to be underdispersive. This paper proposes a statistical method for postprocessing
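For the Gaussian case (e.g. temperature), the BMA predictive density is a weighted mixture of normal kernels centred at bias-corrected members; the sketch below uses illustrative, unfitted parameters, whereas in the cited approach they are estimated from training data.

import numpy as np
from scipy import stats

def bma_predictive_pdf(y, members, weights, a, b, sigma):
    """Gaussian BMA predictive density: a weighted mixture of normal kernels,
    each centred at a bias-corrected member a + b * members[k].

    weights, a, b and sigma are placeholders here; in practice they are
    fitted (e.g. by maximum likelihood / EM) on a training period."""
    means = a + b * np.asarray(members)
    return np.sum(np.asarray(weights) * stats.norm.pdf(y, loc=means, scale=sigma))

members = np.array([21.0, 22.5, 20.4])        # raw member forecasts (e.g. deg C)
weights = np.array([0.5, 0.3, 0.2])           # must sum to one
print(bma_predictive_pdf(21.8, members, weights, a=0.1, b=1.0, sigma=1.2))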

Calibrated Probabilistic Forecasting Using Ensemble Model Output Statistics and Minimum CRPS Estimation

TLDR
This work proposes the use of ensemble model output statistics (EMOS), an easy-to-implement postprocessing technique that addresses both forecast bias and underdispersion and takes into account the spread-skill relationship.
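A stripped-down sketch of such an EMOS fit, assuming a single predictor (the ensemble mean), a spread-dependent variance, and the closed-form CRPS of a normal distribution; the parameterization and optimizer choice are illustrative.

import numpy as np
from scipy import stats, optimize

def crps_normal(mu, sigma, y):
    """Closed-form CRPS of a N(mu, sigma^2) forecast for observation y."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * stats.norm.cdf(z) - 1)
                    + 2 * stats.norm.pdf(z) - 1 / np.sqrt(np.pi))

def fit_emos(ens_mean, ens_var, obs):
    """Minimum-CRPS fit of a simple EMOS model N(a + b*mean, c + d*var),
    optimized numerically over training cases (log transform keeps c, d > 0)."""
    def mean_crps(params):
        a, b, log_c, log_d = params
        mu = a + b * ens_mean
        sigma = np.sqrt(np.exp(log_c) + np.exp(log_d) * ens_var)
        return np.mean(crps_normal(mu, sigma, obs))
    res = optimize.minimize(mean_crps, x0=np.array([0.0, 1.0, 0.0, 0.0]),
                            method="Nelder-Mead")
    return res.x

# Synthetic training data: an underdispersive ensemble around a noisy truth.
rng = np.random.default_rng(5)
truth = rng.normal(size=300)
ens = truth[:, None] + 0.5 * rng.normal(size=(300, 8))   # too little spread
obs = truth + rng.normal(size=300)
print(fit_emos(ens.mean(axis=1), ens.var(axis=1), obs))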

The minimum spanning tree histogram as a verification tool for multidimensional ensemble forecasts

The minimum spanning tree (MST) histogram is a multivariate extension of the ideas behind the conventional scalar rank histogram. It tabulates the frequencies, over n forecast occasions, of the rank
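A minimal sketch of that construction; the observation-substitution scheme follows the description above, while the distance choice and tie-breaking are generic.

import numpy as np
from scipy.spatial.distance import squareform, pdist
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_length(points):
    """Total edge length of the minimum spanning tree of a point set."""
    dist = squareform(pdist(points))
    return minimum_spanning_tree(dist).sum()

def mst_rank(obs, ens, rng=None):
    """Rank of the ensemble-only MST length among the lengths obtained by
    substituting the observation for each member in turn (random ties)."""
    if rng is None:
        rng = np.random.default_rng()
    l_ens = mst_length(ens)
    l_sub = np.array([mst_length(np.vstack([np.delete(ens, k, axis=0), obs]))
                      for k in range(len(ens))])
    below = np.sum(l_sub < l_ens)
    ties = np.sum(l_sub == l_ens)
    return 1 + below + rng.integers(0, ties + 1)

rng = np.random.default_rng(6)
print(mst_rank(rng.normal(size=3), rng.normal(size=(10, 3)), rng=rng))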

Multivariate probabilistic forecasting using ensemble Bayesian model averaging and copulas

TLDR
This work proposes the use of a Gaussian copula, which offers a simple procedure for recovering the dependence that is lost in the estimation of the ensemble BMA marginals, and shows that it recovers many well‐understood dependencies between weather quantities and subsequently improves calibration and sharpness over both the raw ensemble and a method which does not incorporate joint distributional information.
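A generic sketch of the copula step, with placeholder margins standing in for fitted BMA marginals and an assumed correlation matrix.

import numpy as np
from scipy import stats

def gaussian_copula_sample(corr, marginal_ppfs, n, rng=None):
    """Draw n samples whose dependence comes from a Gaussian copula with
    correlation matrix `corr` and whose margins are given by `marginal_ppfs`
    (a list of inverse-CDF callables, one per dimension)."""
    if rng is None:
        rng = np.random.default_rng()
    z = rng.multivariate_normal(np.zeros(len(corr)), corr, size=n)  # correlated normals
    u = stats.norm.cdf(z)                                           # uniform margins
    return np.column_stack([ppf(u[:, j]) for j, ppf in enumerate(marginal_ppfs)])

# Stand-in margins (in the cited approach these would be fitted BMA mixtures):
margins = [lambda u: stats.norm.ppf(u, loc=5.0, scale=2.0),     # e.g. temperature
           lambda u: stats.gamma.ppf(u, a=2.0, scale=1.5)]      # e.g. wind speed
corr = np.array([[1.0, 0.6], [0.6, 1.0]])
print(gaussian_copula_sample(corr, margins, n=5))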

Copula Calibration

TLDR
Methods and tools used to propose notions of calibration for probabilistic forecasts of general multivariate quantities are illustrated in simulation studies and applied to compare raw numerical model and statistically postprocessed ensemble forecasts of bivariate wind vectors.

Combining dynamical and statistical ensembles

TLDR
A method is proposed for dressing ensembles of any size, thus enabling valid comparisons to be made between them, and it is shown that the dressed ECMWF ensembles have skill relative to the dressed ECMWF best guess, even at the maximum lead time of the ECMWF forecasts.

Uncertainty Quantification in Complex Simulation Models Using Ensemble Copula Coupling

TLDR
It is shown that seemingly unrelated, recent advances can be interpreted, fused and consolidated within the framework of ECC, the common thread being the adoption of the empirical copula of the raw ensemble.
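A minimal sketch of the ECC idea in its quantile variant (often called ECC-Q), with placeholder postprocessed margins: the empirical copula of the raw ensemble is reused by reordering quantiles drawn from the postprocessed margins.

import numpy as np
from scipy import stats

def ecc_q(raw_ens, marginal_ppfs):
    """ECC-Q: draw equidistant quantiles from each postprocessed margin, then
    reorder them to match the rank order of the raw ensemble in that margin.

    raw_ens : (m, d) raw ensemble; marginal_ppfs : list of d inverse CDFs."""
    m, d = raw_ens.shape
    probs = np.arange(1, m + 1) / (m + 1)            # equidistant quantile levels
    out = np.empty_like(raw_ens, dtype=float)
    for j in range(d):
        quantiles = marginal_ppfs[j](probs)          # sorted postprocessed sample
        order = raw_ens[:, j].argsort().argsort()    # rank of each raw member
        out[:, j] = quantiles[order]                 # member with k-th smallest raw
                                                     # value gets the k-th quantile
    return out

# Example: 5-member raw ensemble in 2 dimensions, normal postprocessed margins.
rng = np.random.default_rng(7)
raw = rng.normal(size=(5, 2))
margins = [lambda p: stats.norm.ppf(p, loc=0.5, scale=1.2),
           lambda p: stats.norm.ppf(p, loc=-0.2, scale=0.8)]
print(ecc_q(raw, margins))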