Bayesian measures of model complexity and fit

@article{Spiegelhalter2002BayesianMO,
  title={Bayesian measures of model complexity and fit},
  author={David J. Spiegelhalter and Nicola G. Best and Bradley P. Carlin and Angelika van der Linde},
  journal={Journal of the Royal Statistical Society: Series B (Statistical Methodology)},
  year={2002},
  volume={64}
}
Summary. We consider the problem of comparing complex hierarchical models in which the number of parameters is not clearly defined. Using an information theoretic argument we derive a measure pD for the effective number of parameters in a model as the difference between the posterior mean of the deviance and the deviance at the posterior means of the parameters of interest. In general pD approximately corresponds to the trace of the product of Fisher's information and the posterior covariance… 
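
Read literally, the abstract defines pD as the posterior mean of the deviance minus the deviance at the posterior means of the parameters of interest; the paper's deviance information criterion (DIC) then adds pD to the posterior mean deviance. A minimal sketch of that calculation from MCMC output, assuming a hypothetical array `samples` of posterior draws and a user-supplied `log_lik` function (neither is code from the paper):

```python
# Hypothetical sketch (not the paper's code): computing pD and DIC from MCMC
# output.  `log_lik(theta)` returns the log-likelihood of the observed data at
# parameter vector theta; `samples` holds posterior draws, shape (n_draws, n_params).
import numpy as np

def pd_and_dic(samples, log_lik):
    deviances = np.array([-2.0 * log_lik(theta) for theta in samples])
    d_bar = deviances.mean()                          # posterior mean of the deviance
    d_at_mean = -2.0 * log_lik(samples.mean(axis=0))  # deviance at the posterior mean
    p_d = d_bar - d_at_mean                           # effective number of parameters
    dic = d_bar + p_d                                 # deviance information criterion
    return p_d, dic
```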

A Bayesian view of model complexity

This article addresses the problem of formally defining the ‘effective number of parameters’ in a Bayesian model which is assumed to be given by a sampling distribution and a prior distribution for

Bayesian model configuration, selection and averaging in complex regression contexts

A novel MCMC algorithm for the search through the model space via efficient mode jumping for GLMMs is introduced, based on the assumption that marginal likelihoods can be efficiently calculated within each model.

Testing hypotheses via a mixture estimation model

We consider a novel paradigm for Bayesian testing of hypotheses and Bayesian model comparison. Our alternative to the traditional construction of posterior probabilities that a given hypothesis is

Information-based inference for singular models and finite sample sizes

An improved approximation for the complexity is introduced which is used to define a new information criterion: the frequentist information criterion (FIC), which extends the applicability of information-based inference to the finite-sample-size regime of regular models and to singular models.

A Bayesian Chi-Squared Test for Goodness of Fit

This article describes an extension of classical χ² goodness-of-fit tests to Bayesian model assessment. The extension, which essentially involves evaluating Pearson's goodness-of-fit statistic at a

Measuring the complexity of generalized linear hierarchical models

Measuring a statistical model's complexity is important for model criticism and comparison. However, it is unclear how to do this for hierarchical models due to uncertainty about how to count the

The Focussed Information Criterion

A focussed information criterion for model selection, the FIC, is proposed using an unbiased estimate of limiting risk, and a method which for given focus parameter estimates the precision of any submodel-based estimator is developed.

Predictive Alternatives in Bayesian Model Selection

This thesis presents two new families of information criteria that can be used to perform model comparison and examines the role of priors for estimation and model comparison as well as the role that information theory can play in the latter.

Bayesian Case-deletion Model Complexity and Information Criterion.

A new set of Bayesian case-deletion model complexity (BCMC) measures is proposed for quantifying the effective number of parameters in a given statistical model, and its properties in linear models are explored.
...

References

Showing 1-10 of 151 references

POSTERIOR PREDICTIVE ASSESSMENT OF MODEL FITNESS VIA REALIZED DISCREPANCIES

This paper considers Bayesian counterparts of the classical tests for goodness of fit and their use in judging the fit of a single Bayesian model to the observed data. We focus on posterior

A note on the generalized information criterion for choice of a model

One way of selecting models is to choose that model for which the maximized log likelihood minus a multiple of the number of parameters estimated is a maximum. This note explores the choice
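
A hedged illustration of that selection rule (the penalty value and model scores below are made up, not taken from the note): each candidate model is scored by its maximized log-likelihood minus a multiple of its parameter count, and the highest score wins.

```python
# Hypothetical illustration of the generalized information criterion rule:
# score each model by max log-likelihood minus alpha times its parameter count
# and keep the best.  alpha = 1 corresponds to an AIC-type penalty;
# alpha = log(n) / 2 to a BIC-type penalty.  Numbers are illustrative only.
def select_model(models, alpha):
    return max(models, key=lambda name: models[name][0] - alpha * models[name][1])

candidates = {"M1": (-120.3, 4), "M2": (-118.9, 7)}  # name -> (max log-lik, #params)
best = select_model(candidates, alpha=1.0)
```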

Inequalities between expected marginal log‐likelihoods, with implications for likelihood‐based model complexity and comparison measures

A multi‐level model allows the possibility of marginalization across levels in different ways, yielding more than one possible marginal likelihood. Since log‐likelihoods are often used in classical

Markov Chain Monte Carlo Methods for Computing Bayes Factors

It is found that the joint model-parameter space search methods perform adequately but can be difficult to program and tune, whereas the marginal likelihood methods often are less troublesome and require less additional coding.

Model selection and multimodel inference : a practical information-theoretic approach

The second edition of this book is unique in that it focuses on methods for making formal statistical inference from all the models in an a priori set (Multi-Model Inference). A philosophy is

MCMC Methods for Computing Bayes Factors: A Comparative Review

It is suggested that the joint model-parameter space search methods perform adequately but can be difficult to program and tune, while the marginal likelihood methods are often less troublesome and require less in the way of additional coding.

Model Selection and Accounting for Model Uncertainty in Graphical Models Using Occam's Window

Abstract We consider the problem of model selection and accounting for model uncertainty in high-dimensional contingency tables, motivated by expert system applications. The approach most used

Bayesian Model Assessment and Comparison Using Cross-Validation Predictive Densities

This work proposes an approach using cross-validation predictive densities to obtain expected utility estimates and Bayesian bootstrap to obtain samples from their distributions, and discusses the probabilistic assumptions made and properties of two practical cross-validation methods, importance sampling and k-fold cross-validation.
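
A minimal sketch of the k-fold variant under stated assumptions: `data` is a NumPy array of observations, and `fit` and `log_pred` are hypothetical user-supplied functions (refit the model on the training folds; evaluate the log predictive density of the held-out fold). None of these names come from the cited work.

```python
# Hypothetical sketch of a k-fold cross-validation estimate of the expected
# log predictive density.  `fit(train)` is assumed to return posterior draws;
# `log_pred(draws, heldout)` the log predictive density of the held-out data
# under those draws (e.g. a Monte Carlo average over the draws).
import numpy as np

def kfold_log_pred(data, k, fit, log_pred, rng=np.random.default_rng(0)):
    folds = np.array_split(rng.permutation(len(data)), k)
    total = 0.0
    for test_idx in folds:
        train_idx = np.setdiff1d(np.arange(len(data)), test_idx)
        draws = fit(data[train_idx])              # refit without the held-out fold
        total += log_pred(draws, data[test_idx])  # score the held-out fold
    return total                                  # expected-utility estimate
```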

Model choice: A minimum posterior predictive loss approach

A predictive criterion where the goal is good prediction of a replicate of the observed data but tempered by fidelity to the observed values is proposed, which is obtained by minimising posterior loss for a given model.
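
In its common squared-error form (an assumption here, since the snippet does not spell out the loss), such a criterion sums a goodness-of-fit term and a predictive-variance penalty; a minimal sketch:

```python
# Hypothetical sketch of a squared-error posterior predictive loss criterion:
# G measures how far posterior predictive means are from the observed data,
# P penalises predictive variance; a smaller G + P is preferred.
# `y_rep` holds posterior predictive replicates, shape (n_draws, n_obs).
import numpy as np

def predictive_loss(y_obs, y_rep):
    mu = y_rep.mean(axis=0)                    # posterior predictive means
    G = np.sum((np.asarray(y_obs) - mu) ** 2)  # goodness-of-fit term
    P = np.sum(y_rep.var(axis=0))              # penalty (predictive variance) term
    return G + P
```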

On the Bayesian analysis of population size

We consider the problem of estimating the total size of a population from a series of incomplete census data. We observe that inference is typically highly sensitive to the choice of model
...