Bayesian Model Assessment and Comparison Using Cross-Validation Predictive Densities

@article{Vehtari2002BayesianMA,
  title={Bayesian Model Assessment and Comparison Using Cross-Validation Predictive Densities},
  author={Aki Vehtari and Jouko Lampinen},
  journal={Neural Computation},
  year={2002},
  volume={14},
  pages={2439--2468}
}
In this work, we discuss practical methods for the assessment, comparison, and selection of complex hierarchical Bayesian models. A natural way to assess the goodness of the model is to estimate its future predictive capability by estimating expected utilities. Instead of just making a point estimate, it is important to obtain the distribution of the expected utility estimate because it describes the uncertainty in the estimate. The distributions of the expected utility estimates can also be…
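As a rough illustration of the idea in the abstract, the sketch below computes the leave-one-out cross-validation estimate of an expected utility (the mean log predictive density) for a deliberately simple conjugate normal model. The data, the known noise level, and the prior settings are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(1.0, 1.0, size=40)    # simulated observations (illustrative)
sigma = 1.0                          # known observation noise sd (assumption)
mu0, tau0 = 0.0, 10.0                # N(mu0, tau0^2) prior on the unknown mean

def posterior(ys):
    """Conjugate normal-normal update for the mean given data ys."""
    prec = 1.0 / tau0**2 + len(ys) / sigma**2
    mean = (mu0 / tau0**2 + ys.sum() / sigma**2) / prec
    return mean, np.sqrt(1.0 / prec)

def log_pred_density(y_i, y_rest):
    """log p(y_i | y_rest): normal, with the posterior variance plus sigma^2."""
    m, s = posterior(y_rest)
    var = s**2 + sigma**2
    return -0.5 * np.log(2 * np.pi * var) - (y_i - m)**2 / (2 * var)

# Cross-validation predictive densities: one held-out term per observation.
lppd = np.array([log_pred_density(y[i], np.delete(y, i)) for i in range(len(y))])

# Point estimate of the expected utility plus a crude standard error, which
# conveys the uncertainty in the estimate rather than the point value alone.
se = lppd.std(ddof=1) / np.sqrt(len(y))
print(f"mean log predictive density: {lppd.mean():.3f} +- {se:.3f}")
```

The standard error here is only the simplest summary of the estimate's uncertainty; the paper argues for obtaining the whole distribution of the expected utility estimate.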
Citations

Expected utility estimation via cross-validation
We discuss practical methods for the assessment, comparison and selection of complex hierarchical Bayesian models. A natural way to assess the goodness of the model is to estimate its future…
Bayesian Leave-One-Out Cross-Validation Approximations for Gaussian Latent Variable Models
This article considers Gaussian latent variable models where the integration over the latent values is approximated using the Laplace method or expectation propagation, and finds that the approach based on a Gaussian approximation to the LOO marginal distribution gives the most accurate and reliable results among the fast methods.
Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC
An efficient computation of LOO is introduced using Pareto-smoothed importance sampling (PSIS), a new procedure for regularizing importance weights; PSIS-LOO is demonstrated to be more robust in the finite case with weak priors or influential observations.
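The entry above concerns PSIS-LOO; a minimal sketch of the underlying plain importance-sampling LOO (without the Pareto-smoothing step that PSIS adds) for a toy normal-mean model might look as follows. The data, the flat prior, and the draw count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(0.5, 1.0, size=30)    # simulated data (illustrative)
sigma = 1.0                          # known noise sd (assumption)

# Posterior draws for the mean under a flat prior: N(ybar, sigma^2 / n).
S, n = 4000, len(y)
theta = rng.normal(y.mean(), sigma / np.sqrt(n), size=S)

# Pointwise log-likelihoods log p(y_i | theta_s), shape (S, n).
loglik = (-0.5 * np.log(2 * np.pi * sigma**2)
          - (y[None, :] - theta[:, None])**2 / (2 * sigma**2))

# Plain IS-LOO: weight each draw by 1 / p(y_i | theta_s). PSIS would
# additionally fit a generalized Pareto distribution to the largest
# weights per observation to regularize them.
logw = -loglik
logw -= logw.max(axis=0)                 # stabilize before exponentiating
w = np.exp(logw)
elpd_i = np.log(np.sum(w * np.exp(loglik), axis=0) / w.sum(axis=0))
print(f"elpd_loo estimate: {elpd_i.sum():.2f}")
```

With weak priors or influential observations the raw weights above can have very heavy tails, which is exactly the failure mode the Pareto-smoothing step is designed to diagnose and mitigate.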
Bayesian Input Variable Selection Using Posterior Probabilities and Expected Utilities
The benefits of using expected utilities for input variable selection in complex Bayesian hierarchical models are that the approach is less sensitive to prior choices, provides useful model assessment, and helps in finding useful models.
Bayesian nonparametric model selection and model testing
This article examines a Bayesian nonparametric approach to model selection and model testing, based on concepts from Bayesian decision theory and information theory. The approach…
Model selection via predictive explanatory power
This work proposes a model selection method based on the Kullback-Leibler divergence from the predictive distribution of the full model to the predictive distributions of the submodels, and compares the method's performance with posterior probabilities, deviance information criteria, and direct maximization of the expected utility via cross-validation.
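For Gaussian predictive distributions, the Kullback-Leibler divergence used by such a method has a closed form; the following toy comparison, with made-up predictive means and standard deviations, shows the mechanics.

```python
import math

def kl_normal(m1, s1, m2, s2):
    """KL( N(m1, s1^2) || N(m2, s2^2) ), closed form for two Gaussians."""
    return math.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

# Hypothetical predictive distributions: the full model's vs. two submodels'.
full = (0.0, 1.0)
submodels = {"submodel A": (0.1, 1.05), "submodel B": (0.8, 1.4)}
for name, (m, s) in submodels.items():
    print(f"{name}: KL from full model = {kl_normal(full[0], full[1], m, s):.4f}")
# The submodel whose predictive stays closest to the full model's (smallest
# divergence) loses the least predictive explanatory power.
```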
Erratum to: Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC
Bayesian Input Variable Selection Using Cross-Validation Predictive Densities and Reversible Jump MCMC
We consider the problem of input variable selection for a Bayesian model. With suitable priors it is possible to have a large number of input variables in Bayesian models, as less relevant inputs can…
On the marginal likelihood and cross-validation
In Bayesian statistics, the marginal likelihood, also known as the evidence, is used to evaluate model fit, as it quantifies the joint probability of the data under the prior. In contrast…
