Multimodel Inference

@article{Burnham2004MultimodelI,
  title={Multimodel Inference},
  author={Kenneth P. Burnham and David R. Anderson},
  journal={Sociological Methods \& Research},
  year={2004},
  volume={33},
  pages={261--304}
}
The model selection literature has been generally poor at reflecting the deep foundations of the Akaike information criterion (AIC) and at making appropriate comparisons to the Bayesian information criterion (BIC). There is a clear philosophy, a sound criterion based in information theory, and a rigorous statistical foundation for AIC. AIC can be justified as Bayesian using a “savvy” prior on models that is a function of sample size and the number of model parameters. Furthermore, BIC can be …
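As a minimal numerical sketch (the log-likelihoods and parameter counts below are made up for illustration, not values from the article), AIC and BIC follow directly from a model's maximized log-likelihood, and AIC differences convert into Akaike model weights:

```python
import math

def aic(log_lik, k):
    """Akaike information criterion: -2 log L + 2k."""
    return -2.0 * log_lik + 2.0 * k

def bic(log_lik, k, n):
    """Bayesian information criterion: -2 log L + k log n."""
    return -2.0 * log_lik + k * math.log(n)

def akaike_weights(aic_values):
    """Akaike weights: exp(-delta_i / 2) normalized to sum to one,
    where delta_i = AIC_i - min AIC."""
    best = min(aic_values)
    rel = [math.exp(-0.5 * (a - best)) for a in aic_values]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical (log L, number of parameters) for three candidate models
models = [(-50.0, 2), (-48.5, 3), (-48.4, 5)]
n = 100
aics = [aic(ll, k) for ll, k in models]
weights = akaike_weights(aics)
```

With these numbers the middle model has the smallest AIC (103.0) and so receives the largest weight; the weights can then serve as model probabilities for multimodel inference such as model averaging.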
Citations

Model weights and the foundations of multimodel inference.
The usefulness of the weighted BIC (Bayesian information criterion) is suggested as a computationally simple alternative to AIC, based on explicit selection of prior model probabilities rather than acceptance of the default priors associated with AIC.
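The weighted-BIC idea can be sketched numerically: posterior model probabilities follow from exp(-BIC_i / 2) rescaled by explicitly chosen prior model probabilities. The BIC values and priors below are hypothetical, chosen only to show how the prior shifts the ranking:

```python
import math

def bic_model_probs(bic_values, priors):
    """Posterior model probabilities from BIC with explicit prior model
    probabilities: p(M_i | data) proportional to exp(-BIC_i / 2) * prior_i."""
    best = min(bic_values)  # subtract the minimum for numerical stability
    rel = [p * math.exp(-0.5 * (b - best)) for b, p in zip(bic_values, priors)]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical BIC values for three models; compare a uniform prior
# with a prior that favors the first (simplest) model
bics = [210.3, 212.1, 215.0]
uniform = bic_model_probs(bics, [1/3, 1/3, 1/3])
skewed = bic_model_probs(bics, [0.6, 0.3, 0.1])
```

Under the uniform prior the first model already dominates; the skewed prior raises its posterior probability further, illustrating why the choice of prior model probabilities matters.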
Assessing Model Selection Uncertainty Using a Bootstrap Approach: An Update
This study consists of a series of simulations to assess the utility of the proposed bootstrap approach in multigroup and mixture model comparisons and shows that bootstrap selection rates can provide additional information over and above simply relying on the size of AIC and BIC differences in a given sample.
Information criteria: How do they behave in different models?
The AIC, AICc and BIC penalize the likelihoods in order to select the simplest model, and the applications of these criteria are investigated in the selection of normal models, the selection of biological growth models and the selection of time series models.
Model selection on solid ground: Rigorous comparison of nine ways to evaluate Bayesian model evidence
Results show that Bayesian model evidence (BME) values from information criteria (ICs) are often heavily biased, that the choice of approximation method substantially influences the accuracy of model ranking, and that for reliable model selection, bias-free numerical methods should be preferred over ICs whenever computationally feasible.
Truth, models, model sets, AIC, and multimodel inference: a Bayesian perspective
It is argued that the Bayesian paradigm provides the natural framework for describing uncertainty associated with model choice and the most easily communicated basis for model weighting, and that Bayesian arguments provide the sole justification for interpreting model weights as coherent (mathematically self-consistent) model probabilities.
Bayes Factors and Multimodel Inference
Noting the sensitivity of Bayes factors to the choice of priors on parameters, this work defines and proposes nonpreferential priors as offering a reasonable standard for objective multimodel inference.
Using Akaike’s information theoretic criterion in population
Akaike’s information-theoretic criterion for model discrimination (AIC) is often stated to “overfit”, i.e., it selects models with a higher dimension than the dimension of the model that generated …
The relative performance of AIC, AICC and BIC in the presence of unobserved heterogeneity
It is found that the relative predictive performance of model selection by different information criteria is heavily dependent on the degree of unobserved heterogeneity between data sets, and that the choice of information criterion should ideally be based upon hypothesized properties of the population of data sets from which a given data set could have arisen.
Parametric or nonparametric? A parametricness index for model selection
In the model selection literature, two classes of criteria perform well asymptotically in different situations: the Bayesian information criterion (BIC) (as a representative) is consistent in selection when …
An evaluation of prior influence on the predictive ability of Bayesian model averaging
It is demonstrated that parsimonious priors may be favorable over priors that favor complexity for making predictions, and BMA performed better than a best single-model approach independently of the prior model weight for 6 out of 16 species.

References

Showing 1–10 of 82 references
A Critique of the Bayesian Information Criterion for Model Selection
The Bayesian information criterion (BIC) has become a popular criterion for model selection in recent years. The BIC is intended to provide a measure of the weight of evidence favoring one model over …
Model selection and multimodel inference: a practical information-theoretic approach
The second edition of this book is unique in that it focuses on methods for making formal statistical inference from all the models in an a priori set (multimodel inference). A philosophy is …
Model selection for extended quasi-likelihood models in small samples.
A small-sample criterion (AICc) for the selection of extended quasi-likelihood models provides a more nearly unbiased estimator of the expected Kullback-Leibler information and often selects better models than AIC in small samples.
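The AICc correction referred to here has the standard closed form AIC + 2k(k+1)/(n − k − 1) for k parameters and n observations; a small sketch with hypothetical values shows how the correction matters in small samples and fades as n grows:

```python
def aicc(log_lik, k, n):
    """Small-sample corrected AIC: AIC + 2k(k+1)/(n - k - 1)."""
    aic = -2.0 * log_lik + 2.0 * k
    return aic + 2.0 * k * (k + 1) / (n - k - 1)

# Hypothetical log-likelihood of -30.0 with k = 5 parameters:
# at n = 20 the correction term 60/14 is substantial,
# at n = 2000 it is nearly zero, so AICc converges to AIC (= 70 here)
small = aicc(-30.0, 5, 20)
large = aicc(-30.0, 5, 2000)
```

Because the extra penalty grows as n approaches k + 1, AICc discourages richly parameterized models precisely when the data are too sparse to support them.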
Bayesian Model Selection in Social Research
It is argued that P-values and the tests based upon them give unsatisfactory results, especially in large samples. It is shown that, in regression, when there are many candidate independent …
Bayesian measures of model complexity and fit
The posterior mean deviance is suggested as a Bayesian measure of fit or adequacy, and the contributions of individual observations to the fit and complexity can give rise to a diagnostic plot of deviance residuals against leverages.
Generalizing the derivation of the Schwarz information criterion
The Schwarz information criterion (SIC, BIC, SBC) is one of the most widely known and used tools in statistical model selection. The criterion was derived by Schwarz (1978) to serve as an asymptotic …
Approximate Bayes factors and accounting for model uncertainty in generalised linear models
Ways of obtaining approximate Bayes factors for generalised linear models are described, based on the Laplace method for integrals. We propose a new approximation which uses only the output …
Bayesian model averaging: a tutorial (with comments by M. Clyde, David Draper and E. I. George, and a rejoinder by the authors)
Bayesian model averaging (BMA) provides a coherent mechanism for accounting for this model uncertainty and provides improved out-of-sample predictive performance.
Predictive Variable Selection in Generalized Linear Models
Here we extend the predictive method for model selection of Laud and Ibrahim to the generalized linear model. This prescription avoids the need to directly specify prior probabilities of models and prior …
Key Concepts in Model Selection: Performance and Generalizability.
  • M. R. Forster
  • Journal of Mathematical Psychology
  • 2000
It seems that simplicity and parsimony may be an additional factor in managing these errors, in which case the standard methods of model selection are incomplete implementations of Occam's razor.