Model Comparisons and Model Selections Based on Generalization Criterion Methodology
  • Busemeyer and Wang
  • Published 1 March 2000
  • Journal of Mathematical Psychology, 44(1)
The purpose of this article is to formalize the generalization criterion method for model comparison. The method has the potential to provide powerful comparisons of complex and nonnested models that may also differ in terms of numbers of parameters. The generalization criterion differs from the better known cross-validation criterion in the following critical procedure. Although both employ a calibration stage to estimate parameters, cross-validation employs a replication sample from the same… 
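The distinction can be made concrete with a small sketch. This is a hypothetical illustration, not the authors' procedure: two invented nonnested toy models (a linear rule and a saturating rule) are calibrated on data from one set of conditions, then scored on a prediction for a new, extrapolated condition. That new-condition step is what separates the generalization criterion from cross-validation, which would score the models on a replication sample drawn from the same design.

```python
# Hypothetical illustration of the generalization criterion (not the
# authors' code). Both toy models are calibrated on the same conditions,
# then scored on a *new* condition neither saw during calibration.

def fit_linear(xs, ys):
    """Least-squares slope for y = b*x (no intercept)."""
    b = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return lambda x: b * x

def fit_saturating(xs, ys):
    """Grid-search fit for y = a*x/(1+x), a simple nonnested rival."""
    best_a, best_err = 0.0, float("inf")
    for a in (i / 100 for i in range(500)):
        err = sum((y - a * x / (1 + x)) ** 2 for x, y in zip(xs, ys))
        if err < best_err:
            best_a, best_err = a, err
    return lambda x: best_a * x / (1 + x)

# Calibration stage: small stimulus values only.
calib_x = [0.5, 1.0, 1.5, 2.0]
calib_y = [0.33, 0.50, 0.60, 0.67]      # roughly x/(1+x)

# Generalization stage: an extrapolated condition (x = 8) that a
# replication sample under cross-validation would never contain.
new_x, new_y = 8.0, 8.0 / 9.0

for name, fit in [("linear", fit_linear), ("saturating", fit_saturating)]:
    model = fit(calib_x, calib_y)
    gen_err = (model(new_x) - new_y) ** 2
    print(f"{name}: generalization error = {gen_err:.3f}")
```

Both models fit the calibration data closely, so a fit index alone barely separates them; the extrapolated condition does.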


Comparison of Decision Learning Models Using the Generalization Criterion Method
The results suggest that the models with the prospect utility function can make generalizable predictions to new conditions, and different learning models are needed for making short- versus long-term predictions on simple gambling tasks.
Key Concepts in Model Selection: Performance and Generalizability.
  • M. R. Forster
  • Journal of Mathematical Psychology
  • 2000
It seems that simplicity and parsimony may be additional factors in managing these errors, in which case the standard methods of model selection are incomplete implementations of Occam's razor.
A Survey of Model Evaluation Approaches With a Tutorial on Hierarchical Bayesian Methods
It is argued that hierarchical methods, generally, and hierarchical Bayesian methods, specifically, can provide a more thorough evaluation of models in the cognitive sciences.
From Anomalies to Forecasts: A Choice Prediction Competition for Decisions under Risk and Ambiguity
This paper aims to facilitate the derivation of models that can capture the classical choice anomalies and provide useful forecasts of decisions under risk and ambiguity. Study 1 replicates 14…
Learning and Decision Model Selection for a Class of Complex Adaptive Systems
This article discusses a pragmatic method for selecting between classes of models designed to increase understanding of the single most significant factor behind global climate change, namely human land use.
Comparison of basic assumptions embedded in learning models for experience-based decision making
Basic assumptions embedded in learning models for predicting behavior in decisions based on experience are examined; the results favor a class of models incorporating decay of previous experience, whereas the ranking of choice rules depends on the evaluation method used.
Visualizing The Implicit Model Selection Tradeoff
Methods for comparing predictive models in an interpretable manner are proposed, synthesizing ideas from supervised learning, unsupervised learning, dimensionality reduction, and visualization, and it is demonstrated how they can inform the model selection process.
A Hierarchical Bayesian Approach to Assessing Between-Experiment Generalizability of Models of Risky Decision Making
This study introduces a hierarchical Bayesian approach, dubbed hierarchical generalization modeling (HGM), to assessing a model's generalizability between experimental tasks, and demonstrates the soundness and feasibility of HGM for models of risky decision making with human data from decision-from-description and decision-from-experience experiments, and in simulations with artificial data.


Model selection criteria: an investigation of relative accuracy, posterior probabilities, and combinations of criteria
It is suggested that general model comparison, model selection, and model probability estimation be performed using the Schwarz criterion, which can be implemented given the model log likelihoods using only a hand calculator.
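The "hand calculator" remark is easy to make concrete: the Schwarz (Bayesian) information criterion needs only the model log likelihood, the parameter count, and the sample size. A minimal sketch with hypothetical numbers:

```python
from math import log

def bic(log_likelihood, k, n):
    """Schwarz (Bayesian) information criterion; smaller is better.
    Computable from the log likelihood, the number of free
    parameters k, and the sample size n alone."""
    return k * log(n) - 2.0 * log_likelihood

# Hypothetical comparison: model B fits slightly better but pays
# a complexity penalty for its two extra parameters (n = 100).
bic_a = bic(log_likelihood=-120.0, k=3, n=100)
bic_b = bic(log_likelihood=-118.5, k=5, n=100)
print(f"BIC(A) = {bic_a:.2f}, BIC(B) = {bic_b:.2f}")
# The simpler model A wins: its penalty saving outweighs B's fit gain.
```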
Cross-Validation in Regression and Covariance Structure Analysis
This article gives an overview of cross-validation techniques in regression and covariance structure analysis. The method of cross-validation offers a means for checking the accuracy or reliability…
Cross-Validation Methods.
  • Browne
  • Journal of Mathematical Psychology
  • 2000
It is seen that the optimal number of parameters suggested by both single-sample and two-sample cross-validation indices will depend on sample size.
Applying Occam’s razor in modeling cognition: A Bayesian approach
This paper introduces a Bayesian model selection approach that formalizes Occam's razor, choosing the simplest model that describes the data well, and can be applied to the comparison of non-nested models as well as nested ones.
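The mechanism by which marginal likelihood formalizes Occam's razor can be shown with a textbook toy example (not the paper's models): a flexible model spreads its prior over many parameter values, so its marginal likelihood is automatically diluted when a simpler model already accounts for the data.

```python
from math import comb

# Toy Bayesian Occam's razor, purely illustrative: compare a
# fixed-bias coin model against a flexible one by marginal likelihood.

def binom_lik(p, heads, n):
    """Binomial likelihood of observing `heads` in `n` flips."""
    return comb(n, heads) * p**heads * (1 - p) ** (n - heads)

heads, n = 6, 10                      # observed data: 6 heads in 10 flips

# M0: fair coin, no free parameters.
marg_m0 = binom_lik(0.5, heads, n)

# M1: unknown bias, uniform prior approximated on a grid;
# averaging over the prior is what penalizes the extra flexibility.
grid = [i / 100 for i in range(1, 100)]
marg_m1 = sum(binom_lik(p, heads, n) for p in grid) / len(grid)

bayes_factor = marg_m0 / marg_m1
print(f"Bayes factor M0/M1 = {bayes_factor:.2f}")   # > 1 favors the simpler model
```

Note that the two models are nonnested only in their priors; the same averaging argument applies unchanged to genuinely nonnested model pairs, which is the case the paper emphasizes.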
Akaike's Information Criterion and Recent Developments in Information Complexity.
  • Bozdogan
  • Journal of Mathematical Psychology
  • 2000
This paper presents recent developments of Bozdogan's entropic information complexity (ICOMP) criterion for model selection, and operationalizes the general form of ICOMP by quantifying overall model complexity in terms of the estimated inverse-Fisher information matrix.
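For contrast with ICOMP, Akaike's original criterion penalizes only the parameter count: AIC = 2k - 2 ln L. A minimal sketch with hypothetical fit values (ICOMP's additional inverse-Fisher-information penalty is deliberately omitted here):

```python
def aic(log_likelihood, k):
    """Akaike's information criterion: 2k - 2 ln L; smaller is better."""
    return 2 * k - 2 * log_likelihood

# Hypothetical fits: model B gains 1.5 log-likelihood units but
# carries two extra parameters, so its AIC penalty (+4) dominates.
print(aic(-120.0, 3))   # 246.0
print(aic(-118.5, 5))   # 247.0
```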
Comparing strong and weak models by fitting them to computer-generated data
This paper illustrates the inferential problem posed by this second means of achieving good fit and demonstrates how appropriate control information can be gathered to put inferences from model-fitting on firmer ground.
Single Sample Cross-Validation Indices for Covariance Structures.
This article considers single sample approximations for the cross-validation coefficient in the analysis of covariance structures. An adjustment for predictive validity which may be employed in…
Cross‐Validatory Choice and Assessment of Statistical Predictions
A generalized form of the cross-validation criterion is applied to the choice and assessment of prediction using the data-analytic concept of a prescription. The examples used to illustrate…
Selectivity, scope, and simplicity of models: a lesson from fitting judgments of perceived depth.
Based on an exploration of integration models fit to judgments of perceived depth, the FLMP's successes may reflect not its sensitivity in capturing psychological process but its scope in fitting any data and its complexity as measured by equation length.
I. Problems and Designs of Cross-Validation
The term "cross-validation" is often loosely applied to any one of several distinct, though closely related, experimental designs. Before we get lost in a swamp of semantic confusion, it…