The Focussed Information Criterion

Gerda Claeskens and Nils Lid Hjort
A variety of model selection criteria have been developed, of both general and specific types. Most of these aim at selecting a single model with good overall properties, for example formulated via average prediction quality or shortest estimated overall distance to the (in some sense) true model. The Akaike, Bayesian and deviance information criteria (AIC, BIC and DIC), along with many suitable variations, are eminent examples of such methods and are in frequent use. These methods are however not…
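As a hypothetical illustration of the fixed-penalty criteria mentioned above (not taken from the paper itself), the following sketch scores a few candidate linear models by AIC and BIC; the data, candidate set, and function names are invented for the example, and additive constants in the log-likelihood are dropped since they cancel across models.

```python
# Minimal sketch of AIC/BIC scoring for nested linear models (assumed setup).
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 + rng.normal(size=n)  # x2 is irrelevant by construction

def ic_scores(X, y):
    """Gaussian-likelihood AIC and BIC for a design matrix X (constants dropped)."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    ll_term = n * np.log(rss / n)  # -2 * maximized log-likelihood, up to a constant
    return ll_term + 2 * k, ll_term + np.log(n) * k  # AIC, BIC

ones = np.ones(n)
candidates = {
    "intercept only": np.column_stack([ones]),
    "x1":             np.column_stack([ones, x1]),
    "x1 + x2":        np.column_stack([ones, x1, x2]),
}
for name, X in candidates.items():
    aic, bic = ic_scores(X, y)
    print(f"{name:15s} AIC={aic:8.2f} BIC={bic:8.2f}")
```

Both criteria trade fit (the residual-sum-of-squares term) against a penalty on model size; BIC's log(n) factor penalizes extra parameters more heavily for large samples. The focussed criterion of the paper departs from this one-score-fits-all setup by targeting the quantity the analyst actually wants to estimate.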


Citations
Covariate selection and model averaging in semiparametric estimation of treatment effects
In the practice of program evaluation, choosing the covariates and the functional form of the propensity score is an important decision for estimating treatment effects. This paper proposes data-driven…
Non-convex penalized estimation for the AR process
It is proved that the penalized estimators achieve standard theoretical properties, such as the weak and strong oracle properties, which have been established in the sparse linear regression framework.
Focused Model Selection in Quantile Regression
We consider the problem of model selection for quantile regression analysis where a particular purpose of the modeling procedure has to be taken into account. Typical examples include estimation of…
Prestructuring multilayer perceptrons based on information-theoretic modeling of a partido-alto-based grammar for afro-brazilian music: enhanced generalization and principles of parsimony, including an investigation of statistical paradigms
The present study shows that prestructuring based on domain knowledge leads to statistically significant generalization-performance improvement in artificial neural networks (NNs) of the multilayer…
The ability of RA to model an intricate and culturally specific musical construct in terms of discrete note events and their interactions, in such a way as to mirror a human understanding of the corresponding musical practice, is demonstrated.

References

Model selection and multimodel inference: a practical information-theoretic approach
The second edition of this book is unique in that it focuses on methods for making formal statistical inference from all the models in an a priori set (Multi-Model Inference). A philosophy is…
Frequentist Model Average Estimators
The traditional use of model selection methods in practice is to proceed as if the final selected model had been chosen in advance, without acknowledging the additional uncertainty introduced by…
Adaptive Model Selection
Most model selection procedures penalize an increase in model size with a fixed penalty. Such non-adaptive selection procedures perform well in only one type of situation. For instance,…
Bayesian measures of model complexity and fit
The posterior mean deviance is suggested as a Bayesian measure of fit or adequacy, and the contributions of individual observations to the fit and complexity can give rise to a diagnostic plot of deviance residuals against leverages.
Goodness of Fit via Non‐parametric Likelihood Ratios
To test whether a density f is equal to a specified f0, one knows by the Neyman–Pearson lemma the form of the optimal test at a specified alternative f1. Any non-parametric density estimation…
Model uncertainty, data mining and statistical inference
The effects of model uncertainty, such as overly narrow prediction intervals, and the non-trivial biases in parameter estimates that can follow data-based modelling, are reviewed.
Model selection for extended quasi-likelihood models in small samples.
A small sample criterion (AICc) for the selection of extended quasi-likelihood models provides a more nearly unbiased estimator for the expected Kullback-Leibler information and often selects better models than AIC in small samples.
Calibration and Empirical Bayes Variable Selection
For the problem of variable selection for the normal linear model, selection criteria such as AIC, Cp, BIC and RIC have fixed dimensionality penalties. Such criteria are shown to correspond to…
Testing the Fit of a Parametric Function
General methods for testing the fit of a parametric function are proposed. The idea underlying each method is to “accept” the prescribed parametric model if and only if it is chosen by a…
Testing lack of fit in multiple regression
We study lack-of-fit tests based on orthogonal series estimators. A common feature of these tests is that they are functions of score statistics that employ data-driven model dimensions. The…