Akaike's Information Criterion, Cp and Estimators of Loss for Elliptically Symmetric Distributions

  title={Akaike's Information Criterion, Cp and Estimators of Loss for Elliptically Symmetric Distributions},
  author={Aur{\'e}lie Boisbunon and St{\'e}phane Canu and Dominique Fourdrinier and William E. Strawderman and Martin T. Wells},
  journal={International Statistical Review},
  pages={422--439}
In this article, we develop a modern perspective on Akaike's information criterion and Mallows's Cp for model selection, and propose generalisations to spherically and elliptically symmetric distributions. Despite the differences in their respective motivation, Cp and Akaike's information criterion are equivalent in the special case of Gaussian linear regression. In this case, they are also equivalent to a third criterion, an unbiased estimator of the quadratic prediction loss, derived from… 
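The claimed equivalence of AIC and Cp in Gaussian linear regression can be sketched numerically: with known noise variance, the two criteria differ only by the constant n, so they always rank models identically. The data, nested model sequence, and σ² below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma2 = 50, 1.0
X_full = rng.standard_normal((n, 5))
beta = np.array([2.0, -1.0, 0.0, 0.0, 0.0])  # only first two predictors matter
y = X_full @ beta + rng.normal(scale=np.sqrt(sigma2), size=n)

def rss(X, y):
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ coef
    return r @ r

def aic(X, y, sigma2):
    # Gaussian log-likelihood with known variance, up to an additive constant
    k = X.shape[1]
    return rss(X, y) / sigma2 + 2 * k

def mallows_cp(X, y, sigma2, n):
    k = X.shape[1]
    return rss(X, y) / sigma2 + 2 * k - n

# Score a nested sequence of models with 1..5 predictors
scores = [(aic(X_full[:, :k], y, sigma2),
           mallows_cp(X_full[:, :k], y, sigma2, n)) for k in range(1, 6)]

# AIC and Cp differ by the constant n, so their rankings coincide
diffs = [a - c for a, c in scores]
best_aic = min(range(5), key=lambda i: scores[i][0])
best_cp = min(range(5), key=lambda i: scores[i][1])
```

Because the offset n does not depend on the model, any criterion-minimising model is the same under both scores.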

Loss and Confidence Level Estimation

Suppose X is an observation from a distribution Pθ parameterized by an unknown parameter θ. In classical decision theory, after selecting an estimation procedure φ(X) of θ, it is typical to evaluate

Inadmissibility of the corrected Akaike information criterion

For the multivariate linear regression model with unknown covariance, the corrected Akaike information criterion is the minimum variance unbiased estimator of the expected Kullback–Leibler

A Scalable Empirical Bayes Approach to Variable Selection in Generalized Linear Models

A new empirical Bayes approach to variable selection in the context of generalized linear models is developed, using a scalable generalized alternating maximization algorithm that leads to significantly faster convergence compared with simulation-based fully Bayesian methods.

Prediction of Linear Models: Application of Jackknife Model Averaging

When using linear models, a common practice is to select the single best-fitting model and use it for prediction. This, however, can cause problems such as misspecification and sometimes even

Model Order Selection From Noisy Polynomial Data Without Using Any Polynomial Coefficients

It is experimentally observed that the root-mean square prediction errors and the variation of the RMS prediction errors appear to scale linearly with the standard deviations of the noise for each degree of a polynomial.
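The observed linear scaling of RMS prediction error with the noise standard deviation can be checked in a small simulation; the cubic test polynomial, sample size, and noise levels below are assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(-1, 1, 100)
true = 2 * x**3 - x          # illustrative degree-3 polynomial
degree = 3

sigmas = [0.1, 0.2, 0.4, 0.8]
rms = []
for s in sigmas:
    errs = []
    for _ in range(200):
        y = true + rng.normal(scale=s, size=x.size)
        coef = np.polyfit(x, y, degree)        # least-squares polynomial fit
        pred = np.polyval(coef, x)
        errs.append(np.sqrt(np.mean((pred - true)**2)))
    rms.append(np.mean(errs))

# Each doubling of sigma should roughly double the RMS prediction error
ratios = [rms[i + 1] / rms[i] for i in range(3)]
```

For a least-squares fit the prediction error is a linear projection of the noise, so its RMS is exactly proportional to σ in expectation; the Monte Carlo ratios should sit close to 2.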

A Primer for Model Selection: The Decisive Role of Model Complexity

A classification scheme for model-selection criteria is proposed that helps to find the right criterion for a specific goal, i.e., one that employs the correct complexity interpretation, and guidance is provided on choosing the right type of criterion for specific model-selection tasks.

Uma estratégia automatizada de investimento por meio de redes neurais artificiais e preditores econométricos [An automated investment strategy using artificial neural networks and econometric predictors]

An automated strategy (investor robot) that combines predictions made by artificial neural networks and econometric predictors in a second neural network, which acts as an ensemble to generate buy or sell signals through a negotiation model built into the algorithm.

Estimation of a Loss Function for Spherically Symmetric Distributions in the General Linear Model

This paper is concerned with estimating the loss of a point estimator when sampling from a spherically symmetric distribution. We examine the canonical setting of a general linear model where the

On Bayes and unbiased estimators of loss

We consider estimation of loss for generalized Bayes or pseudo-Bayes estimators of a multivariate normal mean vector, θ. In three and higher dimensions, the MLE, X, is UMVUE and minimax but is inadmissible.
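The inadmissibility of the MLE X in three and higher dimensions is the classical Stein phenomenon: a shrinkage estimator such as James–Stein has uniformly smaller quadratic risk. A minimal Monte Carlo sketch, with an arbitrarily chosen mean vector and dimension as assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
p, reps = 10, 2000
theta = np.ones(p)                 # arbitrary true mean vector

X = rng.standard_normal((reps, p)) + theta   # X ~ N(theta, I_p), one row per replicate

# MLE is X itself; the James-Stein estimator shrinks X toward the origin
norm2 = np.sum(X**2, axis=1, keepdims=True)
js = (1 - (p - 2) / norm2) * X

# Monte Carlo estimates of quadratic risk E||estimator - theta||^2
risk_mle = np.mean(np.sum((X - theta)**2, axis=1))   # close to p
risk_js = np.mean(np.sum((js - theta)**2, axis=1))
```

With p = 10 the MLE's risk is about 10, while the James–Stein risk is strictly smaller for every θ, which is exactly what inadmissibility of X means.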

Robust generalized Bayes minimax estimators of location vectors for spherically symmetric distributions with unknown scale

We consider estimation of the mean vector, θ, of a spherically symmetric distribution with unknown scale parameter σ under scaled quadratic loss. We show minimaxity of generalized Bayes


For the problem of estimating a regression function, μ say, subject to shape constraints, like monotonicity or convexity, it is argued that the divergence of the maximum likelihood estimator provides

Comparison of Model Selection for Regression

The results demonstrate the practical advantages of VC-based model selection, which consistently outperforms AIC on all data sets; a new practical estimate of model complexity for k-nearest neighbors regression is also proposed.

On Improved Loss Estimation for Shrinkage Estimators

Let X be a random vector with distribution Pθ where θ is an unknown parameter. When estimating θ by some estimator φ(X) under a loss function L(θ, φ), classical decision theory advocates that such a

The Estimation of Prediction Error

A Rao–Blackwell type of relation is derived in which nonparametric methods such as cross-validation are seen to be randomized versions of their covariance penalty counterparts.
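The covariance-penalty idea behind this relation can be sketched for a linear smoother, where both a Cp-type penalized training error and leave-one-out cross-validation estimate the same prediction error. The data, design, and known σ below are assumptions for illustration; the closed-form LOOCV formula is the standard one for linear smoothers:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 200, 4
X = rng.standard_normal((n, k))
sigma = 1.0
y = X @ np.array([1.0, -0.5, 0.0, 0.25]) + rng.normal(scale=sigma, size=n)

H = X @ np.linalg.inv(X.T @ X) @ X.T   # hat matrix: y_hat = H y is a linear smoother
resid = y - H @ y
train_err = np.mean(resid**2)

# Covariance-penalty (Cp-type) estimate: training error + (2 sigma^2 / n) tr(H)
cp_est = train_err + 2 * sigma**2 * np.trace(H) / n

# Leave-one-out cross-validation, closed form for linear smoothers
h = np.diag(H)
loocv_est = np.mean((resid / (1 - h))**2)
```

Both quantities correct the optimism of the training error; for a well-behaved linear smoother they agree closely, consistent with cross-validation being a randomized counterpart of the covariance penalty.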