The Akaike information criterion: Background, derivation, properties, application, interpretation, and refinements

@article{cavanaugh2019akaike,
  title={The Akaike information criterion: Background, derivation, properties, application, interpretation, and refinements},
  author={Joseph E. Cavanaugh and Andrew A. Neath},
  journal={Wiley Interdisciplinary Reviews: Computational Statistics},
  year={2019}
}
  • J. Cavanaugh, A. Neath
  • Published 14 March 2019
  • Computer Science
  • Wiley Interdisciplinary Reviews: Computational Statistics
The Akaike information criterion (AIC) is one of the most ubiquitous tools in statistical modeling. The first model selection criterion to gain widespread acceptance, AIC was introduced in 1973 by Hirotugu Akaike as an extension to the maximum likelihood principle. Maximum likelihood is conventionally applied to estimate the parameters of a model once the structure and dimension of the model have been formulated. Akaike's seminal idea was to combine into a single procedure the process of… 
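As a minimal numerical illustration of the criterion described above, the sketch below compares polynomial fits by AIC = 2k − 2 ln L. The synthetic data, the plug-in Gaussian likelihood, and all function names are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def gaussian_log_likelihood(residuals):
    # Gaussian log-likelihood with the ML plug-in estimate of the noise variance
    n = len(residuals)
    sigma2 = np.mean(residuals ** 2)
    return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)

def aic(log_lik, k):
    # AIC = 2k - 2 ln L; smaller values indicate a better fit/complexity trade-off
    return 2 * k - 2 * log_lik

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 1.0 + 2.0 * x + rng.normal(scale=0.1, size=x.size)  # true model is linear

scores = {}
for degree in range(1, 5):
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    k = degree + 2  # polynomial coefficients plus the noise variance
    scores[degree] = aic(gaussian_log_likelihood(resid), k)

best = min(scores, key=scores.get)  # degree preferred by AIC
```

The loop embodies Akaike's idea of folding model selection into likelihood-based estimation: each candidate structure is fit by maximum likelihood, then the criterion penalizes its dimension.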


This paper presents the state of the art of statistical modelling as applied to plant breeding. Classes of inference, statistical models, estimation methods, and model selection are emphasized in

Navigating the Statistical Minefield of Model Selection and Clustering in Neuroscience

This work discusses model selection and clustering techniques from a statistician's point of view, revealing the assumptions behind, and the logic that governs, the various approaches.

Models for autoregressive processes of bounded counts: How different are they?

The most popular approaches used for model identification, Akaike's information criterion and the Bayesian information criterion, are considered, and the properties of the fitted models obtained using maximum likelihood estimation are investigated.

Impact of Algorithm Selection on Modeling Ozone Pollution: A Perspective on Box and Tiao (1975)

This study has predicted ozone concentration in Los Angeles with an ARIMA and an autoregressive process, and shows that time series analysis should consider not only the model form but also the estimation method, to ensure valid results.

Treatment response prediction: Is model selection unreliable?

Quantitative modelling has become an essential part of the drug development pipeline. In particular, pharmacokinetic and pharmacodynamic models are used to predict treatment responses in order to

Exponentiated Weibull Models Applied to Medical Data in Presence of Right-censoring, Cure Fraction and Covariates

Cure fraction models have been widely used to analyze survival data in which a proportion of the individuals is not susceptible to the event of interest. This article considers frequentist and

Simple Models in Complex Worlds: Occam's Razor and Statistical Learning Theory

It is shown that situations exist for which a preference for simpler models (as modeled through the addition of a regularization term in the learning problem) provably slows down, instead of favoring, the supervised learning process.

Development of Effective Artificial Neural Network Model using Sequential Sensitivity Analysis and Randomized Training

The paper presents an effective model that focuses on the implementation of sequential sensitivity analysis and randomized training in an artificial neural network (ANN) for high-dimensionality thermal power plant data, and suggests the significance of randomized training combined with comparison-based qualitative reasoning.



Information Criteria and Statistical Modeling

A generalized information criterion (GIC) and a bootstrap information criterion are presented, which provide unified tools for modeling and model evaluation for a diverse range of models, including various types of nonlinear models and model estimation procedures such as robust estimation, the maximum penalized likelihood method and a Bayesian approach.

An improved Akaike information criterion for state-space model selection

Model selection and Akaike's Information Criterion (AIC): The general theory and its analytical extensions

During the last fifteen years, Akaike's entropy-based Information Criterion (AIC) has had a fundamental impact in statistical model evaluation problems. This paper studies the general theory of the

Bootstrapping Log Likelihood and EIC, an Extension of AIC

A new information criterion, EIC, is proposed which is constructed by employing the bootstrap method to simulate the data fluctuation and is regarded as an extension of AIC.

A new look at the statistical model identification

The history of the development of statistical hypothesis testing in time series analysis is reviewed briefly, and it is pointed out that the hypothesis testing procedure is not adequately defined as

Model selection for extended quasi-likelihood models in small samples.

A small sample criterion (AICc) for the selection of extended quasi-likelihood models provides a more nearly unbiased estimator for the expected Kullback-Leibler information and often selects better models than AIC in small samples.
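The small-sample correction behind AICc can be sketched as follows; this is the standard correction term 2k(k+1)/(n − k − 1), shown here as a generic illustration (the function names are hypothetical, and the extended quasi-likelihood setting of the paper above is not reproduced):

```python
def aic(log_lik, k):
    # Plain AIC = 2k - 2 ln L
    return 2 * k - 2 * log_lik

def aicc(log_lik, k, n):
    # AICc = AIC + 2k(k+1)/(n - k - 1); requires n > k + 1
    return aic(log_lik, k) + 2 * k * (k + 1) / (n - k - 1)

# The correction inflates the penalty for small n and vanishes as n grows:
small_n = aicc(-10.0, 5, 12)      # substantial extra penalty at n = 12
large_n = aicc(-10.0, 5, 10_000)  # essentially plain AIC
```

The extra penalty term is what makes AICc a less biased estimator of the expected Kullback-Leibler information when the sample size is small relative to the number of parameters.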

Improved estimators of Kullback-Leibler information for autoregressive model selection in small samples

A new estimator, AICI, of the Kullback-Leibler information is proposed for Gaussian autoregressive time series model selection. The expected information is decomposed into two terms, the


Estimation of Kullback-Leibler information is a crucial part of deriving a statistical model selection procedure which, like AIC, is based on the likelihood principle. To discriminate between nested

Generalised information criteria in model selection

The problem of evaluating the goodness of statistical models is investigated from an information-theoretic point of view. Information criteria are proposed for evaluating models constructed

Akaike's Information Criterion in Generalized Estimating Equations

  • W. Pan
  • Mathematics
  • 2001
This work proposes a modification to AIC, where the likelihood is replaced by the quasi-likelihood and a proper adjustment is made for the penalty term.