Bridging AIC and BIC: A New Criterion for Autoregression

@article{Ding2018BridgingAA,
  title={Bridging AIC and BIC: A New Criterion for Autoregression},
  author={Jie Ding and Vahid Tarokh and Yuhong Yang},
  journal={IEEE Transactions on Information Theory},
  year={2018},
  volume={64},
  pages={4024--4043}
}
To address order selection for an autoregressive model fitted to time series data, we propose a new information criterion. It has the benefits of the two well-known model selection techniques: the Akaike information criterion and the Bayesian information criterion. When the data are generated from a finite-order autoregression, the Bayesian information criterion is known to be consistent, and so is the new criterion. When the true order is infinity or suitably high with respect to the sample…
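The AIC/BIC trade-off the abstract describes can be illustrated with a minimal sketch. This is not the bridging criterion the paper proposes; it simply fits AR(p) by conditional least squares over a range of orders and compares the two classical penalties. All function names here are illustrative.

```python
import numpy as np

def ar_order_select(x, max_p=10):
    """Select an AR order by AIC and by BIC (illustrative sketch,
    not the bridging criterion proposed in the paper)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    best = {"aic": (np.inf, None), "bic": (np.inf, None)}
    for p in range(max_p + 1):
        y = x[p:]                      # responses x_t for t = p..n-1
        m = len(y)
        # design matrix: intercept plus p lagged columns x_{t-1}..x_{t-p}
        cols = [np.ones(m)] + [x[p - i : n - i] for i in range(1, p + 1)]
        X = np.column_stack(cols)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sigma2 = np.mean((y - X @ beta) ** 2)   # residual variance
        k = p + 1                               # estimated mean parameters
        aic = m * np.log(sigma2) + 2 * k
        bic = m * np.log(sigma2) + k * np.log(m)
        if aic < best["aic"][0]:
            best["aic"] = (aic, p)
        if bic < best["bic"][0]:
            best["bic"] = (bic, p)
    return best["aic"][1], best["bic"][1]

# usage: simulate a stationary AR(2) and compare the two selections
rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(2, 500):
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()
aic_p, bic_p = ar_order_select(x, max_p=8)
```

With a well-specified finite true order and a moderate sample, BIC's heavier log(m) penalty tends toward the true order (consistency), while AIC's fixed penalty of 2 may admit extra lags; this is exactly the tension the proposed criterion aims to resolve.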
Consistent model selection criteria and goodness-of-fit test for affine causal processes
This paper studies the model selection problem in a large class of causal time series models, which includes the ARMA and AR(∞) processes as well as the GARCH, ARCH(∞), APARCH, and ARMA-GARCH…
Order selection for possibly infinite-order non-stationary time series
This study proposes a two-stage information criterion (TSIC), and shows that TSIC is asymptotically efficient in predicting integrated AR models when the underlying AR coefficients satisfy a wide range of conditions.
Controlling the error probabilities of model selection information criteria using bootstrapping
This paper presents the Error Control for Information Criteria (ECIC) method, a bootstrap approach to controlling Type-I error using Difference of Goodness of Fit (DGOF) distributions.
On Statistical Efficiency in Learning
A generalized notion of Takeuchi's information criterion is proposed, and it is proved that the proposed method can asymptotically achieve the optimal out-of-sample prediction loss under reasonable assumptions.
Performance Evaluation of AIC and BIC in Time Series Clustering with Piccolo Method
Results show that the Bayesian information criterion (BIC) outperforms the Akaike information criterion (AIC) in time series clustering with the Piccolo method.
Asymptotic analysis of model selection criteria for general hidden Markov models
The paper obtains analytical results for the asymptotic properties of model selection criteria -- widely used in practice -- for a general family of hidden Markov models (HMMs), thereby substantially…
Sensitivity and specificity of information criteria
In some cases the comparison of two models using ICs can be viewed as equivalent to a likelihood ratio test, with the different criteria representing different alpha levels and BIC being a more conservative test than AIC.
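The likelihood-ratio reading in this snippet can be made concrete. For two nested models differing by one parameter (an assumption of this sketch), each per-parameter penalty implies a Type-I error level:

```python
import math

# Preferring the larger of two nested models under an IC with per-parameter
# penalty c means rejecting the null when 2 * log-likelihood-ratio > c,
# i.e. a chi-square(1) test at level alpha = P(chi2_1 > c) = erfc(sqrt(c/2)).

def implied_alpha(penalty):
    return math.erfc(math.sqrt(penalty / 2.0))

alpha_aic = implied_alpha(2.0)            # fixed level, about 0.157
alpha_bic = implied_alpha(math.log(100))  # shrinks with n; about 0.032 at n = 100
```

Because BIC's penalty log(n) grows with the sample size, its implied alpha shrinks toward zero, which is one way to see why BIC is the more conservative test.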
A Penalized Method for the Predictive Limit of Learning
This paper studies a penalized model selection technique that asymptotically achieves the optimal expected prediction loss (referred to as the limit of learning) offered by a set of candidate models.
Analysis of Multistate Autoregressive Models
This paper proposes an inference strategy that enables reliable and efficient offline analysis of multistate autoregressive models, and provides theoretical results and algorithms to facilitate the inference procedure.

References

Showing 1-10 of 46 references.
Order selection for same-realization predictions in autoregressive processes
Assume that observations are generated from an infinite-order autoregressive [AR(∞)] process. Shibata [Ann. Statist. 8 (1980) 147-164] considered the problem of choosing a finite-order AR model…
Prediction-focused model selection for autoregressive models
In order to make predictions of future values of a time series, one needs to specify a forecasting model. A popular choice is an autoregressive time-series model, for which the order of…
Parametric or nonparametric? A parametricness index for model selection
In model selection literature, two classes of criteria perform well asymptotically in different situations: Bayesian information criterion (BIC) (as a representative) is consistent in selection when…
PREDICTION/ESTIMATION WITH SIMPLE LINEAR MODELS: IS IT REALLY THAT SIMPLE?
Consider the simple normal linear regression model for estimation/prediction at a new design point. When the slope parameter is not obviously nonzero, hypothesis testing and information criteria can…
Catching up faster by switching sooner: a predictive approach to adaptive estimation with an application to the AIC–BIC dilemma
Prediction and estimation based on Bayesian model selection and model averaging, and derived methods such as the Bayesian information criterion BIC, do not always converge at the fastest…
Finite sample criteria for autoregressive order selection
  • P. Broersen
  • Mathematics, Computer Science
  • IEEE Trans. Signal Process.
  • 2000
The special finite sample information criterion and combined information criterion are necessary because of the increase of the variance of the residual energy for higher model orders that has not been accounted for in other criteria.
Regression and time series model selection in small samples
A bias correction to the Akaike information criterion, AIC, is derived for regression and autoregressive time series models. The correction is of particular use when the sample size is small…
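The bias correction derived in this reference is the corrected AIC (AICc) of Hurvich and Tsai; in its commonly quoted form, for k estimated parameters and sample size n,

```latex
\mathrm{AIC}_c = \mathrm{AIC} + \frac{2k(k+1)}{n - k - 1}
```

The extra term vanishes as n grows, so AICc agrees with AIC in large samples while penalizing model complexity more sharply when n is small relative to k.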
The Focused Information Criterion
A variety of model selection criteria have been developed, of general and specific types. Most of these aim at selecting a single model with good overall properties, for example, formulated via…
Multimodel Inference
The model selection literature has been generally poor at reflecting the deep foundations of the Akaike information criterion (AIC) and at making appropriate comparisons to the Bayesian information…
Can the Strengths of AIC and BIC Be Shared
It is well known that AIC and BIC have different properties in model selection. BIC is consistent in the sense that if the true model is among the candidates, the probability of selecting the true…