It is well known that AIC and BIC have different properties in model selection. BIC is consistent in the sense that if the true model is among the candidates, the probability of selecting the true model approaches 1. On the other hand, AIC is minimax-rate optimal for both parametric and nonparametric cases for estimating the regression function. There are …
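The AIC/BIC contrast above can be sketched numerically. The helpers below use the common Gaussian-likelihood form, n·log(RSS/n) plus a penalty of 2k for AIC and k·log(n) for BIC; the toy RSS values and sample size are made up purely for illustration and are not from the paper:

```python
import math

def aic(rss: float, n: int, k: int) -> float:
    """AIC for a Gaussian model with residual sum of squares `rss`,
    sample size `n`, and `k` free parameters."""
    return n * math.log(rss / n) + 2 * k

def bic(rss: float, n: int, k: int) -> float:
    """BIC replaces AIC's 2k penalty with k*log(n), so extra
    parameters are charged more heavily once n exceeds e^2 (~7.4)."""
    return n * math.log(rss / n) + k * math.log(n)

# Toy comparison: the bigger model fits slightly better (smaller RSS)
# but carries two extra parameters.
n = 200
small = {"rss": 110.0, "k": 2}
big = {"rss": 105.0, "k": 4}

d_aic = aic(big["rss"], n, big["k"]) - aic(small["rss"], n, small["k"])
d_bic = bic(big["rss"], n, big["k"]) - bic(small["rss"], n, small["k"])
```

With these numbers AIC favors the bigger model (negative difference) while BIC favors the smaller one, a small instance of the tension between the two criteria described above.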
Adaptation over different procedures is of practical importance. Different procedures perform well under different conditions. In many practical situations, it is rather hard to assess which conditions are (approximately) satisfied, and hence hard to identify the best procedure for the data at hand. Thus automatic adaptation over various scenarios is desirable. A …
Various discriminant methods have been applied for classification of tumors based on gene expression profiles, among which the nearest neighbor (NN) method has been reported to perform relatively well. Usually cross-validation (CV) is used to select the neighbor size as well as the number of variables for the NN method. However, CV can perform poorly when …
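The CV-for-neighbor-size step mentioned above can be sketched in a few lines; the leave-one-out variant, the toy 1-D data, and the candidate sizes below are my own choices for illustration, not the paper's setup:

```python
import math
from collections import Counter

def knn_predict(train, labels, x, k):
    """Majority vote among the k nearest training points (Euclidean)."""
    order = sorted(range(len(train)), key=lambda i: math.dist(train[i], x))
    votes = Counter(labels[i] for i in order[:k])
    return votes.most_common(1)[0][0]

def loo_cv_error(train, labels, k):
    """Leave-one-out CV misclassification rate for neighbor size k."""
    errors = 0
    for i in range(len(train)):
        rest = train[:i] + train[i + 1:]
        rest_y = labels[:i] + labels[i + 1:]
        if knn_predict(rest, rest_y, train[i], k) != labels[i]:
            errors += 1
    return errors / len(train)

# Toy 1-D data: two separated classes plus one noisy point at 1.0.
X = [[0.0], [0.2], [0.4], [0.6], [2.0], [2.2], [2.4], [1.0]]
y = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Pick the neighbor size with the smallest CV error.
best_k = min([1, 3, 5], key=lambda k: loo_cv_error(X, y, k))
```

On data this small the CV error estimate is itself noisy, which hints at the kind of situation where CV-based selection can go wrong.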
Statistical models (e.g., ARIMA models) have been commonly used in time series data analysis and forecasting. Typically one model is selected based on a selection criterion (e.g., AIC), hypothesis testing, and/or graphical inspections. The selected model is then used to forecast future values. However, model selection is often unstable and may cause an …
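A stripped-down illustration of the select-then-forecast workflow described above, assuming Gaussian AIC and comparing only two candidates, a "no dynamics" AR(0) model and an AR(1) model fit by conditional least squares (the toy series is hypothetical, not from the paper):

```python
import math

def ar1_fit(x):
    """Least-squares AR(1) fit x_t ~ phi * x_{t-1}; returns (phi, rss)."""
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    phi = num / den
    rss = sum((x[t] - phi * x[t - 1]) ** 2 for t in range(1, len(x)))
    return phi, rss

def aic(rss, n, k):
    """Gaussian AIC: n*log(RSS/n) + 2k."""
    return n * math.log(rss / n) + 2 * k

# Toy decaying series, loosely AR(1)-like.
x = [1.0, 0.9, 0.7, 0.8, 0.6, 0.5, 0.55, 0.4, 0.45, 0.3]
n = len(x) - 1  # effective sample size after conditioning on x[0]

phi, rss_ar1 = ar1_fit(x)
rss_ar0 = sum(v ** 2 for v in x[1:])  # AR(0) predicts 0 at every step

# Select the model with the smaller AIC, then forecast one step ahead.
selected = "AR(1)" if aic(rss_ar1, n, 1) < aic(rss_ar0, n, 0) else "AR(0)"
forecast = phi * x[-1] if selected == "AR(1)" else 0.0
```

A small perturbation of the data can flip which model wins, which is the selection instability the abstract points to as a motivation for alternatives such as combining forecasts.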
Risk bounds are derived for regression estimation based on model selection over an unrestricted number of models. While a large list of models provides more flexibility, significant selection bias may occur with bias-correction-based model selection criteria like AIC. We incorporate a model complexity penalty term in AIC to handle the selection bias. Resulting …
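One way such a complexity-penalized criterion might look is sketched below. The specific penalty form, 2·log of the number of same-size candidate models, is my own illustrative assumption; the paper's actual penalty may differ:

```python
import math

def complexity_penalized_aic(rss, n, k, num_models_with_k_terms):
    """Gaussian AIC plus a charge for how many candidate models share
    size k (hypothetical form, for illustration only). A good fit
    found among many same-size rivals is discounted, curbing the
    selection bias that plain AIC incurs over a huge model list."""
    return (n * math.log(rss / n) + 2 * k
            + 2 * math.log(num_models_with_k_terms))

# All-subset selection over p = 20 predictors: a size-10 model
# competes against C(20, 10) = 184756 alternatives of the same size.
p, n, k = 20, 100, 10
plain = n * math.log(40.0 / n) + 2 * k
score = complexity_penalized_aic(rss=40.0, n=n, k=k,
                                 num_models_with_k_terms=math.comb(p, k))
```

The penalized score is strictly larger than plain AIC whenever more than one model of that size is in play, so apparent wins from searching a large list are discounted.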
This paper looks into the issue of evaluating forecast accuracy measures. On the theoretical side, for comparing two forecasters, only when the errors are stochastically ordered is the ranking of the forecasts essentially independent of the form of the chosen measure. We propose well-motivated Kullback-Leibler divergence based accuracy measures. In the …
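As a minimal sketch of scoring probabilistic forecasts with KL divergence (the discrete setting and toy distributions below are my own illustration; the paper's measures may be defined differently):

```python
import math

def kl_divergence(p, q):
    """Discrete KL divergence D(p || q) = sum_i p_i * log(p_i / q_i).
    Assumes q_i > 0 wherever p_i > 0; terms with p_i == 0 contribute 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

truth = [0.5, 0.3, 0.2]        # hypothetical "true" outcome distribution
forecast_a = [0.5, 0.3, 0.2]   # matches the truth exactly
forecast_b = [0.8, 0.1, 0.1]   # overconfident in the first outcome

# Smaller divergence from the truth = more accurate forecast.
score_a = kl_divergence(truth, forecast_a)
score_b = kl_divergence(truth, forecast_b)
```

KL divergence is zero only when the forecast distribution matches the truth and positive otherwise, which makes it a natural baseline for an accuracy measure.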