Model Selection confidence sets by likelihood ratio testing

@article{Zheng2019ModelSC,
  title={Model Selection confidence sets by likelihood ratio testing},
  author={Chao Zheng and Davide Ferrari and Yuhong Yang},
  journal={Statistica Sinica},
  year={2019}
}
The traditional activity of model selection aims at discovering a single model superior to other candidate models. In the presence of pronounced noise, however, multiple models often explain the same data equally well. To resolve this model selection ambiguity, we introduce the general approach of model selection confidence sets (MSCSs) based on likelihood ratio testing. An MSCS is defined as a list of models statistically indistinguishable from the true model at a user-specified… 
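The abstract's core idea, a set of candidate models retained because a likelihood ratio test cannot distinguish them from the best-fitting model, can be sketched as follows. This is a minimal illustration for Gaussian linear models, not the authors' exact procedure: each subset of predictors is tested against the full model with a chi-squared calibrated LRT, and models that are not rejected are kept in the set.

```python
# Hypothetical MSCS sketch: likelihood-ratio test each candidate submodel
# against the full Gaussian linear model; keep those not rejected at level
# alpha. Illustrative only; the paper's procedure may differ in detail.
import itertools
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
n, p = 200, 4
X = rng.normal(size=(n, p))
y = 1.5 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(size=n)  # true model: {0, 1}

def gaussian_loglik(X_sub, y):
    """Maximized Gaussian log-likelihood of an OLS fit (profile over sigma^2)."""
    if X_sub.shape[1] == 0:
        resid = y - y.mean()
    else:
        beta, *_ = np.linalg.lstsq(X_sub, y, rcond=None)
        resid = y - X_sub @ beta
    sigma2 = resid @ resid / len(y)
    return -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1)

full_ll = gaussian_loglik(X, y)
alpha = 0.05
mscs = []
for k in range(p + 1):
    for subset in itertools.combinations(range(p), k):
        ll = gaussian_loglik(X[:, list(subset)], y)
        stat = 2 * (full_ll - ll)     # LRT statistic vs. the full model
        df = p - len(subset)          # difference in model dimension
        if df == 0 or stat <= chi2.ppf(1 - alpha, df):
            mscs.append(subset)       # not rejected: retain in the MSCS

print(mscs)  # with strong signal, typically supersets of {0, 1}
```

With pronounced noise or weak signals, the resulting set grows, which is exactly the ambiguity the MSCS is meant to quantify.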


Enhancing Multi-model Inference with Natural Selection

TLDR
The convergence properties of the genetic algorithm (GA) are studied based on Markov chain theory and used to design an adaptive termination criterion that vastly reduces the computational cost.

Confidence graphs for graphical model selection

TLDR
This article first identifies two nested graphical models, called small and large confidence graphs (SCG and LCG), that trap the true graphical model in between at a given level of confidence, just as the endpoints of a traditional confidence interval capture the population parameter.

Simple measures of uncertainty for model selection

TLDR
Two simple measures of uncertainty for a model selection procedure are developed, similar in spirit to a confidence set in parameter estimation; the second measure focuses on the error in model selection.

Assessing the Global and Local Uncertainty of Scientific Evidence in the Presence of Model Misspecification

TLDR
Non-parametric bootstrap methodologies are developed for estimating the sampling distribution of the evidence estimator under model misspecification, which allows one to determine how secure an evidential statement is.

Discussion on Prior-based Bayesian Information Criterion (PBIC) by M. J. Bayarri, James O. Berger, Woncheol Jang, Surajit Ray, Luis R. Pericchi, and Ingmar Visser

TLDR
This elucidating paper unpacks a dangerous complication that arises when one takes the classic BIC verbatim as an approximation to the marginal likelihood, and proposes the Prior-based Bayesian Information Criterion (PBIC) as a principled correction.

THE PURDUE UNIVERSITY GRADUATE SCHOOL STATEMENT OF DISSERTATION APPROVAL

TLDR
Inspired by the process of natural selection, the GA iteratively performs genetic operations such as selection, crossover, and mutation to update a collection of potential solutions (models) until convergence; an adaptive termination criterion is designed that vastly reduces the computational cost.

Visualization and assessment of model selection uncertainty

Order selection with confidence for finite mixture models

The determination of the number of mixture components (the order) of a finite mixture model has been an enduring problem in statistical inference. We prove that the closed testing principle leads to a… 

Ranking the importance of genetic factors by variable‐selection confidence sets

TLDR
This work addresses the ambiguity related to SNP selection by constructing a list of models—called a variable‐selection confidence set (VSCS)—which contains the collection of all well‐supported SNP combinations at a user‐specified confidence level.

References

Showing 1–10 of 36 references

The Model Confidence Set

TLDR
The paper revisits the inflation forecasting problem posed by Stock and Watson (1999), computes the model confidence set (MCS) for their set of inflation forecasts, and compares a number of Taylor rule regressions to determine the MCS of the best models in terms of in-sample likelihood criteria.

Confidence sets for model selection by F-testing

We introduce the notion of variable selection confidence set (VSCS) for linear regression based on F-testing. Our method identifies the most important variables in a principled way that goes beyond

An Application of Multiple Comparison Techniques to Model Selection

TLDR
Considering the sampling error of the AIC, a set of good models, called a confidence set of models, is constructed rather than choosing a single model; this set includes the minimum-AIC model at an error rate smaller than the specified significance level.

Model Selection and Model Averaging

Guarding from Spurious Discoveries in High Dimension

TLDR
A measure of goodness of spurious fit is defined, which shows how well a response variable can be fitted by an optimally selected subset of covariates under the null model, and a simple and effective LAMM algorithm is proposed to compute it.

Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties

TLDR
In this article, penalized likelihood approaches are proposed to handle variable selection problems, and it is shown that the newly proposed estimators perform as well as the oracle procedure in variable selection; namely, they work as well as if the correct submodel were known.

Nonconcave penalized likelihood with a diverging number of parameters

A class of variable selection procedures for parametric models via nonconcave penalized likelihood was proposed by Fan and Li to simultaneously estimate parameters and select important variables.

Robust Bounded-Influence Tests in General Parametric Models

We introduce robust tests for testing hypotheses in a general parametric model. These are robust versions of the Wald, scores, and likelihood ratio tests and are based on general M

Regression Shrinkage and Selection via the Lasso

TLDR
A new method for estimation in linear models, called the lasso, is proposed; it minimizes the residual sum of squares subject to the sum of the absolute values of the coefficients being less than a constant.
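The lasso objective summarized above is commonly solved by cyclic coordinate descent with soft-thresholding. A minimal sketch follows, assuming standardized predictors and the penalized (Lagrangian) form rather than Tibshirani's original constrained formulation:

```python
# Minimal lasso sketch via cyclic coordinate descent (soft-thresholding).
# Illustrative only: assumes centered, unit-variance columns of X and no
# intercept; lam is the penalty level in the Lagrangian formulation.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * max(abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]   # partial residual for j
            z = X[:, j] @ r / n
            beta[j] = soft_threshold(z, lam) / (X[:, j] @ X[:, j] / n)
    return beta

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
X = (X - X.mean(0)) / X.std(0)                     # standardize columns
y = 3.0 * X[:, 0] + rng.normal(size=100)           # one active predictor
beta = lasso_cd(X, y, lam=0.5)
```

The soft-thresholding step is what drives some coefficients exactly to zero, giving the lasso its simultaneous shrinkage-and-selection behavior.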

Exact and Approximate Stepdown Methods for Multiple Hypothesis Testing

Consider the problem of testing k hypotheses simultaneously. In this article we discuss finite- and large-sample theory of stepdown methods that provide control of the familywise error rate (FWE). To
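The stepdown idea can be illustrated with the classical Holm procedure, a simple FWE-controlling stepdown method (shown here as a generic example, not necessarily the exact methods of the paper): p-values are examined from smallest to largest against progressively less stringent thresholds, stopping at the first failure.

```python
# Holm's stepdown procedure: controls the familywise error rate for k
# simultaneous tests. Generic illustration of the stepdown principle.
def holm_stepdown(pvals, alpha=0.05):
    order = sorted(range(len(pvals)), key=lambda i: pvals[i])
    reject = [False] * len(pvals)
    for step, i in enumerate(order):
        if pvals[i] <= alpha / (len(pvals) - step):
            reject[i] = True
        else:
            break  # stop at the first non-rejection; keep the rest
    return reject

print(holm_stepdown([0.001, 0.03, 0.04]))  # [True, False, False]
```

Because later hypotheses face larger thresholds, stepdown methods reject at least as much as the single-step Bonferroni correction while still controlling the FWE.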