# Model Selection confidence sets by likelihood ratio testing

```bibtex
@article{Zheng2019ModelSC,
  title   = {Model Selection confidence sets by likelihood ratio testing},
  author  = {Chao Zheng and Davide Ferrari and Yuhong Yang},
  journal = {Statistica Sinica},
  year    = {2019}
}
```

The traditional activity of model selection aims at discovering a single model superior to the other candidate models. In the presence of pronounced noise, however, multiple models are often found to explain the same data equally well. To resolve this model selection ambiguity, we introduce the general approach of model selection confidence sets (MSCSs) based on likelihood ratio testing. An MSCS is defined as a list of models statistically indistinguishable from the true model at a user-specified…
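The abstract's idea can be sketched for linear regression: keep every candidate submodel that a likelihood ratio test cannot reject against the full model at level alpha. This is a simplified toy illustration assuming Gaussian errors, candidate models as predictor subsets, and a chi-squared calibration; the paper's actual procedure and reference model differ in detail.

```python
# Toy sketch of a model selection confidence set (MSCS) for linear
# regression: a candidate submodel stays in the set if a likelihood
# ratio test against the full model fails to reject it at level alpha.
# Assumptions (not from the paper): Gaussian errors, no intercept,
# chi-squared calibration with df = difference in model dimension.
import itertools
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, p = 200, 4
X = rng.normal(size=(n, p))
beta = np.array([2.0, 1.5, 0.0, 0.0])   # only the first two predictors matter
y = X @ beta + rng.normal(size=n)

def gaussian_loglik(X_sub, y):
    """Maximized Gaussian log-likelihood of least squares on the given columns."""
    n = len(y)
    if X_sub.shape[1] == 0:
        rss = np.sum(y ** 2)            # empty model: no predictors at all
    else:
        coef, *_ = np.linalg.lstsq(X_sub, y, rcond=None)
        rss = np.sum((y - X_sub @ coef) ** 2)
    return -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)

alpha = 0.05
full_ll = gaussian_loglik(X, y)
mscs = []
for k in range(p + 1):
    for subset in itertools.combinations(range(p), k):
        ll = gaussian_loglik(X[:, list(subset)], y)
        lr = 2 * (full_ll - ll)          # likelihood ratio statistic
        df = p - len(subset)             # difference in model dimension
        if df == 0 or lr <= stats.chi2.ppf(1 - alpha, df):
            mscs.append(subset)

print(mscs)  # all models the LRT cannot distinguish from the full model
```

On this simulated data the set should contain only models that include the two truly relevant predictors, illustrating how the MSCS trims clearly inferior candidates while retaining every statistically indistinguishable one.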

## 9 Citations

### Enhancing Multi-model Inference with Natural Selection

- Computer Science
- 2019

The convergence properties of genetic algorithm (GA) are studied based on the Markov chain theory and used to design an adaptive termination criterion that vastly reduces the computational cost.

### Confidence graphs for graphical model selection

- Mathematics, Computer Science
- Stat. Comput.
- 2021

This article first identifies two nested graphical models—called small and large confidence graphs (SCG and LCG)—trapping the true graphical model in between at a given level of confidence, just as the endpoints of a traditional confidence interval capture the population parameter.

### Simple measures of uncertainty for model selection

- Business
- 2020

Two simple measures of uncertainty for a model selection procedure are developed, similar in spirit to a confidence set in parameter estimation; the second measure focuses on the error in model selection.

### Assessing the Global and Local Uncertainty of Scientific Evidence in the Presence of Model Misspecification

- Economics
- Frontiers in Ecology and Evolution
- 2021

Non-parametric bootstrap methodologies are developed for estimating the sampling distribution of the evidence estimator under model misspecification, which allow the authors to determine how secure they are in their evidential statement.

### Discussion on Prior-based Bayesian Information Criterion (PBIC) by M. J. Bayarri, James O. Berger, Woncheol Jang, Surajit Ray, Luis R. Pericchi, and Ingmar Visser

- Computer Science
- Statistical Theory and Related Fields
- 2019

This elucidating paper unpacked a dangerous complication when one takes the classic BIC verbatim as an approximation to the marginal likelihood, and proposed the Prior-based Bayesian Information Criterion (PBIC) as a principled correction.

### THE PURDUE UNIVERSITY GRADUATE SCHOOL STATEMENT OF DISSERTATION APPROVAL

- Computer Science
- 2019

Inspired by the process of natural selection, GA performs genetic operations such as selection, crossover, and mutation iteratively to update a collection of potential solutions (models) until convergence; an adaptive termination criterion is designed that vastly reduces the computational cost.

### Visualization and assessment of model selection uncertainty

- Computational Statistics & Data Analysis
- 2022

### Order selection with confidence for finite mixture models

- Mathematics
- 2021

The determination of the number of mixture components (the order) of a finite mixture model has been an enduring problem in statistical inference. We prove that the closed testing principle leads to a…

### Ranking the importance of genetic factors by variable‐selection confidence sets

- Biology
- Journal of the Royal Statistical Society: Series C (Applied Statistics)
- 2019

This work addresses the ambiguity related to SNP selection by constructing a list of models—called a variable‐selection confidence set (VSCS)—which contains the collection of all well‐supported SNP combinations at a user‐specified confidence level.

## References

*Showing 1–10 of 36 references.*

### The Model Confidence Set

- Economics, Mathematics
- 2010

The paper revisits the inflation forecasting problem posed by Stock and Watson (1999), computes the model confidence set (MCS) for their set of inflation forecasts, and compares a number of Taylor rule regressions to determine the MCS of the best ones in terms of in-sample likelihood criteria.

### Confidence sets for model selection by F -testing

- Mathematics
- 2015

We introduce the notion of variable selection confidence set (VSCS) for linear regression based on F-testing. Our method identifies the most important variables in a principled way that goes beyond…

### An Application of Multiple Comparison Techniques to Model Selection

- Computer Science
- 1998

Considering the sampling error of the AIC, a set of good models—called a confidence set of models—is constructed rather than choosing a single model; it includes the minimum E{AIC} model with an error rate smaller than the specified significance level.

### Guarding from Spurious Discoveries in High Dimension

- Computer Science
- 2015

A measure of goodness of spurious fit is defined, which shows how well a response variable can be fitted by an optimally selected subset of covariates under the null model, and a simple and effective LAMM algorithm is proposed to compute it.

### Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties

- Mathematics, Computer Science
- 2001

In this article, penalized likelihood approaches are proposed to handle variable selection problems, and it is shown that the newly proposed estimators perform as well as the oracle procedure in variable selection; namely, they work as well as if the correct submodel were known.

### Nonconcave penalized likelihood with a diverging number of parameters

- Mathematics
- 2004

A class of variable selection procedures for parametric models via nonconcave penalized likelihood was proposed by Fan and Li to simultaneously estimate parameters and select important variables.…

### Robust Bounded-Influence Tests in General Parametric Models

- Mathematics
- 1994

We introduce robust tests for testing hypotheses in a general parametric model. These are robust versions of the Wald, scores, and likelihood ratio tests and are based on general M…

### Regression Shrinkage and Selection via the Lasso

- Computer Science
- 1996

A new method for estimation in linear models called the lasso, which minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant, is proposed.
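The constrained form described above is usually solved in its equivalent penalized (Lagrangian) form. A minimal sketch via cyclic coordinate descent, a standard algorithm for this objective (not the computational approach of the original 1996 paper), with toy data chosen for illustration:

```python
# Lasso sketch: minimize (1/2)||y - Xb||^2 + lam * ||b||_1 by cyclic
# coordinate descent with soft-thresholding. The penalized form is
# equivalent to the constrained form in the paper for a matching
# constraint level. Toy data and lam value are illustrative assumptions.
import numpy as np

def soft_threshold(z, t):
    """Shrink z toward zero by t; values inside [-t, t] become exactly 0."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    n, p = X.shape
    b = np.zeros(p)
    col_sq = np.sum(X ** 2, axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ b + X[:, j] * b[j]            # partial residual
            b[j] = soft_threshold(X[:, j] @ r_j, lam) / col_sq[j]
    return b

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
beta = np.array([3.0, 0.0, 0.0, 1.5, 0.0])              # sparse truth
y = X @ beta + rng.normal(size=100)

b_hat = lasso_cd(X, y, lam=50.0)
print(np.round(b_hat, 2))  # irrelevant coefficients are shrunk exactly to zero
```

The soft-thresholding step is what produces exact zeros, which is the lasso's selection property: coefficients whose partial correlation with the residual falls below the penalty are dropped from the model entirely.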

### Exact and Approximate Stepdown Methods for Multiple Hypothesis Testing

- Mathematics
- 2003

Consider the problem of testing k hypotheses simultaneously. In this article we discuss finite- and large-sample theory of stepdown methods that provide control of the familywise error rate (FWE). To…