Consistent High-Dimensional Bayesian Variable Selection via Penalized Credible Regions

@article{Bondell2012ConsistentHB,
  title={Consistent High-Dimensional Bayesian Variable Selection via Penalized Credible Regions},
  author={Howard D. Bondell and Brian J. Reich},
  journal={Journal of the American Statistical Association},
  year={2012},
  volume={107},
  pages={1610--1624}
}
  • H. Bondell, B. Reich
  • Published 14 August 2012
  • Journal of the American Statistical Association
For high-dimensional data, particularly when the number of predictors greatly exceeds the sample size, selection of relevant predictors for regression is a challenging problem. Methods such as sure screening, forward selection, or penalized regressions are commonly used. Bayesian variable selection methods place prior distributions on the parameters along with a prior over model space, or equivalently, a mixture prior on the parameters having mass at zero. Since exhaustive enumeration is not… 
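As background for the mixture prior described above (generic notation, not necessarily the paper's own), the two-group "spike-and-slab" form places a point mass at zero alongside a continuous slab:

\pi(\beta_j) = (1 - w)\,\delta_0(\beta_j) + w\, g(\beta_j), \qquad j = 1, \dots, p,

where \delta_0 is a point mass at zero, g is a continuous slab density, and w is the prior inclusion probability; a prior over model space is equivalent to a joint prior on the binary inclusion indicators.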
High-dimensional variable selection via penalized credible regions with global-local shrinkage priors
The method of Bayesian variable selection via penalized credible regions separates model fitting from variable selection. The idea is to search for the sparsest solution within the joint posterior credible region.
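In symbols (a sketch of the penalized credible region idea in generic notation, not any one paper's exact display): given posterior mean \hat{\beta} and posterior covariance \hat{\Sigma}, one seeks the sparsest vector inside a joint credible set,

\min_{\beta} \|\beta\|_0 \quad \text{subject to} \quad (\beta - \hat{\beta})^\top \hat{\Sigma}^{-1} (\beta - \hat{\beta}) \le c_\alpha,

where c_\alpha sets the credible level. Because the \ell_0 objective is intractable, it is typically replaced by a weighted \ell_1 surrogate such as \sum_j |\beta_j| / \hat{\beta}_j^2, so that standard convex solvers apply and varying c_\alpha traces out a solution path.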
Variable Selection via Penalized Credible Regions with Dirichlet–Laplace Global-Local Shrinkage Priors
TLDR
This paper incorporates global-local priors into the credible region selection framework of Bayesian variable selection, and introduces a new method to tune hyperparameters in prior distributions for linear regression.
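For reference, one common statement of the Dirichlet–Laplace prior (following Bhattacharya et al.; given here as background, not as this paper's exact specification) is the hierarchy

\beta_j \mid \phi_j, \tau \sim \mathrm{DE}(\phi_j \tau), \qquad (\phi_1, \dots, \phi_p) \sim \mathrm{Dirichlet}(a, \dots, a), \qquad \tau \sim \mathrm{Gamma}(pa, 1/2),

where \mathrm{DE}(b) denotes the double-exponential (Laplace) distribution with scale b: the \phi_j act as local shrinkage weights and \tau as a global scale.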
Efficient Bayesian Regularization for Graphical Model Selection.
TLDR
This work proposes a novel graphical model selection approach for high-dimensional settings in which the dimension increases with the sample size, by decoupling model fitting from covariance selection, and applies it to a cancer genomics data example.
Bayesian Variable Selection Utilizing Posterior Probability Credible Intervals
TLDR
This work proposes a novel variable selection algorithm that utilizes the parameters' credible intervals to select the variables to be kept in the model, and shows that the algorithm yields results comparable to or better than DIC in a simulation study as well as in a real-world example.
High-Dimensional Posterior Consistency for Hierarchical Non-Local Priors in Regression
TLDR
This paper introduces a fully Bayesian approach with the pMOM nonlocal prior, in which an appropriate Inverse-Gamma prior is placed on the tuning parameter, yielding a more robust model that is comparatively immune to misspecification of the scale parameter.
Priors for Bayesian Shrinkage and High-Dimensional Model Selection
TLDR
This dissertation investigates the asymptotic form of the marginal likelihood based on nonlocal priors and shows that it attains a unique penalty term that adapts to the strength of the signal of the corresponding variable in the model, and remarks that this term cannot be attained from local priors such as Gaussian prior densities.
Bayesian variable selection with shrinking and diffusing priors
We consider a Bayesian approach to variable selection in the presence of high dimensional covariates based on a hierarchical model that places prior distributions on the regression coefficients as well as on the model space.
Scalable Bayesian Variable Selection Using Nonlocal Prior Densities in Ultrahigh-dimensional Settings.
TLDR
It is found that Bayesian variable selection procedures based on nonlocal priors are competitive with all other procedures in a range of simulation scenarios, and this favorable performance is explained through a theoretical examination of their consistency properties.
Bayesian Sparse Global-Local Shrinkage Regression for Selection of Grouped Variables
TLDR
This paper presents a Bayesian grouped model with continuous global-local shrinkage priors to handle complex group hierarchies, including overlapping and multilevel group structures, and provides an alternative degrees-of-freedom estimator for sparse Bayesian linear models that takes into account the effect of shrinkage on the model coefficients.
…

References

Showing 1–10 of 58 references
Bayesian Variable Selection in Clustering High-Dimensional Data
TLDR
This article formulates the clustering problem in terms of a multivariate normal mixture model with an unknown number of components and uses the reversible-jump Markov chain Monte Carlo technique to define a sampler that moves between spaces of different dimensions.
Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
TLDR
In this article, penalized likelihood approaches are proposed to handle variable selection problems, and it is shown that the newly proposed estimators perform as well as the oracle procedure in variable selection; namely, they work as well as if the correct submodel were known.
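For context, the SCAD penalty central to this reference is usually defined through its derivative (for \theta > 0, with a > 2, commonly a = 3.7):

p'_\lambda(\theta) = \lambda \left\{ I(\theta \le \lambda) + \frac{(a\lambda - \theta)_+}{(a - 1)\lambda}\, I(\theta > \lambda) \right\},

which applies an \ell_1-type penalty to small coefficients and leaves large ones essentially unpenalized, the behavior behind the oracle property.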
APPROACHES FOR BAYESIAN VARIABLE SELECTION
This paper describes and compares various hierarchical mixture prior formulations of variable selection uncertainty in normal linear regression models. These include the nonconjugate SSVS formulation…
Calibration and empirical Bayes variable selection
For the problem of variable selection for the normal linear model, selection criteria such as AIC, Cp, BIC, and RIC have fixed dimensionality penalties. Such criteria are shown to correspond to…
Bayes model averaging with selection of regressors
Summary. When a number of distinct models contend for use in prediction, the choice of a single model can offer rather unstable predictions. In regression, stochastic search variable selection with…
Adaptive Lasso for sparse high-dimensional regression models
TLDR
The adaptive Lasso has the oracle property even when the number of covariates is much larger than the sample size, and under a partial orthogonality condition, in which the covariates with zero coefficients are only weakly correlated with the covariates with nonzero coefficients, marginal regression can be used to obtain the initial estimator.
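A minimal sketch of the two-step adaptive Lasso in Python (using scikit-learn; the function name, the ridge initial estimator, and the weight exponent gamma are illustrative choices, not prescribed by this reference):

import numpy as np
from sklearn.linear_model import Lasso, Ridge

def adaptive_lasso(X, y, alpha=0.1, gamma=1.0):
    # Step 1: initial estimator. Ridge is a convenient choice that stays
    # well-defined when p > n, where ordinary least squares is unavailable.
    beta_init = Ridge(alpha=1.0).fit(X, y).coef_
    weights = 1.0 / (np.abs(beta_init) ** gamma + 1e-8)  # adaptive weights

    # Step 2: weighted Lasso via column rescaling. Fitting the Lasso on
    # X / weights and dividing the coefficients by the weights afterwards
    # is equivalent to applying the penalty alpha * sum_j weights_j * |beta_j|.
    fit = Lasso(alpha=alpha).fit(X / weights, y)
    return fit.coef_ / weights

# Toy usage: n = 50 observations, p = 100 predictors, 5 true signals.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 100))
beta_true = np.zeros(100)
beta_true[:5] = 2.0
y = X @ beta_true + rng.standard_normal(50)
print(np.flatnonzero(adaptive_lasso(X, y)))  # indices of selected variables

The column-rescaling trick avoids a custom solver: a Lasso on the rescaled design with a single penalty level is mathematically identical to the weighted-penalty problem.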
Variable selection via Gibbs sampling
Abstract A crucial problem in building a multiple regression model is the selection of predictors to include. The main thrust of this article is to propose and develop a procedure that uses…
One-step Sparse Estimates in Nonconcave Penalized Likelihood Models.
TLDR
A new unified algorithm based on the local linear approximation (LLA) is proposed for maximizing the penalized likelihood for a broad class of concave penalty functions, and it is shown that, if the regularization parameter is appropriately chosen, the one-step LLA estimates enjoy the oracle properties given good initial estimators.
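The one-step construction rests on the local linear approximation of the penalty around an initial estimate \beta^{(0)} (generic notation):

p_\lambda(|\beta_j|) \approx p_\lambda(|\beta_j^{(0)}|) + p'_\lambda(|\beta_j^{(0)}|)\,\bigl(|\beta_j| - |\beta_j^{(0)}|\bigr),

so each iteration reduces to a weighted \ell_1 (Lasso-type) problem with weights p'_\lambda(|\beta_j^{(0)}|).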
High-dimensional graphs and variable selection with the Lasso
TLDR
It is shown that neighborhood selection with the Lasso is a computationally attractive alternative to standard covariance selection for sparse high-dimensional graphs; estimating the neighborhood of each node separately reduces the problem to variable selection for Gaussian linear models.
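A minimal sketch of neighborhood selection in Python (using scikit-learn; the function name, the regularization level, and the "AND" combination rule are illustrative choices):

import numpy as np
from sklearn.linear_model import Lasso

def neighborhood_selection(X, alpha=0.1):
    # Regress each variable on all the others with the Lasso; the nonzero
    # coefficients of regression j mark candidate neighbors of node j.
    n, p = X.shape
    adj = np.zeros((p, p), dtype=bool)
    for j in range(p):
        others = np.delete(np.arange(p), j)
        coef = Lasso(alpha=alpha).fit(X[:, others], X[:, j]).coef_
        adj[j, others] = coef != 0
    # "AND" rule: keep an edge only if both nodes select each other.
    return adj & adj.T

# Toy usage: a three-node chain 0 - 1 - 2 (an AR(1)-type dependence whose
# precision matrix is tridiagonal, so there is no edge between 0 and 2).
rng = np.random.default_rng(1)
z = rng.standard_normal((500, 3))
x0 = z[:, 0]
x1 = 0.8 * x0 + z[:, 1]
x2 = 0.8 * x1 + z[:, 2]
print(neighborhood_selection(np.column_stack([x0, x1, x2]), alpha=0.1))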
Mixtures of g Priors for Bayesian Variable Selection
Zellner's g prior remains a popular conventional prior for use in Bayesian variable selection, despite several undesirable consistency issues. In this article we study mixtures of g priors as an alternative to default g priors…
…