Posterior consistency in linear models under shrinkage priors

@article{armagan2013posterior,
  title={Posterior consistency in linear models under shrinkage priors},
  author={Artin Armagan and David B. Dunson and Jaeyong Lee and Waheed Uz Zaman Bajwa and Nate Strawn},
  journal={Biometrika},
  year={2013}
}
We investigate the asymptotic behaviour of posterior distributions of regression coefficients in high-dimensional linear models as the number of dimensions grows with the number of observations. We show that the posterior distribution concentrates in neighbourhoods of the true parameter under simple sufficient conditions. These conditions hold under popular shrinkage priors given some sparsity assumptions. Copyright 2013, Oxford University Press. 
High-dimensional multivariate posterior consistency under global-local shrinkage priors
This paper derives sufficient conditions for posterior consistency under the Bayesian multivariate linear regression framework and proves that the method achieves posterior consistency even when p > n and even when p grows at a nearly exponential rate with the sample size.
Ultra high-dimensional multivariate posterior contraction rate under shrinkage priors
In recent years, shrinkage priors have received much attention in high-dimensional data analysis from a Bayesian perspective. Compared with widely used spike-and-slab priors, shrinkage priors have …
Contraction properties of shrinkage priors in logistic regression
Bayesian shrinkage priors have received a lot of attention recently because of their efficiency in computation and accuracy in estimation and variable selection. In this paper, we study the …
Bayesian high-dimensional semi-parametric inference beyond sub-Gaussian errors
We consider a sparse linear regression model with unknown symmetric error under the high-dimensional setting. The true error distribution is assumed to belong to the locally $\beta$-Hölder class with …
Nearly optimal Bayesian Shrinkage for High Dimensional Regression
During the past decade, shrinkage priors have received much attention in Bayesian analysis of high-dimensional data. In this paper, we study the problem for high-dimensional linear regression models.
High-dimensional variable selection via penalized credible regions with global-local shrinkage priors
The method of Bayesian variable selection via penalized credible regions separates model fitting and variable selection. The idea is to search for the sparsest solution within the joint posterior …
Bayes Variable Selection in Semiparametric Linear Models
This work proposes a semiparametric g-prior that incorporates an unknown matrix of cluster-allocation indicators. Bayes-factor and variable-selection consistency are shown to hold under a class of proper priors on g, even when the number of candidate predictors p is allowed to increase much faster than the sample size n, under sparsity assumptions on the true model size.
High-Dimensional Posterior Consistency in Bayesian Vector Autoregressive Models
A VAR model is considered with two prior choices for the autoregressive coefficient matrix: a nonhierarchical matrix-normal prior and a hierarchical prior corresponding to an arbitrary scale mixture of normals. Posterior consistency is established for both priors under standard regularity assumptions.
Fully Bayesian Penalized Regression with a Generalized Bridge Prior
This work proposes a fully Bayesian approach that accommodates both sparse and dense settings, uses a model-averaging approach to eliminate the nuisance penalty parameters, and performs inference through the marginal posterior distribution of the regression coefficients.
Data augmentation for non-Gaussian regression models using variance-mean mixtures
We use the theory of normal variance-mean mixtures to derive a data-augmentation scheme for a class of common regularization problems. This generalizes existing theory on normal variance mixtures for …
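A concrete instance of the variance-mixture machinery these augmentation schemes build on: the Laplace (double-exponential) distribution arises as a scale mixture of normals, with an exponential mixing distribution on the variance. A minimal numerical check of this known identity, with illustrative parameter values:

```python
import math

lam = 1.0  # Laplace rate parameter (illustrative value)
x = 0.8    # point at which to evaluate the marginal density

def normal_pdf(x, var):
    return math.exp(-x * x / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def exp_pdf(t, rate):
    return rate * math.exp(-rate * t)

# Marginal density of beta when beta | tau ~ N(0, tau) and
# tau ~ Exponential(rate = lam^2 / 2): trapezoidal quadrature over tau.
step = 1e-3
taus = [1e-6 + k * step for k in range(50001)]
vals = [normal_pdf(x, t) * exp_pdf(t, lam**2 / 2.0) for t in taus]
mixture = sum((a + b) / 2.0 * step for a, b in zip(vals, vals[1:]))

# Target: Laplace density (lam / 2) * exp(-lam * |x|).
laplace = (lam / 2.0) * math.exp(-lam * abs(x))
print(abs(mixture - laplace) < 1e-3)
```

The augmentation schemes exploit exactly this structure: conditioning on the latent variance turns a non-Gaussian prior into a Gaussian one.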


Asymptotic normality of posterior distributions in high-dimensional linear models
We study consistency and asymptotic normality of posterior distributions of the regression coefficient in a linear model when the dimension of the parameter grows with increasing sample size. Under …
Inference with normal-gamma prior distributions in regression problems
This paper considers the effects of placing an absolutely continuous prior distribution on the regression coefficients of a linear model. We show that the posterior expectation is a matrix-shrunken …
Asymptotics for lasso-type estimators
We consider the asymptotic behavior of regression estimators that minimize the residual sum of squares plus a penalty proportional to Σ_j |β_j|^γ for some γ > 0. These estimators include the Lasso as a …
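The bridge penalty family above interpolates between familiar penalties; a minimal sketch, with toy coefficient values chosen for illustration:

```python
def bridge_penalty(beta, gamma):
    # sum_j |beta_j|^gamma for gamma > 0: gamma = 1 recovers the lasso
    # penalty, gamma = 2 the ridge penalty; gamma < 1 is non-convex.
    return sum(abs(b) ** gamma for b in beta)

beta = [1.5, -0.5, 0.0]
print(bridge_penalty(beta, 1.0))  # lasso penalty: 2.0
print(bridge_penalty(beta, 2.0))  # ridge penalty: 2.5
```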
The properties of the maximum a posteriori estimator are investigated, as sparse estimation plays an important role in many problems; connections with some well-established regularization procedures are revealed, and some asymptotic results are shown.
Bernstein von Mises Theorems for Gaussian Regression with increasing number of regressors
This paper brings a contribution to the Bayesian theory of nonparametric and semiparametric estimation. We are interested in the asymptotic normality of the posterior distribution in Gaussian linear …
Generalized Beta Mixtures of Gaussians
A new class of normal scale mixtures is proposed through a novel generalized beta distribution that encompasses many interesting priors as special cases, and a class of variational Bayes approximations is developed that scales efficiently to the types of truly massive data sets now encountered routinely.
Bayesian lasso regression
The lasso estimate for linear regression corresponds to a posterior mode when independent, double-exponential prior distributions are placed on the regression coefficients. This paper introduces new …
Mixtures of g Priors for Bayesian Variable Selection
Zellner's g prior remains a popular conventional prior for use in Bayesian variable selection, despite several undesirable consistency issues. In this article we study mixtures of g priors as an …
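For intuition on how a fixed g prior shrinks estimates: in the single-predictor case with unit error variance, the posterior mean under Zellner's g prior is the OLS estimate scaled by g/(1+g). A minimal sketch; the data and g value are hypothetical:

```python
import math

# Toy single-predictor data (illustrative values; unit error variance).
x = [1.0, 2.0, 3.0]
y = [1.2, 1.9, 3.3]
g = 10.0  # g-prior scale

sxx = sum(xi * xi for xi in x)
sxy = sum(xi * yi for xi, yi in zip(x, y))
beta_ols = sxy / sxx

# Under the prior beta ~ N(0, g / sxx), the posterior precision is
# sxx * (1 + 1/g), so the posterior mean is:
beta_post = sxy / (sxx * (1.0 + 1.0 / g))

# This equals the OLS estimate shrunk by the factor g / (1 + g).
assert math.isclose(beta_post, (g / (1.0 + g)) * beta_ols)
```

The shrinkage factor g/(1+g) is what makes the choice of g delicate: a fixed g shrinks every coefficient by the same amount regardless of the data, which is one motivation for the mixtures of g priors studied in this entry.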
The Bayesian Lasso
The Lasso estimate for linear regression parameters can be interpreted as a Bayesian posterior mode estimate when the regression parameters have independent Laplace (i.e., double-exponential) priors.
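The lasso–MAP correspondence stated in this and the Bayesian lasso entry above can be checked directly: with unit error variance, the negative log-posterior under independent Laplace priors equals the lasso objective up to beta-free constants. A minimal sketch; the toy data and penalty value are illustrative:

```python
import math

# Hypothetical toy data (all values illustrative).
X = [[1.0, 0.5], [0.3, 2.0], [1.5, 0.2]]
y = [1.0, 0.4, 1.2]
sigma2 = 1.0  # known error variance
lam = 0.7     # lasso penalty / Laplace rate

def rss(beta):
    return sum((yi - sum(xj * bj for xj, bj in zip(xi, beta))) ** 2
               for xi, yi in zip(X, y))

def lasso_objective(beta):
    # (1/2) * RSS + lambda * ||beta||_1
    return 0.5 * rss(beta) + lam * sum(abs(b) for b in beta)

def neg_log_posterior(beta):
    # -log p(beta | y) with N(0, sigma2) errors and independent
    # Laplace(0, sigma2/lam) priors, dropping beta-free constants:
    #   (1 / (2 sigma2)) * RSS + (lam / sigma2) * ||beta||_1
    return 0.5 * rss(beta) / sigma2 + (lam / sigma2) * sum(abs(b) for b in beta)

# With sigma2 = 1 the two objectives coincide pointwise, so they share
# the same minimizer: the lasso estimate is the posterior mode.
for beta in ([0.0, 0.0], [0.5, -0.2], [1.0, 0.1]):
    assert math.isclose(lasso_objective(beta), neg_log_posterior(beta))
```

Note that this equivalence only concerns the posterior mode; the Bayesian lasso papers listed here are about exploring the full posterior rather than the MAP point estimate.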
Variational Bridge Regression
  • A. Armagan
  • Mathematics, Computer Science
  • 2009
Results suggest that the proposed method yields an estimator that performs significantly better in sparse underlying setups than the existing state-of-the-art procedures in both n > p and p > n scenarios.