Posterior asymptotic normality for an individual coordinate in high-dimensional linear regression

@article{Yang2019PosteriorAN,
  title={Posterior asymptotic normality for an individual coordinate in high-dimensional linear regression},
  author={Dana Yang},
  journal={Electronic Journal of Statistics},
  year={2019}
}
We consider the sparse high-dimensional linear regression model $Y=Xb+\epsilon$ where $b$ is a sparse vector. For the Bayesian approach to this problem, many authors have considered the behavior of the posterior distribution when, in truth, $Y=X\beta+\epsilon$ for some given $\beta$. There have been numerous results about the rate at which the posterior distribution concentrates around $\beta$, but few results about the shape of that posterior distribution. We propose a prior distribution for… 
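The model is straightforward to instantiate. Below is a minimal sketch of the setup, assuming a Gaussian design and Gaussian noise purely for illustration (the paper's exact conditions on $X$ and $\epsilon$ may differ):

  import numpy as np

  rng = np.random.default_rng(0)
  n, p, s = 100, 500, 5                 # n samples, p >> n coordinates, sparsity s

  X = rng.standard_normal((n, p))       # design matrix
  beta = np.zeros(p)
  beta[:s] = rng.standard_normal(s)     # s-sparse true coefficient vector
  Y = X @ beta + rng.standard_normal(n) # Y = X beta + eps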
Bayesian inference in high-dimensional models
TLDR
Properties of Bayesian and related methods for several high-dimensional models, such as the many-normal-means problem, linear regression, generalized linear models, and Gaussian and non-Gaussian graphical models, are reviewed.
Bayesian high-dimensional semi-parametric inference beyond sub-Gaussian errors
TLDR
Under a sub-Gaussianity assumption on the true score function, strong model selection consistency for the regression coefficients is obtained, which in turn establishes the frequentist validity of credible sets.
Bayesian sparse linear regression with unknown symmetric error
TLDR
Bayesian procedures for sparse linear regression are studied when the unknown error distribution is endowed with a nonparametric prior; a symmetrized Dirichlet process mixture of Gaussians is placed as a prior on the error density.

References

Showing 1-10 of 13 references
Confidence intervals for low dimensional parameters in high dimensional linear models
TLDR
The proposed method turns the regression data into an approximately Gaussian sequence of point estimators of individual regression coefficients, which can be used to select variables after proper thresholding; the accuracy of the coverage probability and other desirable properties of the proposed confidence intervals are demonstrated.
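A rough sketch of this projection-and-debias idea follows: a single lasso coordinate is corrected using the residual from a lasso regression of column $j$ on the remaining columns, yielding an approximately Gaussian pivot. The function name, the tuning levels lam and lam_j, and the plug-in noise level sigma are illustrative assumptions, not the paper's prescriptions.

  import numpy as np
  from sklearn.linear_model import Lasso

  def debiased_coordinate(X, Y, j, lam=0.1, lam_j=0.1, sigma=1.0):
      # Initial lasso fit on the full model.
      beta_hat = Lasso(alpha=lam).fit(X, Y).coef_
      # Score z_j: residual from a lasso regression of column j on the rest.
      X_rest = np.delete(X, j, axis=1)
      z = X[:, j] - Lasso(alpha=lam_j).fit(X_rest, X[:, j]).predict(X_rest)
      # One-step bias correction for coordinate j.
      b_j = beta_hat[j] + z @ (Y - X @ beta_hat) / (z @ X[:, j])
      se = sigma * np.linalg.norm(z) / abs(z @ X[:, j])
      return b_j, (b_j - 1.96 * se, b_j + 1.96 * se)  # point estimate, 95% CI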
A general framework for Bayes structured linear models
TLDR
This paper provides a unified treatment of Bayesian high-dimensional statistics and Bayesian nonparametrics in a general framework of structured linear models, via a proposed two-step model-selection prior, and proves a general posterior contraction theorem in an abstract setting.
BAYESIAN LINEAR REGRESSION WITH SPARSE PRIORS
TLDR
Under compatibility conditions on the design matrix, the posterior distribution is shown to contract at the optimal rate for recovery of the unknown sparse vector, and to give optimal prediction of the response vector.
Regression Shrinkage and Selection via the Lasso
TLDR
A new method for estimation in linear models, called the lasso, is proposed: it minimizes the residual sum of squares subject to the sum of the absolute values of the coefficients being less than a constant.
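The constrained form described above is equivalent to the familiar penalized form $\frac{1}{2n}\|Y-Xb\|_2^2+\alpha\|b\|_1$, which is what sklearn solves. A minimal sketch (arbitrary simulated data and an arbitrary penalty level alpha) shows the characteristic behavior that most coefficients are set exactly to zero:

  import numpy as np
  from sklearn.linear_model import Lasso

  rng = np.random.default_rng(1)
  X = rng.standard_normal((100, 200))
  beta = np.zeros(200)
  beta[:3] = [2.0, -1.5, 1.0]                # 3-sparse truth
  Y = X @ beta + rng.standard_normal(100)

  fit = Lasso(alpha=0.1).fit(X, Y)
  print(np.flatnonzero(fit.coef_))           # lasso zeroes out most coordinates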
The Dantzig selector: Statistical estimation when P is much larger than n
TLDR
The main point of the paper is accurate statistical estimation in high dimensions, covering theoretical, practical, and computational issues, together with a comparison of the Dantzig selector and the lasso.
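The Dantzig selector is itself a linear program: $\min\|b\|_1$ subject to $\|X^\top(Y-Xb)\|_\infty\le\lambda$. A hedged sketch follows, solving the LP with scipy's linprog after splitting $b$ into positive and negative parts; the function name and the choice of lam are illustrative.

  import numpy as np
  from scipy.optimize import linprog

  def dantzig_selector(X, Y, lam):
      p = X.shape[1]
      G, g = X.T @ X, X.T @ Y
      # Variables w = [u; v] with b = u - v and u, v >= 0, so ||b||_1 = sum(w).
      c = np.ones(2 * p)
      # ||X^T (Y - X b)||_inf <= lam becomes two stacked linear inequalities.
      A = np.block([[G, -G], [-G, G]])
      b = np.concatenate([lam + g, lam - g])
      res = linprog(c, A_ub=A, b_ub=b, bounds=(0, None))
      u, v = res.x[:p], res.x[p:]
      return u - v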
A User's Guide to Measure-Theoretic Probability
TLDR
The authors’ theory of estimation is based on a geometric approach and seems closely related to fiducial intervals as developed in several articles by Neyman, and may be looked upon as a first step in reconciling classical statistics with Bayes statistics.
SIMULTANEOUS ANALYSIS OF LASSO AND DANTZIG SELECTOR
We show that, under a sparsity scenario, the Lasso estimator and the Dantzig selector exhibit similar behavior. For both methods, we derive, in parallel, oracle inequalities for the prediction risk
Restricted Eigenvalue Conditions on Subgaussian Random Matrices
TLDR
This paper associates the RE condition (Bickel-Ritov-Tsybakov 09) with the complexity of a subset of the sphere in $\mathbb{R}^p$, and shows that a class of random matrices with independent rows, but not necessarily independent columns, satisfies the RE condition when the sample size is above a certain lower bound.
Uniform Uncertainty Principle for Bernoulli and Subgaussian Ensembles
TLDR
The paper considers random matrices with independent subgaussian columns and provides a new elementary proof of the Uniform Uncertainty Principle for such matrices; the proof combines a simple measure-concentration argument and a covering argument, both standard tools of high-dimensional convexity.
...