Scalable Bayesian Regression in High Dimensions With Multiple Data Sources

@article{Perrakis2019ScalableBR,
  title={Scalable Bayesian Regression in High Dimensions With Multiple Data Sources},
  author={Konstantinos Perrakis and Sach Mukherjee and the Alzheimer’s Disease Neuroimaging Initiative},
  journal={Journal of Computational and Graphical Statistics},
  year={2019},
  volume={29},
  pages={28--39}
}
Abstract

Applications of high-dimensional regression often involve multiple sources or types of covariates. We propose methodology for this setting, emphasizing the “wide data” regime with large total dimensionality p and sample size n ≪ p. We focus on a flexible ridge-type prior with shrinkage levels that are specific to each data type or source and that are set automatically by empirical Bayes. All estimation, including setting of shrinkage levels, is formulated mainly in terms of inner product…
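As a rough sketch of the setup (the notation here is illustrative; the paper’s exact parameterization may differ), suppose the covariates are partitioned by source, X = [X_1, …, X_S], with one ridge penalty λ_s per source. The prior and the resulting posterior-mean estimator can be written as
\[
\beta_j \mid \sigma^2 \sim \mathcal{N}\!\bigl(0, \sigma^2 / \lambda_{s(j)}\bigr), \qquad
\hat{\beta} = \bigl(X^\top X + \Lambda\bigr)^{-1} X^\top y, \qquad
\Lambda = \operatorname{diag}\bigl(\lambda_{s(1)}, \dots, \lambda_{s(p)}\bigr),
\]
and, by the Woodbury identity,
\[
\hat{\beta} = \Lambda^{-1} X^\top \bigl(X \Lambda^{-1} X^\top + I_n\bigr)^{-1} y,
\qquad
X \Lambda^{-1} X^\top = \sum_{s=1}^{S} \lambda_s^{-1} X_s X_s^\top ,
\]
so the heavy computation reduces to n × n matrices built from inner products of the data, which is what keeps the approach scalable when p ≫ n.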

High-dimensional regression in practice: an empirical study of finite-sample prediction, variable selection and ranking

TLDR
A large-scale comparison of penalized regression methods is presented, with no unambiguous winner across all scenarios or goals, even in this restricted setting where all data align well with the assumptions underlying the methods.

Fast Cross-validation for Multi-penalty High-dimensional Ridge Regression

TLDR
A flexible framework is developed that accommodates multiple types of response, unpenalized covariates, several performance criteria, and repeated CV, including a computationally very efficient formula for the multi-penalty, sample-weighted hat matrix as used in the IWLS algorithm.
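For orientation only (written in generic IWLS notation, not necessarily the paper’s exact expression), a multi-penalty, sample-weighted ridge hat matrix has the form
\[
H = X \bigl(X^\top W X + \Lambda\bigr)^{-1} X^\top W ,
\]
where W is the diagonal matrix of IWLS working weights and \Lambda is block-diagonal with one ridge penalty per covariate group; fast cross-validation schemes of this kind typically reuse H (or its diagonal) across penalty values rather than refitting from scratch.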

Fast cross-validation for multi-penalty ridge regression

TLDR
A very flexible framework is developed that includes prediction for several types of response, allows for unpenalized covariates, can optimize several performance criteria, and implements repeated CV.

Feature-space selection with banded ridge regression

TLDR
This paper proposes a method to decompose over feature spaces the variance explained by a banded ridge regression model, and describes how banded ridge regression performs a feature-space selection, effectively ignoring non-predictive and redundant feature spaces.

References

(showing 1–10 of 40 references)

Consistent High-Dimensional Bayesian Variable Selection via Penalized Credible Regions

TLDR
This work proposes a conjugate prior on the full model parameters only, performs selection via sparse solutions within posterior credible regions, and shows that these sparse solutions can be computed via existing algorithms.

An Information Matrix Prior for Bayesian Analysis in Generalized Linear Models with High Dimensional Data.

TLDR
A novel specification for a general class of prior distributions for high-dimensional generalized linear models, called Information Matrix (IM) priors, is developed, based on a broad generalization of Zellner's g-prior for Gaussian linear models.
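For context, Zellner's g-prior in the Gaussian linear model takes the form
\[
\beta \mid \sigma^2, g \sim \mathcal{N}\!\bigl(\beta_0,\; g\,\sigma^2 (X^\top X)^{-1}\bigr),
\]
which is not directly usable when X^\top X is singular (as when p > n); as the name suggests, the IM prior instead builds the prior covariance from a form of the model's information matrix, so that the construction extends to generalized linear models in high dimensions.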

A Sparse-Group Lasso

TLDR
A regularized model for linear regression with ℓ1 and ℓ2 penalties is introduced, and it is shown to have the desired effect of inducing both group-wise and within-group sparsity.
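In one common formulation (group weights and the exact penalty scaling vary across papers), the sparse-group lasso solves
\[
\min_{\beta}\; \tfrac{1}{2}\,\lVert y - X\beta \rVert_2^2
+ \lambda_1 \lVert \beta \rVert_1
+ \lambda_2 \sum_{g=1}^{G} \lVert \beta^{(g)} \rVert_2 ,
\]
where \beta^{(g)} denotes the coefficients in group g; the \ell_1 term induces sparsity within groups, while the unsquared group-wise \ell_2 terms can zero out entire groups.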

Penalized regression, standard errors, and Bayesian lassos

TLDR
The performance of the Bayesian lassos is compared to their frequentist counterparts using simulations, data sets that previous lasso papers have used, and a difficult modeling problem for predicting the collapse of governments around the world.

Inference with normal-gamma prior distributions in regression problems

This paper considers the effects of placing an absolutely continuous prior distribution on the regression coefficients of a linear model. We show that the posterior expectation is a matrix-shrunken…
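In one common parameterization (conventions differ between papers), the normal-gamma prior is written hierarchically as
\[
\beta_j \mid \psi_j \sim \mathcal{N}(0, \psi_j), \qquad
\psi_j \sim \operatorname{Gamma}\!\bigl(\text{shape } \lambda,\; \text{rate } 1/(2\gamma^2)\bigr),
\]
so that, marginally, the coefficients have heavier-than-normal tails and substantial mass near zero, giving stronger shrinkage of small effects than a plain ridge (normal) prior.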

Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties

TLDR
In this article, penalized likelihood approaches are proposed to handle variable selection problems, and it is shown that the newly proposed estimators perform as well as the oracle procedure in variable selection; namely, they perform as well as if the correct submodel were known.

Model uncertainty and variable selection in Bayesian lasso regression

TLDR
This paper describes how the marginal likelihood can be accurately computed when the number of predictors in the model is not too large, allowing for model space enumeration when the total number of possible predictors is modest.

GENERALIZED DOUBLE PARETO SHRINKAGE.

TLDR
As sparse estimation plays an important role in many problems, the properties of the maximum a posteriori estimator are investigated, connections with some well-established regularization procedures are revealed, and some asymptotic results are shown.

On Bayesian lasso variable selection and the specification of the shrinkage parameter

TLDR
A Bayesian implementation of lasso regression is proposed that accomplishes both shrinkage and variable selection through Bayes factors evaluating the inclusion of each covariate in the model formulation.

Sparsity and smoothness via the fused lasso

TLDR
The fused lasso, a generalization of the lasso designed for problems with features that can be ordered in some meaningful way, is proposed; it is especially useful when the number of features p is much greater than the sample size N.
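Concretely, for coefficients with a natural ordering the fused lasso penalty is
\[
\lambda_1 \sum_{j=1}^{p} \lvert \beta_j \rvert
+ \lambda_2 \sum_{j=2}^{p} \lvert \beta_j - \beta_{j-1} \rvert ,
\]
which encourages sparsity both in the coefficients themselves and in their successive differences, yielding piecewise-constant coefficient profiles.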