The reciprocal Bayesian LASSO.

@article{Mallick2021TheRB,
  title={The reciprocal Bayesian LASSO},
  author={Himel Mallick and Rahim Alhamzawi and Vladimir Svetnik},
  journal={Statistics in Medicine},
  year={2021}
}
The reciprocal LASSO (rLASSO) regularization employs a decreasing penalty function, as opposed to conventional penalization approaches that place increasing penalties on the coefficients, leading to stronger parsimony and superior model selection relative to traditional shrinkage methods. Here we consider a fully Bayesian formulation of the rLASSO problem, based on the observation that the rLASSO estimate for linear regression parameters can be interpreted as a Bayesian posterior mode…
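The decreasing-penalty idea in the abstract can be sketched numerically. The following is a minimal illustration, not the paper's implementation: it assumes the standard rLASSO penalty form λ/|β_j| for nonzero coefficients (with excluded, zero coefficients incurring no penalty), contrasted with the LASSO's increasing penalty λ|β_j|.

```python
import numpy as np

def lasso_penalty(beta, lam=1.0):
    """Conventional LASSO penalty: increases with |beta|."""
    return lam * np.abs(np.asarray(beta, dtype=float))

def rlasso_penalty(beta, lam=1.0):
    """Reciprocal LASSO penalty: decreases with |beta| for beta != 0.

    Small nonzero coefficients are penalized heavily, which favors
    sparse models whose retained coefficients are large; a coefficient
    set exactly to zero is excluded and incurs no penalty.
    """
    beta = np.asarray(beta, dtype=float)
    with np.errstate(divide="ignore"):  # lam / 0 handled below
        pen = lam / np.abs(beta)
    return np.where(beta == 0.0, 0.0, pen)
```

For example, `rlasso_penalty(0.1)` is much larger than `rlasso_penalty(2.0)`, the reverse of the LASSO ordering, which is the mechanism behind the stronger parsimony claimed above.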
2 Citations


The reciprocal Bayesian bridge for left-censored data
  • Rahim Alhamzawi
  • Communications in Statistics - Simulation and Computation
  • 2021
Bayesian reciprocal LASSO quantile regression
The reciprocal LASSO estimate for linear regression corresponds to a posterior mode when independent inverse Laplace priors are assigned to the regression coefficients. This paper studies reciprocal…

References

Showing 1–10 of 65 references
The Bayesian Bridge
The Bayesian bridge model outperforms its classical cousin in estimation and prediction across a variety of data sets, both simulated and real, and the Markov chain Monte Carlo algorithm for fitting the bridge model exhibits excellent mixing properties, particularly for the global scale parameter.
Nonlocal Priors for High-Dimensional Estimation
The constructive representation of NLPs as mixtures of truncated distributions enables simple posterior sampling and extends NLPs beyond previous proposals, showing that selection priors may actually be desirable for high-dimensional estimation.
Decoupling Shrinkage and Selection in Bayesian Linear Models: A Posterior Summary Perspective
Selecting a subset of variables for linear models remains an active area of research. This article reviews many of the recent contributions to the Bayesian model selection and shrinkage prior…
Bayesian adaptive Lasso
This work provides a model selection machinery for the BaLasso by assessing the posterior conditional mode estimates, motivated by the hierarchical Bayesian interpretation of the Lasso, and provides a unified framework for variable selection using flexible penalties.
Penalized regression, standard errors, and Bayesian lassos
Penalized regression methods for simultaneous variable selection and coefficient estimation, especially those based on the lasso of Tibshirani (1996), have received a great deal of attention in recent…
The horseshoe estimator for sparse signals
This paper proposes a new approach to sparsity, called the horseshoe estimator, which arises from a prior based on multivariate-normal scale mixtures. We describe the estimator's advantages over…
Moments of a Class of Internally Truncated Normal Distributions
Moment expressions are derived for the internally truncated normal distributions commonly applied to screening and constrained problems. They are obtained from using a recursive relation between the…
GWASinlps: non-local prior based iterative SNP selection tool for genome-wide association studies
A variable selection method, named iterative non-local prior based selection for GWAS, or GWASinlps, that combines the computational efficiency of the screen-and-select approach based on some association learning with the parsimonious uncertainty quantification provided by the use of non-local priors in an iterative variable selection framework.
An overview of reciprocal L1-regularization for high dimensional regression data
High dimensional data plays a key role in modern statistical analysis. A common objective for high dimensional data analysis is to perform model selection, and the penalized likelihood method is…
High-Dimensional Variable Selection With Reciprocal L1-Regularization
During the past decade, penalized likelihood methods have been widely used in variable selection problems, where the penalty functions are typically symmetric about 0, continuous, and nondecreasing in…