Corpus ID: 88514960

Dual Lasso Selector

  • Niharika Gauraha
  • arXiv: Applications
We consider the problem of model selection and estimation in sparse high-dimensional linear regression models with strongly correlated variables. First, we study the theoretical properties of the dual Lasso solution, and we show that joint consideration of the Lasso primal and dual solutions is useful for selecting correlated active variables. Second, we argue that correlations among active predictors are not problematic, and we derive a new, weaker condition on the design matrix, called…
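The primal-dual link described in the abstract can be illustrated numerically: the Lasso dual solution is (up to scaling) the fitted residual, and the KKT conditions force every active predictor's correlation with that residual to sit exactly at the regularization level. A minimal sketch with scikit-learn (the correlated design, `alpha`, and tolerances are illustrative choices, not taken from the paper):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 10
X = rng.standard_normal((n, p))
X[:, 1] = X[:, 0] + 0.01 * rng.standard_normal(n)  # strongly correlated pair
beta_true = np.zeros(p)
beta_true[0] = 2.0
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# alpha is an illustrative regularization level
alpha = 0.1
model = Lasso(alpha=alpha, fit_intercept=False, tol=1e-12,
              max_iter=100_000).fit(X, y)

# The residual plays the role of the dual solution.  For sklearn's
# objective (1 / (2n)) * ||y - Xb||^2 + alpha * ||b||_1, the KKT
# conditions give |x_j' r| / n == alpha on the active set and
# |x_j' r| / n <= alpha everywhere else.
r = y - X @ model.coef_
corr = np.abs(X.T @ r) / n
active = np.flatnonzero(model.coef_)
print(active, corr.round(4))
```

For the two near-duplicate columns the residual correlations are nearly equal even when only one of them appears in the primal active set, which is the kind of information a joint primal-dual view can exploit for recovering correlated active variables.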
2 Citations
Sparse Regression and Adaptive Feature Generation for the Discovery of Dynamical Systems
A novel algorithm is proposed that learns the candidate function library in a completely data-driven manner to distill the governing equations of the dynamical system through sequentially thresholded ridge regression over an orthogonal polynomial space.
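The sequentially thresholded ridge regression this summary refers to is easy to sketch: alternate a ridge solve with hard-thresholding of small coefficients until the support stabilizes. A hedged sketch (the function name, threshold, and ridge penalty below are illustrative choices):

```python
import numpy as np

def stridge(Theta, y, lam=1e-3, tol=0.1, max_iter=10):
    """Sequentially thresholded ridge regression: repeatedly solve a
    ridge problem on the active columns and zero out coefficients
    whose magnitude falls below `tol`."""
    n, p = Theta.shape
    w = np.linalg.solve(Theta.T @ Theta + lam * np.eye(p), Theta.T @ y)
    for _ in range(max_iter):
        small = np.abs(w) < tol
        w[small] = 0.0
        big = ~small
        if not big.any():
            break
        sub = Theta[:, big]
        # Refit by ridge on the surviving columns only.
        w[big] = np.linalg.solve(sub.T @ sub + lam * np.eye(big.sum()),
                                 sub.T @ y)
    return w
```

With a polynomial feature library as `Theta`, the iteration typically retains only the few terms whose coefficients survive thresholding, yielding a sparse model of the dynamics.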
Dynamic Data Driven Applications Systems: Third International Conference, DDDAS 2020, Boston, MA, USA, October 2-4, 2020, Proceedings
This keynote talk presents an approach to create, update, and deploy data-driven physics-based digital twins and demonstrates the approach through the development of a structural digital twin for a custom-built unmanned aerial vehicle.

References
Penalized regression combining the L1 norm and a correlation based penalty.
We consider the problem of feature selection in a linear regression model with p covariates and n observations. We propose a new method to simultaneously select variables and favor a grouping effect, …
Lasso screening rules via dual polytope projection
An efficient and effective screening rule via Dual Polytope Projections (DPP), which is mainly based on the uniqueness and nonexpansiveness of the optimal dual solution due to the fact that the feasible set in the dual space is a convex and closed polytope.
On Model Selection Consistency of Lasso
  • P. Zhao, Bin Yu
  • Mathematics, Computer Science
  • J. Mach. Learn. Res.
  • 2006
It is proved that a single condition, which is called the Irrepresentable Condition, is almost necessary and sufficient for Lasso to select the true model both in the classical fixed p setting and in the large p setting as the sample size n gets large.
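The condition named in this summary can be checked numerically for a concrete design. Writing C = X'X/n and S for the true support, the (strong) Irrepresentable Condition requires ||C_{Sc,S} C_{S,S}^{-1} sign(beta_S)||_inf < 1. A small sketch (the function name and example designs are illustrative):

```python
import numpy as np

def irrepresentable_gap(X, support, sign_S):
    """Return the l-infinity norm of C_{Sc,S} C_{S,S}^{-1} sign(beta_S),
    where C = X'X/n; the Irrepresentable Condition asks for a value < 1."""
    n, p = X.shape
    C = X.T @ X / n
    S = np.asarray(support)
    Sc = np.setdiff1d(np.arange(p), S)
    v = C[np.ix_(Sc, S)] @ np.linalg.solve(C[np.ix_(S, S)], sign_S)
    return np.abs(v).max()
```

An i.i.d. Gaussian design satisfies the condition comfortably, while adding an inactive variable that is a near-linear combination of the active ones pushes the quantity above 1, which is exactly the failure mode that motivates weaker conditions for correlated designs.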
Least Squares After Model Selection in High-Dimensional Sparse Models
In this paper we study post-model-selection estimators which apply ordinary least squares (OLS) to the model selected by first-step penalized estimators, typically the lasso. It is well known that lasso…
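The two-step estimator described here is straightforward: run the Lasso purely for selection, then refit by ordinary least squares on the selected columns to remove the shrinkage bias on the retained coefficients. A minimal sketch with scikit-learn (the data-generating setup and `alpha` are illustrative):

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [3.0, -2.0, 1.5]
y = X @ beta + 0.5 * rng.standard_normal(n)

# Step 1: Lasso used only for variable selection.
sel = Lasso(alpha=0.1, fit_intercept=False).fit(X, y)
S = np.flatnonzero(sel.coef_)

# Step 2: OLS refit on the selected columns; the Lasso's shrinkage
# bias on the retained coefficients is removed by the refit.
ols = LinearRegression(fit_intercept=False).fit(X[:, S], y)
beta_post = np.zeros(p)
beta_post[S] = ols.coef_
```

The refitted coefficients on the true support are close to the underlying values, whereas the first-step Lasso coefficients are shrunk toward zero by roughly the regularization level.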
Sparse regression with exact clustering
This dissertation deals with three closely related topics of the lasso, in addition to supplying a comprehensive overview of the rapidly growing literature in this field. The first part aims at…
Asymptotic Properties of Lasso+mLS and Lasso+Ridge in Sparse High-dimensional Linear Regression
We study the asymptotic properties of Lasso+mLS and Lasso+Ridge under the sparse high-dimensional linear regression model: Lasso selecting predictors and then modified Least Squares (mLS) or Ridge…
Stability Feature Selection using Cluster Representative LASSO
This work proposes to cluster the variables first and then perform stability feature selection using the Lasso on cluster representatives, and finds an optimal and consistent solution for group variable selection in the high-dimensional regression setting.
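The cluster-then-select strategy summarized here can be sketched directly: group the variables by correlation, keep one representative per cluster, and run the Lasso on the representatives. A hedged sketch (the linkage method, representative choice, and parameters are illustrative, not the paper's exact procedure; the stability-selection layer is omitted):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.linear_model import Lasso

def cluster_representative_lasso(X, y, n_clusters, alpha):
    """Cluster the columns of X by absolute correlation, keep one
    representative per cluster, and run the Lasso on those columns."""
    corr = np.corrcoef(X, rowvar=False)
    dist = 1.0 - np.abs(corr)  # correlation distance between variables
    Z = linkage(squareform(dist, checks=False), method="average")
    labels = fcluster(Z, t=n_clusters, criterion="maxclust")
    # Representative = first member of each cluster (illustrative choice).
    reps = np.array([np.flatnonzero(labels == c)[0]
                     for c in np.unique(labels)])
    coef = Lasso(alpha=alpha, fit_intercept=False).fit(X[:, reps], y).coef_
    return reps, coef
```

Because near-duplicate variables collapse into one cluster, the Lasso in the second step sees a design with far weaker correlations, which is the point of the representative construction.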
Correlated variables in regression: Clustering and sparse estimation
We consider estimation in a high-dimensional linear model with strongly correlated variables. We propose to cluster the variables first and do subsequent sparse estimation such as the Lasso for…
The Lasso, correlated design, and improved oracle inequalities
We study high-dimensional linear models and the $\ell_1$-penalized least squares estimator, also known as the Lasso estimator. In the literature, oracle inequalities have been derived under restricted…
Simultaneous regression shrinkage, variable selection, and supervised clustering of predictors with OSCAR.
A new method called the OSCAR (octagonal shrinkage and clustering algorithm for regression) is proposed to simultaneously select variables while grouping them into predictive clusters, in addition to improving prediction accuracy and interpretation.
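For reference, the OSCAR estimator combines an $\ell_1$ penalty with a pairwise $\ell_\infty$ penalty; in constrained form (standard notation, stated here from the literature rather than from this page):

```latex
\hat{\beta} = \arg\min_{\beta} \|y - X\beta\|_2^2
\quad \text{subject to} \quad
\sum_{j} |\beta_j| + c \sum_{j < k} \max\{|\beta_j|, |\beta_k|\} \le t,
```

where $c \ge 0$ controls the pairwise max term that ties the magnitudes of correlated predictors together, producing the octagonal constraint region the acronym refers to.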