# High-dimensional regression with potential prior information on variable importance

@inproceedings{Stokell2021HighdimensionalRW, title={High-dimensional regression with potential prior information on variable importance}, author={Benjamin G. Stokell and Rajen D. Shah}, year={2021} }

There are a variety of settings where vague prior information may be available on the importance of predictors in high-dimensional regression. Examples include an ordering of the variables given by their empirical variances (information typically discarded through standardisation), the lags of predictors when fitting autoregressive models in time series settings, or the level of missingness of the variables. Whilst such orderings may not match the true importance of the variables, we argue that…

#### References

Showing 1–10 of 24 references.

**On Model Selection Consistency of Lasso** (J. Mach. Learn. Res., 2006)

It is proved that a single condition, called the Irrepresentable Condition, is almost necessary and sufficient for the Lasso to select the true model, both in the classical fixed-p setting and in the large-p setting as the sample size n grows.

**Regression Shrinkage and Selection via the Lasso** (1996)

We propose a new method for estimation in linear models. The 'lasso' minimizes the residual sum of squares subject to the sum of the absolute values of the coefficients being less than a…
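In the constrained form described in this abstract, the lasso estimator is usually written as follows (a sketch of the standard formulation; $t \ge 0$ is the tuning parameter bounding the coefficient budget):

```latex
\hat{\beta}^{\text{lasso}} = \arg\min_{\beta}
  \sum_{i=1}^{n} \Big( y_i - \beta_0 - \sum_{j=1}^{p} x_{ij}\beta_j \Big)^2
  \quad \text{subject to} \quad \sum_{j=1}^{p} |\beta_j| \le t
```

Small values of $t$ shrink coefficients and set some exactly to zero, which is what gives the lasso its variable-selection behaviour.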

**Model selection and estimation in regression with grouped variables** (2006)

We consider the problem of selecting grouped variables (factors) for accurate prediction in regression. Such a problem arises naturally in many practical situations with the multifactor…

**CoCoLasso for High-dimensional Error-in-variables Regression** (2015)

Much theoretical and applied work has been devoted to high-dimensional regression with clean data. However, we often face corrupted data in many applications where missing data and measurement errors…

**High-dimensional regression with noisy and missing data: Provable guarantees with non-convexity** (NIPS, 2011)

This work analyzes the statistical error associated with any global optimum, and proves that a simple algorithm based on projected gradient descent converges in polynomial time to a small neighbourhood of the set of all global minimizers.

**High-dimensional principal component analysis with heterogeneous missingness** (2019)

We study the problem of high-dimensional Principal Component Analysis (PCA) with missing observations. In simple, homogeneous missingness settings with a noise level of constant order, we show that…

**An Ordered Lasso and Sparse Time-Lagged Regression** (Technometrics, 2016)

An order-constrained version of ℓ1-regularized regression (lasso) is proposed, and it is shown how to solve it efficiently using the well-known pool adjacent violators algorithm as its proximal operator.
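The pool adjacent violators algorithm referenced above can be sketched in isolation. The following is a generic least-squares isotonic (nondecreasing) projection, not the authors' exact proximal operator; the ordered lasso applies this kind of projection to coefficient magnitudes ordered by lag:

```python
def pava(y):
    """Pool Adjacent Violators: least-squares projection of the sequence y
    onto the set of nondecreasing sequences."""
    means, counts = [], []  # running block means and block sizes
    for v in y:
        means.append(float(v))
        counts.append(1)
        # Merge adjacent blocks while the monotonicity constraint is violated.
        while len(means) > 1 and means[-2] > means[-1]:
            m2, c2 = means.pop(), counts.pop()
            m1, c1 = means.pop(), counts.pop()
            means.append((m1 * c1 + m2 * c2) / (c1 + c2))
            counts.append(c1 + c2)
    # Expand each block mean back to the length of its block.
    fitted = []
    for m, c in zip(means, counts):
        fitted.extend([m] * c)
    return fitted

# Violating entries are pooled into block averages:
# pava([3, 1, 2]) -> [2.0, 2.0, 2.0]
# pava([1, 3, 2]) -> [1.0, 2.5, 2.5]
```

Each merge replaces two adjacent blocks by their weighted mean, so the whole pass runs in linear amortized time, which is what makes it attractive as a proximal operator inside an iterative solver.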

**Square-Root Lasso: Pivotal Recovery of Sparse Signals via Conic Programming** (2010)

We propose a pivotal method for estimating high-dimensional sparse linear regression models, where the overall number of regressors p is large, possibly much larger than n, but only s regressors are…
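The "pivotal" property comes from the form of the objective, which is commonly stated as follows (a sketch; $\lambda$ is a tuning parameter whose choice does not require knowledge of the noise level, which is the point of the method):

```latex
\hat{\beta} = \arg\min_{\beta \in \mathbb{R}^p}
  \sqrt{\frac{1}{n}\sum_{i=1}^{n} \big( y_i - x_i^\top \beta \big)^2}
  \; + \; \frac{\lambda}{n} \, \|\beta\|_1
```

Replacing the squared loss of the ordinary lasso with its square root removes the noise standard deviation from the scaling of the optimal penalty.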

**Double-estimation-friendly inference for high-dimensional misspecified models** (2019)

All models may be wrong, but that is not necessarily a problem for inference. Consider the standard $t$-test for the significance of a variable $X$ for predicting response $Y$ whilst controlling for…

**Scaled sparse linear regression** (2011)

Scaled sparse linear regression jointly estimates the regression coefficients and the noise level in a linear model. It chooses an equilibrium with a sparse regression method by iteratively estimating…