Corpus ID: 88521228

Vanilla Lasso for sparse classification under single index models

@article{Liu2015VanillaLF,
  title={Vanilla Lasso for sparse classification under single index models},
  author={Jiyi Liu and Jinzhu Jia},
  journal={arXiv: Statistics Theory},
  year={2015}
}
This paper studies sparse classification problems. We show that under single-index models, the vanilla Lasso can give a good estimate of the unknown parameters. With this result, we see that even if the model is not linear, and even if the response is not continuous, we can still use the vanilla Lasso to train classifiers. Simulations confirm that the vanilla Lasso gives a good estimate when the data are generated from a logistic regression model.
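To make the claim concrete, the following is a minimal sketch (not the authors' code) of the kind of simulation the abstract describes: binary responses are drawn from a sparse logistic regression model, which is a single-index model, and a plain linear Lasso is fit directly to the 0/1 responses. Since the link function is unknown in general, the coefficients are identifiable only up to scale, so the estimate is compared with the true coefficient direction. All specific values below (n, p, the sparsity level, the signal scaling, and the penalty alpha) are illustrative assumptions, not values from the paper.

# Minimal sketch: vanilla (linear) Lasso on binary responses generated
# from a sparse logistic regression model. Hyperparameters are assumed
# for illustration only.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s = 500, 100, 5                        # samples, dimension, sparsity

beta = np.zeros(p)                           # sparse index direction
beta[:s] = 1.0 / np.sqrt(s)

X = rng.standard_normal((n, p))              # Gaussian design
prob = 1.0 / (1.0 + np.exp(-3.0 * (X @ beta)))  # logistic link on the index
y = rng.binomial(1, prob).astype(float)      # binary responses

lasso = Lasso(alpha=0.05)                    # vanilla linear Lasso, not logistic
lasso.fit(X, y)
beta_hat = lasso.coef_

# The Lasso can only identify beta up to an unknown scale, so compare directions.
cos = beta_hat @ beta / (np.linalg.norm(beta_hat) * np.linalg.norm(beta) + 1e-12)
print("cosine similarity with true direction:", round(cos, 3))
print("true support recovered:", set(range(s)) <= set(np.flatnonzero(beta_hat)))

When the Lasso succeeds in the sense of the paper, the printed cosine similarity is close to 1 and the estimated support contains the true one, even though the fitted model ignores the logistic link entirely.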


References

Showing 1-10 of 14 references
Semiparametric Least Squares (SLS) and Weighted SLS Estimation of Single-Index Models
Regression Shrinkage and Selection via the Lasso
Proposes a new method for estimation in linear models, called the lasso, which minimizes the residual sum of squares subject to the sum of the absolute values of the coefficients being less than a constant.
The Generalized Lasso With Non-Linear Observations
Gives the first theoretical accuracy guarantee for 1-bit compressed sensing with an unknown covariance matrix of the measurement vectors, and considers the single-index model of non-linearity, allowing the non-linearity to be discontinuous, not one-to-one, and even unknown.
An Efficient Semiparametric Estimator for Binary Response Models
Proposes an estimator for discrete choice models that makes no assumption concerning the functional form of the choice probability function.
Optimal Smoothing in Single-index Models
Single-index models generalize linear regression and have applications in a variety of fields, such as discrete choice analysis in econometrics and dose-response models in biometrics.
A direct approach to sparse discriminant analysis in ultra-high dimensions
Shows that the proposed method can consistently identify the subset of discriminative features contributing to the Bayes rule and, at the same time, consistently estimate the Bayes classification direction, even when the dimension grows faster than any polynomial order of the sample size.
An Interior-Point Method for Large-Scale l1-Regularized Logistic Regression
Describes an efficient interior-point method for solving large-scale l1-regularized logistic regression problems, and shows how a good approximation of the entire regularization path can be computed much more efficiently than by solving a family of problems independently.
Robust 1-bit Compressed Sensing and Sparse Logistic Regression: A Convex Programming Approach
Shows that an s-sparse signal in R^n can be accurately estimated from m = O(s log(n/s)) single-bit measurements using a simple convex program, and that the same convex program works for virtually all generalized linear models, in which the link function may be unknown.
High-dimensional Ising model selection using ℓ1-regularized logistic regression
Proves that consistent neighborhood selection can be obtained for sample sizes $n=\Omega(d^3\log p)$ with exponentially decaying error, and shows that when the same conditions are imposed directly on the sample matrices, a reduced sample size suffices for the method to estimate neighborhoods consistently.
Semiparametric Estimation of Index Coefficients
Gives a solution to the problem of estimating coefficients of index models through estimation of the density-weighted average derivative of a general regression function.