Multi-stage Convex Relaxation for Feature Selection

@article{Zhang2011MultistageCR,
  title={Multi-stage Convex Relaxation for Feature Selection},
  author={T. Zhang},
  journal={arXiv: Machine Learning},
  year={2011}
}

A number of recent works have studied the effectiveness of feature selection using the Lasso. It is known that under the restricted isometry property (RIP), the Lasso does not in general recover the exact set of nonzero coefficients, due to the looseness of the convex relaxation. This paper considers the feature selection properties of nonconvex regularization, where the solution is computed by a multi-stage convex relaxation scheme. Under appropriate conditions, we show that the local solution…
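
To make the abstract's multi-stage convex relaxation scheme concrete, the following is a minimal sketch, assuming the capped-ℓ1 regularizer lam * min(|w_j|, alpha) as the nonconvex penalty (a standard running example in this line of work): each stage solves a weighted Lasso by coordinate descent, and the next stage drops the penalty on coefficients whose magnitude already exceeds alpha. All function names, parameter values, and the demo data below are illustrative assumptions, not the paper's implementation.

import numpy as np

def soft_threshold(z, t):
    # Soft-thresholding operator: the prox of t * |.|
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def weighted_lasso(X, y, lam, n_sweeps=200):
    # Coordinate descent for: min_w (1/(2n)) ||y - X w||^2 + sum_j lam[j] |w_j|
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0) / n            # per-column curvature X_j^T X_j / n
    r = y.copy()                                 # residual y - X w
    for _ in range(n_sweeps):
        for j in range(d):
            if col_sq[j] == 0.0:
                continue
            rho = X[:, j] @ r / n + col_sq[j] * w[j]   # correlation with partial residual
            w_new = soft_threshold(rho, lam[j]) / col_sq[j]
            r += X[:, j] * (w[j] - w_new)        # keep residual in sync
            w[j] = w_new
    return w

def multistage_capped_l1(X, y, lam, alpha, n_stages=5):
    # Multi-stage convex relaxation for the capped-l1 penalty lam * min(|w_j|, alpha).
    # Stage 1 is the ordinary Lasso; each later stage re-solves a weighted Lasso in
    # which coefficients that have grown past alpha are no longer penalized, which
    # reduces the shrinkage bias of the one-shot convex relaxation.
    d = X.shape[1]
    lam_vec = np.full(d, float(lam))             # stage 1: plain Lasso
    w = weighted_lasso(X, y, lam_vec)
    for _ in range(n_stages - 1):
        lam_vec = lam * (np.abs(w) < alpha)      # penalize only the small coefficients
        w = weighted_lasso(X, y, lam_vec)
    return w

# Illustrative usage (problem sizes and parameter values are assumptions):
rng = np.random.default_rng(0)
n, d, k = 100, 200, 5
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:k] = 1.0
y = X @ w_true + 0.1 * rng.standard_normal(n)
w_hat = multistage_capped_l1(X, y, lam=0.1, alpha=0.5)
print("recovered support:", np.flatnonzero(np.abs(w_hat) > 1e-6))

For a smoothly concave penalty such as MCP, the stage update would instead set each weight to the penalty's derivative at the current |w_j|; the capped-ℓ1 rule above is the 0/1 special case of that update.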
    91 Citations

    • A Unifying Framework of High-Dimensional Sparse Estimation with Difference-of-Convex (DC) Regularizations (1 citation; highly influenced)
    • Efficient Sparse Group Feature Selection via Nonconvex Optimization (51 citations)
    • Efficient nonconvex sparse group feature selection via continuous and discrete optimization (19 citations)
    • A Theory of High-dimensional Sparse Estimation via Non-Convex Regularized Regression
    • Proximal gradient method with automatic selection of the parameter by automatic differentiation (1 citation)
    • Relaxed sparse eigenvalue conditions for sparse estimation via non-convex regularized regression (11 citations)
    • Calibrating Non-Convex Penalized Regression in Ultra-High Dimension (102 citations)
    • Efficient Methods for Overlapping Group Lasso. Lei Yuan, J. Liu, Jieping Ye; IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013 (169 citations)
    • Separating variables to accelerate non-convex regularized optimization

    References

    Showing 1–10 of 22 references
    • Analysis of Multi-stage Convex Relaxation for Sparse Regularization. Tong Zhang; Journal of Machine Learning Research, 2010 (383 citations; highly influential)
    • Some sharp performance bounds for least squares regression with L1 regularization (243 citations; highly influential)
    • The sparsity and bias of the Lasso selection in high-dimensional linear regression (704 citations)
    • Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell_1$-Constrained Quadratic Programming (Lasso). M. Wainwright; IEEE Transactions on Information Theory, 2009 (1,041 citations)
    • Adaptive Forward-Backward Greedy Algorithm for Learning Sparse Representations. Tong Zhang; IEEE Transactions on Information Theory, 2011 (216 citations)
    • Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties (5,983 citations)
    • Nearly unbiased variable selection under minimax concave penalty (2,078 citations; highly influential)
    • Regression Shrinkage and Selection via the Lasso (29,842 citations)