# A Direct Formulation for Sparse PCA Using Semidefinite Programming

@article{dAspremont2004ADF,
title={A Direct Formulation for Sparse PCA Using Semidefinite Programming},
author={Alexandre d’Aspremont and Laurent El Ghaoui and Michael I. Jordan and Gert R. G. Lanckriet},
journal={Microeconomic Theory eJournal},
year={2004}
}
• Published 1 June 2004
• Computer Science, Mathematics
We examine the problem of approximating, in the Frobenius-norm sense, a positive semidefinite symmetric matrix by a rank-one matrix, with an upper bound on the cardinality of its eigenvector. The problem arises in the decomposition of a covariance matrix into sparse factors, and has wide applications ranging from biology to finance. We use a modification of the classical variational representation of the largest eigenvalue of a symmetric matrix, where cardinality is constrained, and derive a…
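The cardinality-constrained variational problem referenced in the abstract is max xᵀΣx subject to ‖x‖₂ = 1 and card(x) ≤ k, which is combinatorially hard; the paper's contribution is a semidefinite relaxation of it. As a point of contrast, here is a minimal numpy sketch of a simple truncated power heuristic for the same constrained problem. This is an illustration only, not the paper's SDP method; the function name and all parameters are ours.

```python
import numpy as np

def truncated_power_iteration(sigma, k, iters=100):
    """Heuristic for  max x^T sigma x  s.t. ||x||_2 = 1, card(x) <= k.

    A truncated power method: after each power step, keep only the k
    largest-magnitude entries. Illustrative sketch, not the semidefinite
    relaxation derived in the paper."""
    n = sigma.shape[0]
    x = np.ones(n) / np.sqrt(n)              # deterministic dense start
    for _ in range(iters):
        y = sigma @ x                        # power step
        small = np.argsort(np.abs(y))[:-k]   # indices of the n-k smallest
        y[small] = 0.0                       # enforce card(x) <= k
        x = y / np.linalg.norm(y)
    return x

# On a covariance with a planted sparse leading component, the heuristic
# recovers the support:
v = np.array([1.0, 1.0, 0.0, 0.0]) / np.sqrt(2)
sigma = 5 * np.outer(v, v) + np.eye(4)
x = truncated_power_iteration(sigma, k=2)
```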
## 1,000 Citations
We examine the problem of approximating a positive semidefinite matrix Σ by a dyad xxᵀ, with a penalty on the cardinality of the vector x. This problem arises in sparse principal component…
This work presents a hybrid algorithm for optimizing a convex, smooth function over the cone of positive semidefinite matrices and shows experimental results on three machine learning problems (matrix completion, metric learning, and sparse PCA).
• Computer Science
ArXiv
• 2008
This work presents a convex formulation of dictionary learning for sparse signal decomposition that introduces an explicit trade-off between size and sparsity of the decomposition of rectangular matrices and compares the estimation abilities of the convex and nonconvex approaches.
• Mathematics, Computer Science
• 2008
An algorithm for solving nonlinear convex programs defined in terms of a symmetric positive semidefinite matrix variable X, which rests on the factorization X = YYᵀ, where the number of columns of Y fixes the rank of X.
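The factorization X = YYᵀ described in the snippet above (the Burer–Monteiro approach) makes X positive semidefinite with rank at most r by construction, so one can run plain gradient descent on Y. A minimal numpy sketch on a toy least-squares objective; the function name, step size, and objective are illustrative, not the cited algorithm.

```python
import numpy as np

def burer_monteiro_psd_approx(M, r, lr=0.01, iters=500, seed=0):
    """Sketch of the Burer-Monteiro idea: parameterize X = Y Y^T with Y
    having r columns, so X is PSD of rank <= r automatically.

    Toy objective here: min_Y ||Y Y^T - M||_F^2 (the cited work treats
    general nonlinear convex objectives over the PSD cone)."""
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    Y = 0.1 * rng.standard_normal((n, r))    # small random start
    for _ in range(iters):
        G = 4 * (Y @ Y.T - M) @ Y            # gradient of ||YY^T - M||_F^2
        Y -= lr * G
    return Y @ Y.T

# Recovering a rank-one PSD matrix from the factored parameterization:
v = np.array([1.0, 2.0, 0.0])
M = np.outer(v, v)
X = burer_monteiro_psd_approx(M, r=1)
```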
• Computer Science
NIPS
• 2014
This work proposes an active set algorithm leveraging the structure of the convex problem to solve the sparse matrix factorization problems and shows promising numerical results.
A new convex formulation for the sparse matrix factorization problem, where the number of nonzero elements of the factors is fixed, is proposed and the Gaussian complexity for the suggested norms and their vector analogues is estimated.
It is shown that, if the rank of the covariance matrix is a fixed value, then there is an algorithm that solves sparse PCA to global optimality, whose running time is polynomial in the number of features.
• Computer Science
NIPS
• 2015
We propose a simple, scalable, and fast gradient descent algorithm to optimize a nonconvex objective for the rank minimization problem and a closely related family of semidefinite programs. With…
• Computer Science
SIAM J. Math. Data Sci.
• 2021
This research answers fundamental questions about the existence and uniqueness of low-rank positive-semidefinite decompositions and leads to tractable factorization algorithms that succeed under a mild deterministic condition.

## References

Showing 1–10 of 36 references

• Computer Science, Mathematics
Proceedings of the 2001 American Control Conference. (Cat. No.01CH37148)
• 2001
It is shown that the heuristic of replacing the (nonconvex) rank objective with the sum of the singular values of the matrix, which is the dual of the spectral norm, can be reduced to a semidefinite program and hence solved efficiently.
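The duality mentioned in the snippet above, that the sum of singular values (the nuclear norm) is the dual of the spectral norm, can be checked numerically: the maximum of ⟨A, B⟩ over ‖B‖₂ ≤ 1 is attained at B = UVᵀ from the SVD A = U diag(s) Vᵀ. A small numpy check (the matrix and variable names are ours):

```python
import numpy as np

# ||A||_* = max { <A, B> : ||B||_2 <= 1 }, attained at B = U V^T.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))
U, s, Vt = np.linalg.svd(A, full_matrices=False)
nuclear = s.sum()          # nuclear norm: sum of singular values
B = U @ Vt                 # all singular values of B equal 1
inner = np.sum(A * B)      # <A, B> = trace(A^T B)
```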
• Computer Science, Mathematics
SIAM J. Optim.
• 2000
This work proposes replacing the traditional polyhedral cutting plane model constructed from subgradient information by a semidefinite model that is tailored to eigenvalue problems, and presents numerical examples demonstrating the efficiency of the approach on combinatorial examples.
It is argued that many known interior point methods for linear programs can be transformed in a mechanical way to algorithms for SDP with proofs of convergence and polynomial time complexity carrying over in a similar fashion.
• Computer Science
SIAM J. Matrix Anal. Appl.
• 2002
This work considers the problem of computing low-rank approximations of matrices in a factorized form with sparse factors and presents numerical examples arising from some application areas to illustrate the efficiency and accuracy of the proposed algorithms.
• Mathematics
• 1999
We show that it is fruitful to dualize the integrality constraints in a combinatorial optimization problem. First, this reproduces the known SDP relaxations of the max-cut and max-stable problems.
• Computer Science
• 1999
This paper describes how to work with SeDuMi, an add-on for MATLAB, which lets you solve optimization problems with linear, quadratic and semidefiniteness constraints by exploiting sparsity.
• Mathematics, Computer Science
Math. Program.
• 2005
A new subgradient-type method for minimizing extremely large-scale nonsmooth convex functions over “simple” domains, allowing for flexible handling of accumulated information and tradeoff between the level of utilizing this information and iteration’s complexity.
• Mathematics
SIAM J. Optim.
• 1997
The most important part of this study concerns second-order differentiability: existence of a second-order development of f implies that its regularization has a Hessian.
We propose a prox-type method with efficiency estimate $O(\epsilon^{-1})$ for approximating saddle points of convex-concave C$^{1,1}$ functions and solutions of variational inequalities with monotone…
A new approach for constructing efficient schemes for non-smooth convex optimization is proposed, based on a special smoothing technique, which can be applied to functions with explicit max-structure, and can be considered as an alternative to black-box minimization.
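A canonical instance of smoothing a function with explicit max-structure is the log-sum-exp approximation of f(x) = maxᵢ xᵢ, which satisfies f ≤ f_μ ≤ f + μ log n, so the smoothing parameter μ trades approximation error against smoothness. A small numpy sketch of this standard construction (illustrative only, not the full scheme from the cited work):

```python
import numpy as np

def smooth_max(x, mu):
    """Log-sum-exp smoothing of f(x) = max_i x_i:
        f_mu(x) = mu * log(sum_i exp(x_i / mu)),
    a smooth function satisfying  f <= f_mu <= f + mu*log(n)."""
    m = np.max(x)  # subtract the max before exponentiating, for stability
    return mu * np.log(np.sum(np.exp((x - m) / mu))) + m

# Smaller mu gives a tighter (but less smooth) approximation of the max:
x = np.array([1.0, 3.0, 2.0])
approx = smooth_max(x, mu=0.01)
```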