A Direct Formulation for Sparse PCA Using Semidefinite Programming

@article{dAspremont2004ADF,
  title={A Direct Formulation for Sparse {PCA} Using Semidefinite Programming},
  author={Alexandre d'Aspremont and Laurent El Ghaoui and Michael I. Jordan and Gert R. G. Lanckriet},
  journal={Advances in Neural Information Processing Systems},
  year={2004}
}
We examine the problem of approximating, in the Frobenius-norm sense, a positive semidefinite symmetric matrix by a rank-one matrix, with an upper bound on the cardinality of its eigenvector. The problem arises in the decomposition of a covariance matrix into sparse factors, and has wide applications ranging from biology to finance. We use a modification of the classical variational representation of the largest eigenvalue of a symmetric matrix, where cardinality is constrained, and derive a…
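The paper's relaxation requires an SDP solver, but the underlying cardinality-constrained variational problem can be sketched with a simple nonconvex companion heuristic — power iteration with hard thresholding (the "truncated power method"; the function name and the toy covariance below are illustrative, not from the paper):

```python
import numpy as np

def truncated_power_method(A, k, iters=100):
    """Heuristic for  max x^T A x  s.t. ||x||_2 = 1, card(x) <= k.

    Power iteration with hard thresholding to the k largest-magnitude
    coordinates -- a cheap stand-in for the SDP relaxation above.
    """
    # initialize with the leading (dense) eigenvector of A
    _, V = np.linalg.eigh(A)
    x = V[:, -1]
    for _ in range(iters):
        y = A @ x
        y[np.argsort(np.abs(y))[:-k]] = 0.0   # zero all but the top-k coordinates
        x = y / np.linalg.norm(y)
    return x

# toy covariance with one dominant 2-sparse direction
n, k = 8, 2
v = np.zeros(n); v[:k] = 1 / np.sqrt(k)
A = 5.0 * np.outer(v, v) + np.eye(n)
x = truncated_power_method(A, k)
print(np.count_nonzero(x), x @ A @ x)  # 2 nonzeros; objective value 6.0
```

On this toy instance the heuristic recovers the planted sparse direction exactly; in general it only gives a lower bound on the constrained maximum, whereas the SDP relaxation gives an upper bound.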

On the Quality of a Semidefinite Programming Bound for Sparse Principal Component Analysis

We examine the problem of approximating a positive semidefinite matrix A by a dyad xx^T, with a penalty on the cardinality of the vector x. This problem arises in sparse principal component…

A Hybrid Algorithm for Convex Semidefinite Optimization

This work presents a hybrid algorithm for optimizing a convex, smooth function over the cone of positive semidefinite matrices and shows experimental results on three machine learning problems (matrix completion, metric learning, and sparse PCA).

Convex Sparse Matrix Factorizations

This work presents a convex formulation of dictionary learning for sparse signal decomposition that introduces an explicit trade-off between size and sparsity of the decomposition of rectangular matrices and compares the estimation abilities of the convex and nonconvex approaches.

Low-rank optimization for semidefinite convex problems

An algorithm for solving nonlinear convex programs defined in terms of a symmetric positive semidefinite matrix variable X, which rests on the factorization X = YY^T, where the number of columns of Y fixes the rank of X.
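The factorization idea can be illustrated on the simplest possible instance — fitting a PSD matrix in Frobenius norm by plain gradient descent on the factor Y (a minimal sketch of the Burer-Monteiro approach, not the cited paper's algorithm; the function name and step sizes are my own):

```python
import numpy as np

def burer_monteiro_nearest_psd(M, p, lr=0.02, iters=3000, seed=0):
    """Minimize ||Y Y^T - M||_F^2 over Y in R^{n x p}.

    Writing X = Y Y^T keeps the iterate positive semidefinite of
    rank <= p for free, so no projection onto the PSD cone is needed;
    the price is a nonconvex objective in Y.
    """
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    Y = 0.1 * rng.standard_normal((n, p))
    for _ in range(iters):
        G = 4.0 * (Y @ Y.T - M) @ Y   # gradient of the factored objective
        Y -= lr * G
    return Y

b = np.array([1.0, 0.5, -0.5, 0.2])
M = np.outer(b, b)                     # rank-one PSD target
Y = burer_monteiro_nearest_psd(M, p=1)
print(np.linalg.norm(Y @ Y.T - M))     # close to 0: the factored iterate recovers M
```

With p at least the rank of the target, gradient descent from a small random start typically recovers the global optimum on instances like this, which is the empirical observation motivating low-rank factored methods.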

Tight convex relaxations for sparse matrix factorization

This work proposes an active set algorithm leveraging the structure of the convex problem to solve the sparse matrix factorization problems and shows promising numerical results.

(k, q)-trace norm for sparse matrix factorization

A new convex formulation for the sparse matrix factorization problem, where the number of nonzero elements of the factors is fixed, is proposed and the Gaussian complexity for the suggested norms and their vector analogues is estimated.

Sparse PCA on fixed-rank matrices

It is shown that, if the rank of the covariance matrix is a fixed value, then there is an algorithm that solves sparse PCA to global optimality, whose running time is polynomial in the number of features.
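For contrast with the fixed-rank polynomial-time result, here is the naive globally optimal method it improves on — enumerating every size-k support, which is exact but exponential in general (the function name and toy matrix are illustrative):

```python
import numpy as np
from itertools import combinations

def sparse_pca_exact(A, k):
    """Globally optimal sparse PCA by brute-force support enumeration.

    Solves  max x^T A x  s.t. ||x||_2 = 1, card(x) <= k  by taking the
    top eigenpair of every k x k principal submatrix of A.
    """
    n = A.shape[0]
    best_val, best_x = -np.inf, None
    for S in combinations(range(n), k):
        idx = np.array(S)
        w, V = np.linalg.eigh(A[np.ix_(idx, idx)])
        if w[-1] > best_val:
            best_val = w[-1]
            best_x = np.zeros(n)
            best_x[idx] = V[:, -1]
    return best_val, best_x

A = np.diag([1.0, 2.0, 3.0, 4.0])
val, x = sparse_pca_exact(A, 2)
print(round(val, 6))  # 4.0: for a diagonal A the best 2-sparse direction is e_4
```

The fixed-rank algorithm in the cited paper avoids this combinatorial blow-up when the covariance matrix has constant rank.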

A Convergent Gradient Descent Algorithm for Rank Minimization and Semidefinite Programming from Random Linear Measurements

We propose a simple, scalable, and fast gradient descent algorithm to optimize a nonconvex objective for the rank minimization problem and a closely related family of semidefinite programs. With…

Binary Component Decomposition Part I: The Positive-Semidefinite Case

This research answers fundamental questions about the existence and uniqueness of low-rank positive-semidefinite decompositions and leads to tractable factorization algorithms that succeed under a mild deterministic condition.
...

References

Showing 1-10 of 36 references

A rank minimization heuristic with application to minimum order system approximation

It is shown that the heuristic to replace the (nonconvex) rank objective with the sum of the singular values of the matrix, which is the dual of the spectral norm, can be reduced to a semidefinite program, hence efficiently solved.
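When the nuclear-norm (sum-of-singular-values) surrogate is solved by proximal methods rather than as an explicit SDP, the workhorse step is singular value thresholding — a short sketch (not from the cited paper, which predates these solvers):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of
    tau * (nuclear norm). Soft-thresholding the singular values
    shrinks small ones to zero, which is how the convex surrogate
    promotes low rank.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return U @ np.diag(s) @ Vt

M = np.diag([3.0, 1.0, 0.2])
X = svt(M, 0.5)
print(np.linalg.matrix_rank(X))  # 2: the smallest singular value is zeroed out
```

This mirrors how soft-thresholding of vector entries is the proximal operator of the l1 norm, the vector analogue of the same heuristic.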

A Spectral Bundle Method for Semidefinite Programming

This work proposes replacing the traditional polyhedral cutting plane model constructed from subgradient information by a semidefinite model that is tailored to eigenvalue problems, and presents numerical examples demonstrating the efficiency of the approach on combinatorial examples.

Interior Point Methods in Semidefinite Programming with Applications to Combinatorial Optimization

It is argued that many known interior point methods for linear programs can be transformed in a mechanical way to algorithms for SDP with proofs of convergence and polynomial time complexity carrying over in a similar fashion.

Low-Rank Approximations with Sparse Factors I: Basic Algorithms and Error Analysis

This work considers the problem of computing low-rank approximations of matrices in a factorized form with sparse factors and presents numerical examples arising from some application areas to illustrate the efficiency and accuracy of the proposed algorithms.
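A crude stand-in for the idea of low-rank approximation with sparse factors — not the paper's algorithms — is to take the leading singular pair and hard-threshold each factor (all names below are illustrative):

```python
import numpy as np

def sparse_rank_one(M, k):
    """Rank-one approximation with k-sparse left and right factors:
    leading singular pair, then keep only the k largest-magnitude
    entries of each singular vector.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    u, v = U[:, 0].copy(), Vt[0].copy()
    for w in (u, v):
        w[np.argsort(np.abs(w))[:-k]] = 0.0   # zero all but top-k entries
    return s[0] * np.outer(u, v)

rng = np.random.default_rng(0)
M = np.outer([1.0, 1.0, 0.0, 0.0], [0.0, 1.0, 1.0, 0.0]) \
    + 0.01 * rng.standard_normal((4, 4))
X = sparse_rank_one(M, 2)
print(np.count_nonzero(X))  # at most 4 nonzero entries (2 per factor)
```

Thresholding after the fact loses the optimality guarantees that the cited factorized algorithms aim for, but it shows what "sparse factors" buys: the approximation itself becomes sparse.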

Semidefinite Relaxations and Lagrangian Duality with Application to Combinatorial Optimization

We show that it is fruitful to dualize the integrality constraints in a combinatorial optimization problem. First, this reproduces the known SDP relaxations of the max-cut and max-stable problems.

A Matlab toolbox for optimization over symmetric cones

This paper describes how to work with SeDuMi, an add-on for MATLAB, which lets you solve optimization problems with linear, quadratic and semidefiniteness constraints by exploiting sparsity.

Non-euclidean restricted memory level method for large-scale convex optimization

A new subgradient-type method for minimizing extremely large-scale nonsmooth convex functions over “simple” domains, allowing for flexible handling of accumulated information and tradeoff between the level of utilizing this information and iteration’s complexity.

Practical Aspects of the Moreau-Yosida Regularization: Theoretical Preliminaries

The most important part of this study concerns second-order differentiability: existence of a second-order development of f implies that its regularization has a Hessian.

Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems

We propose a prox-type method with efficiency estimate $O(\epsilon^{-1})$ for approximating saddle points of convex-concave C$^{1,1}$ functions and solutions of variational inequalities with monotone…

Smooth minimization of non-smooth functions

A new approach for constructing efficient schemes for non-smooth convex optimization is proposed, based on a special smoothing technique, which can be applied to functions with explicit max-structure, and can be considered as an alternative to black-box minimization.
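The classic instance of smoothing a function with explicit max-structure is replacing max(x) by the log-sum-exp function; a minimal sketch (the uniform-error bound mu*log(n) is the standard one for this particular smoothing):

```python
import numpy as np

def smooth_max(x, mu):
    """Log-sum-exp smoothing of max(x):

        max(x) <= smooth_max(x, mu) <= max(x) + mu * log(n),

    and the gradient (a softmax) is Lipschitz with constant 1/mu,
    so fast gradient schemes apply to the smoothed function.
    """
    z = x / mu
    z = z - z.max()                        # shift for numerical stability
    return mu * np.log(np.sum(np.exp(z))) + x.max()

x = np.array([1.0, 3.0, -2.0])
for mu in (1.0, 0.1, 0.01):
    print(smooth_max(x, mu))  # decreases toward max(x) = 3 as mu -> 0
```

Shrinking mu trades approximation error against the growing Lipschitz constant 1/mu, which is exactly the trade-off the smoothing technique optimizes.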