• Corpus ID: 22877890

The Mixing method: coordinate descent for low-rank semidefinite programming

@article{Wang2017TheMM,
  title={The Mixing method: coordinate descent for low-rank semidefinite programming},
  author={Po-Wei Wang and Wei-Cheng Chang and J. Zico Kolter},
  journal={ArXiv},
  year={2017},
  volume={abs/1706.00476}
}
In this paper, we propose a coordinate descent approach to low-rank structured semidefinite programming. The approach, which we call the Mixing method, is extremely simple to implement, has no free parameters, and typically attains an order of magnitude or better improvement in optimization performance over the current state of the art. We show that for certain problems, the method is strictly decreasing and guaranteed to converge to a critical point. We then apply the algorithm to three… 
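The coordinate descent update at the heart of the method is simple enough to sketch. Below is a minimal NumPy illustration of a Mixing-method-style update for the MAXCUT-style SDP min ⟨C, VᵀV⟩ with unit-norm columns, assuming a symmetric cost matrix C; the function name, iteration count, and random initialization are illustrative choices, not details taken from the paper:

```python
import numpy as np

def mixing_method(C, k, iters=100, seed=0):
    """Coordinate descent sketch for the MAXCUT-style SDP
        min <C, V^T V>  s.t. ||v_i|| = 1 for every column v_i of V,
    where C is a symmetric n x n cost matrix and V is k x n."""
    rng = np.random.default_rng(seed)
    n = C.shape[0]
    V = rng.standard_normal((k, n))
    V /= np.linalg.norm(V, axis=0)                 # unit-norm columns
    for _ in range(iters):
        for i in range(n):
            # Exact minimization over v_i with the other columns fixed:
            #   v_i <- -g / ||g||,  where g = sum_{j != i} C[i, j] * v_j
            g = V @ C[:, i] - C[i, i] * V[:, i]
            norm = np.linalg.norm(g)
            if norm > 0:
                V[:, i] = -g / norm
    return V
```

Each inner step solves its subproblem in closed form, which is why the method has no step size or other free parameters to tune.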


Exploiting low-rank structure in semidefinite programming by approximate operator splitting

This work aims to reduce this scalability gap by proposing a novel proximal algorithm for solving general semidefinite programming problems, exploiting the low-rank property inherent to several semidefinite programming problems.

On the Convergence of Block-Coordinate Maximization for Burer-Monteiro Method

It is proved that the block-coordinate maximization algorithm applied to the non-convex Burer-Monteiro approach enjoys a global sublinear rate without any assumptions on the problem, and a local linear convergence rate even though no local maximum is locally strongly concave.


Momentum-inspired Low-Rank Coordinate Descent for Diagonally Constrained SDPs

We present a novel, practical, and provable approach for solving diagonally constrained semi-definite programming (SDP) problems at scale using accelerated non-convex programming.

  • 90C22 Semidefinite programming
  • 90C25 Convex programming
  • 90C26 Nonconvex programming, global optimization
  • 90C27 Combinatorial optimization
  • 90C30 Nonlinear programming
  • 58C05 Real-valued functions on manifolds
  • 49M37 Numerical methods based on nonlinear programming


MixLasso: Generalized Mixed Regression via Convex Atomic-Norm Regularization

This work studies a novel convex estimator for generalized mixed regression, based on an atomic norm specifically constructed to regularize the number of mixture components; it yields a risk bound that trades off between prediction accuracy and model sparsity without imposing stringent assumptions on the input/output distribution.

Learning Tensor Latent Features

A novel optimization procedure, Binary Matching Pursuit (BMP), is proposed that iteratively searches for binary bases via a MAXCUT-like boolean quadratic solver and is guaranteed to achieve an $\epsilon$-suboptimal solution in $O(1/\epsilon)$ greedy steps, resulting in a trade-off between accuracy and sparsity.

Rank-One Measurements of Low-Rank PSD Matrices Have Small Feasible Sets

This work describes the role of the constraint set in determining the solution to low-rank, positive semidefinite (PSD) matrix sensing problems, characterizing the radius of the set of PSD matrices that satisfy the measurements.

Efficient Tensor Decomposition with Boolean Factors.

It is proved that BMP is guaranteed to converge sublinearly to the optimal solution and recover the factors under mild identifiability conditions; the application of BMP to quantifying neural interactions underlying high-resolution spatiotemporal ECoG recordings is showcased.

SATNet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver

This paper introduces a differentiable (smoothed) maximum satisfiability (MAXSAT) solver that can be integrated into the loop of larger deep learning systems, and demonstrates that embedding this solver in end-to-end learning systems shows promise for incorporating logical structure within deep learning.

References

SHOWING 1-10 OF 29 REFERENCES

The non-convex Burer-Monteiro approach works on smooth semidefinite programs

It is shown that the low-rank Burer-Monteiro formulation of SDPs in that class almost never has any spurious local optima, including applications such as max-cut, community detection in the stochastic block model, robust PCA, phase retrieval, and synchronization of rotations.

A nonlinear programming algorithm for solving semidefinite programs via low-rank factorization

A nonlinear programming algorithm for solving semidefinite programs (SDPs) in standard form that replaces the symmetric, positive semidefinite variable X with a rectangular variable R according to the factorization X = RR^T.
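The substitution X = RR^T is easy to illustrate on the MAXCUT SDP max ⟨C, X⟩ with diag(X) = 1 and X PSD. The sketch below uses projected gradient ascent on R, keeping each row of R unit-norm to enforce the diagonal constraint; the step size, iteration count, and projection scheme are illustrative assumptions, not the algorithm of the cited paper, which is based on an augmented Lagrangian:

```python
import numpy as np

def burer_monteiro(C, r, iters=500, lr=0.05, seed=0):
    """Factor X = R R^T (R is n x r) and run projected gradient ascent on
        max <C, R R^T>   s.t. diag(R R^T) = 1,
    where the unit-norm rows of R enforce the diagonal constraint."""
    rng = np.random.default_rng(seed)
    n = C.shape[0]
    R = rng.standard_normal((n, r))
    R /= np.linalg.norm(R, axis=1, keepdims=True)
    for _ in range(iters):
        # Ascent step along C @ R (the gradient of <C, RR^T> is 2 C R;
        # the constant 2 is absorbed into lr).
        R += lr * (C @ R)
        R /= np.linalg.norm(R, axis=1, keepdims=True)  # re-project rows to the sphere
    return R
```

Working with the n x r factor R instead of the n x n matrix X is what makes the approach scale: the PSD constraint is satisfied by construction and never needs to be projected onto.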

Provable Burer-Monteiro factorization for a class of norm-constrained matrix problems

This work uses the Burer-Monteiro factorization approach to implicitly enforce low-rankness and focuses on constraint sets that include both positive semi-definite (PSD) constraints and specific matrix norm-constraints.

A Spectral Bundle Method for Semidefinite Programming

This work proposes replacing the traditional polyhedral cutting plane model constructed from subgradient information by a semidefinite model that is tailored to eigenvalue problems, and presents numerical examples demonstrating the efficiency of the approach on combinatorial examples.

Low-rank matrix completion via preconditioned optimization on the Grassmann manifold

On the Rank of Extreme Matrices in Semidefinite Programs and the Multiplicity of Optimal Eigenvalues

  • G. Pataki
  • Mathematics, Computer Science
    Math. Oper. Res.
  • 1998
It is proved that clustering must occur at extreme points of the set of optimal solutions, if the number of variables is sufficiently large and a lower bound on the multiplicity of the critical eigenvalue is given.

The Power of Semidefinite Programming Relaxations for MAX-SAT

Semidefinite programming (SDP) based relaxations are surprisingly powerful, providing much tighter bounds than LP relaxations across different constrainedness regions; this shows the effectiveness of SDP relaxations in providing heuristic guidance for iterative variable setting, significantly more accurate than guidance based on LP relaxations.

Improved Iteration Complexity Bounds of Cyclic Block Coordinate Descent for Convex Problems

This paper shows that for a family of quadratic nonsmooth problems, the complexity bounds for cyclic Block Coordinate Proximal Gradient (BCPG), a popular variant of BCD, can match those of gradient descent/proximal gradient (GD/PG) in terms of dependency on K, and establishes an improved complexity bound for Coordinate Gradient Descent (CGD) for general convex problems that can match that of GD in certain scenarios.

Dropping Convexity for Faster Semi-definite Optimization

This is the first paper to provide precise convergence rate guarantees for general convex functions under standard assumptions, along with a procedure to initialize FGD for (restricted) strongly convex objectives when one only has access to f via a first-order oracle.

Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming

This algorithm gives the first substantial progress in approximating MAX CUT in nearly twenty years, and represents the first use of semidefinite programming in the design of approximation algorithms.
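The rounding step behind this approximation guarantee can be sketched in a few lines: given the unit vectors v_i produced by the MAX CUT SDP relaxation, a random hyperplane through the origin splits them into the two sides of the cut. The function names below are illustrative:

```python
import numpy as np

def gw_round(V, rng):
    """Random-hyperplane rounding: sample r ~ N(0, I) and place vertex i on the
    side of the cut given by sign(<r, v_i>), where v_i is the i-th column of V."""
    r = rng.standard_normal(V.shape[0])
    return np.sign(r @ V)

def cut_value(W, x):
    """Total weight of edges crossing the cut x in {-1, +1}^n:
    (1 - x_i x_j) / 4 contributes 1/2 per crossing pair (i, j),
    which is counted twice in the symmetric weight matrix W."""
    return 0.25 * float(np.sum(W * (1.0 - np.outer(x, x))))
```

In practice the rounding is cheap, so it is typically repeated with several random hyperplanes and the best resulting cut is kept.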