• Corpus ID: 22877890

# The Mixing method: coordinate descent for low-rank semidefinite programming

```bibtex
@article{Wang2017TheMM,
  title={The Mixing method: coordinate descent for low-rank semidefinite programming},
  author={Po-Wei Wang and Wei-Cheng Chang and J. Zico Kolter},
  journal={ArXiv},
  year={2017},
  volume={abs/1706.00476}
}
```
• Published 1 June 2017 · Computer Science · ArXiv
In this paper, we propose a coordinate descent approach to low-rank structured semidefinite programming. The approach, which we call the Mixing method, is extremely simple to implement, has no free parameters, and typically attains an order of magnitude or better improvement in optimization performance over the current state of the art. We show that for certain problems, the method is strictly decreasing and guaranteed to converge to a critical point. We then apply the algorithm to three…
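The update the abstract alludes to can be sketched in a few lines: each unit-norm column of the low-rank factor is replaced by the exact minimizer of the objective over the unit sphere, so there is no step size or other free parameter. This is an illustrative reconstruction for the MAXCUT-style SDP min ⟨C, VᵀV⟩; the function names, random initialization, and caller-chosen rank `k` are assumptions, not the authors' released code:

```python
import numpy as np

def mixing_method(C, k, iters=50, seed=0):
    """Coordinate-descent sketch of the Mixing method for the MAXCUT-style SDP
    relaxation  min <C, V^T V>  s.t. each column v_i of V has unit norm,
    where the PSD variable X = V^T V is kept in low-rank factored form.
    C must be a symmetric (n, n) cost matrix.  (Illustrative sketch only.)"""
    rng = np.random.default_rng(seed)
    n = C.shape[0]
    V = rng.standard_normal((k, n))
    V /= np.linalg.norm(V, axis=0)          # unit-norm columns
    for _ in range(iters):
        for i in range(n):
            # g = sum_{j != i} C[i, j] * v_j : the part of the gradient
            # w.r.t. v_i that involves the other columns
            g = V @ C[:, i] - C[i, i] * V[:, i]
            norm = np.linalg.norm(g)
            if norm > 1e-12:
                V[:, i] = -g / norm          # exact minimizer over the unit sphere
    return V

def objective(C, V):
    """<C, V^T V> = sum_{i,j} C[i, j] * (v_i . v_j)."""
    return float(np.trace(C @ (V.T @ V)))
```

Because every column update solves its coordinate subproblem exactly, the objective is non-increasing from one sweep to the next, which matches the strictly decreasing behavior the abstract claims for certain problems.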

## Citations

• Computer Science · Optimization · 2020
This work aims to reduce this scalability gap by proposing a novel proximal algorithm for solving general semidefinite programming problems, exploiting the low-rank property inherent to several semidefinite programming problems.
• Mathematics, Computer Science · 2018
It is proved that the block-coordinate maximization algorithm applied to the non-convex Burer-Monteiro approach enjoys a global sublinear rate without any assumptions on the problem, and a local linear convergence rate when the local maxima are locally strongly concave.
• Computer Science · 2022
The key characteristic of the proposed algorithm is its ability to exploit the low-rank property inherent to several semidefinite programming problems, which provides a substantial speedup and allows the operator splitting method to efficiently scale to larger instances.
• Computer Science · ArXiv · 2021
We present a novel, practical, and provable approach for solving diagonally constrained semi-definite programming (SDP) problems at scale using accelerated non-convex programming. Our algorithm…
• Mathematics, Computer Science · 2022
It is proved that the block-coordinate maximization algorithm applied to the non-convex Burer-Monteiro method globally converges to a first-order stationary point with a sublinear rate without any assumptions on the problem.
• Computer Science · NeurIPS · 2018
This work studies a novel convex estimator for generalized mixed regression, based on an atomic norm specifically constructed to regularize the number of mixture components, which gives a risk bound that trades off between prediction accuracy and model sparsity without imposing stringent assumptions on the input/output distribution.
• Computer Science · ArXiv · 2018
A novel optimization procedure, Binary matching pursuit (BMP), is proposed that iteratively searches for binary bases via a MAXCUT-like boolean quadratic solver and is guaranteed to achieve an $\epsilon$-suboptimal solution in O($1/\epsilon$) greedy steps, resulting in a trade-off between accuracy and sparsity.
• Computer Science · ArXiv · 2020
This work describes the role of the constraint set in determining the solution to low-rank, positive semidefinite (PSD) matrix sensing problems and characterizes the radius of the set of PSD matrices that satisfy the measurements.
• Computer Science · 2018
It is proved that BMP is guaranteed to converge sublinearly to the optimal solution and recover the factors under mild identifiability conditions, and the application of BMP in quantifying neural interactions underlying high-resolution spatiotemporal ECoG recordings is showcased.
• Computer Science · ICML · 2019
This paper introduces a differentiable (smoothed) maximum satisfiability (MAXSAT) solver that can be integrated into the loop of larger deep learning systems, and demonstrates that integrating this solver into end-to-end learning systems shows promise for embedding logical structure within deep learning.
