Corpus ID: 7328395

# Revisiting Frank-Wolfe: Projection-Free Sparse Convex Optimization

@inproceedings{Jaggi2013RevisitingFP,
title={Revisiting Frank-Wolfe: Projection-Free Sparse Convex Optimization},
author={Martin Jaggi},
booktitle={ICML},
year={2013}
}
• Martin Jaggi
• Published in ICML 2013
• Mathematics, Computer Science
We provide stronger and more general primal-dual convergence results for Frank-Wolfe-type algorithms (a.k.a. [...]). On the application side, this allows us to unify a large variety of existing sparse greedy methods, in particular for optimization over convex hulls of an atomic set, even if those sets can only be approximated, including sparse (or structured sparse) vectors or matrices, low-rank matrices, permutation matrices, or max-norm bounded matrices. We present a new general framework for…
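The projection-free template the abstract describes can be made concrete with a minimal sketch. The following is an illustrative Frank-Wolfe iteration over the probability simplex, where the linear minimization oracle reduces to picking one coordinate; the least-squares objective, data, and function name are our assumptions for illustration, not the paper's code.

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, num_iters=100):
    """Frank-Wolfe over the probability simplex.

    The linear minimization oracle (LMO) over the simplex just picks
    the vertex with the most negative gradient coordinate, so the
    method never needs a projection step.
    """
    x = x0.copy()
    for t in range(num_iters):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0        # LMO: best simplex vertex
        gamma = 2.0 / (t + 2.0)      # standard open-loop step size
        x = (1.0 - gamma) * x + gamma * s
    return x

# Illustrative least-squares objective with a planted solution
# that lies on the simplex.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = A @ np.array([0.3, 0.7, 0.0, 0.0, 0.0])
grad = lambda x: A.T @ (A @ x - b)

x0 = np.full(5, 0.2)
x = frank_wolfe_simplex(grad, x0, num_iters=500)
```

Note that every iterate is a convex combination of simplex vertices, so feasibility holds automatically; this is the "projection-free" property the paper's framework generalizes to arbitrary atomic sets.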
#### 890 Citations
Stochastic Frank-Wolfe for Composite Convex Minimization
• Computer Science, Mathematics
• NeurIPS
• 2019
This work proposes the first conditional-gradient-type method for solving stochastic optimization problems under affine constraints, and guarantees an $O(k^{-1/3})$ convergence rate in expectation on the objective residual and $O(k^{-5/12})$ on the feasibility gap.
Linear Convergence of Frank-Wolfe for Rank-One Matrix Recovery Without Strong Convexity
• D. Garber
• Mathematics, Computer Science
• ArXiv
• 2019
Under this condition, the standard Frank-Wolfe method with line-search, which only requires a single rank-one SVD computation per iteration, finds an $\epsilon$-approximate solution in only $O(\log(1/\epsilon))$ iterations (as opposed to the previous best known bound of $O(1/\epsilon)$), despite the fact that the objective is not strongly convex.
Stochastic Frank-Wolfe for Composite Convex Minimization
• Mathematics, Computer Science
• ArXiv
• 2019
A broad class of convex optimization problems can be formulated as a semidefinite program (SDP): minimization of a convex function over the positive-semidefinite cone subject to some affine constraints.
Approximate Frank-Wolfe Algorithms over Graph-structured Support Sets
• Computer Science, Mathematics
• ArXiv
• 2021
Improved Frank-Wolfe algorithms are proposed to solve convex optimization problems over graph-structured support sets where the linear minimization oracle cannot be efficiently obtained in general, with significant improvement in recovering real-world images with graph-structured sparsity.
Frank-Wolfe Splitting via Augmented Lagrangian Method
• Mathematics, Computer Science
• AISTATS
• 2018
This work develops and analyzes the Frank-Wolfe Augmented Lagrangian (FW-AL) algorithm, a method for minimizing a smooth function over convex compact sets related by a "linear consistency" constraint that only requires access to a linear minimization oracle over the individual constraints.
Frank-Wolfe Optimization for Symmetric-NMF under Simplicial Constraint
• Computer Science, Mathematics
• UAI
• 2018
This paper proposes a Frank-Wolfe (FW) solver to optimize the symmetric nonnegative matrix factorization problem under a simplicial constraint, which has recently been proposed for probabilistic clustering, and proves an $O(1/\varepsilon^2)$ convergence rate to KKT points, which matches the best known result in the unconstrained nonconvex setting using gradient methods.
Faster Rates for the Frank-Wolfe Method over Strongly-Convex Sets
• Mathematics, Computer Science
• ICML
• 2015
This paper proves that the vanilla FW method converges at a rate of $1/t^2$, and shows that various balls induced by $\ell_p$ norms, Schatten norms, and group norms are strongly convex, while on the other hand linear optimization over these sets is straightforward and admits a closed-form solution.
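The closed-form linear optimization mentioned above can be illustrated for the $p=1$ case: over an $\ell_1$ ball, the LMO is attained at a signed scaled coordinate vector. This is a sketch under our own naming and example values, not code from the paper.

```python
import numpy as np

def lmo_l1_ball(grad, radius=1.0):
    """Linear minimization oracle over the l1 ball of a given radius.

    argmin_{||s||_1 <= r} <grad, s> is attained at a signed vertex:
    -r * sign(g_i) * e_i for the coordinate i with the largest |g_i|.
    """
    i = np.argmax(np.abs(grad))
    s = np.zeros_like(grad)
    s[i] = -radius * np.sign(grad[i])
    return s

g = np.array([0.5, -2.0, 1.0])
print(lmo_l1_ball(g))  # -> [0. 1. 0.]
```

The oracle costs one pass over the gradient, which is why linear optimization over such balls is considered "straightforward" compared to projection.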
Greedy Algorithms for Cone Constrained Optimization with Convergence Guarantees
• Computer Science, Mathematics
• NIPS
• 2017
This paper considers the intermediate case of optimization over the convex cone, parametrized as the conic hull of a generic atom set, leading to the first principled definitions of non-negative MP algorithms, which give explicit convergence rates and demonstrate excellent empirical performance.
SVD-free Convex-Concave Approaches for Nuclear Norm Regularization
• Mathematics, Computer Science
• IJCAI
• 2017
This paper exploits the dual characterization of the nuclear norm to introduce a convex-concave optimization problem and design a subgradient-based algorithm without performing SVD, which is the first SVD-free convex optimization approach for nuclear-norm regularized problems that does not rely on the smoothness assumption.
Non-convex Optimization with Frank-Wolfe Algorithm and Its Variants
• 2016
Recently, the Frank-Wolfe (a.k.a. conditional gradient) algorithm has become a popular tool for tackling machine learning problems, as it avoids the costly projection computation required by traditional projected-gradient methods.

#### References

Showing 1–10 of 59 references
Sparse convex optimization methods for machine learning
A convergence proof guaranteeing $\epsilon$-small error after $O(1/\epsilon)$ iterations is given, together with the sparsity of approximate solutions for any $\ell_1$-regularized convex optimization problem (and for optimization over the simplex), expressed as a function of the approximation quality.
Chebyshev Greedy Algorithm in convex optimization
The Chebyshev Greedy Algorithm is a generalization of the well-known Orthogonal Matching Pursuit, defined in a Hilbert space, to the case of Banach spaces. We apply this algorithm for constructing…
The Convex Geometry of Linear Inverse Problems
• Mathematics, Computer Science
• Found. Comput. Math.
• 2012
This paper provides a general framework to convert notions of simplicity into convex penalty functions, resulting in convex optimization solutions to linear, underdetermined inverse problems.
Convex Sparse Matrix Factorizations
• Mathematics, Computer Science
• ArXiv
• 2008
This work presents a convex formulation of dictionary learning for sparse signal decomposition that introduces an explicit trade-off between size and sparsity of the decomposition of rectangular matrices and compares the estimation abilities of the convex and nonconvex approaches.
A Singular Value Thresholding Algorithm for Matrix Completion
• Computer Science, Mathematics
• SIAM J. Optim.
• 2010
This paper develops a simple first-order and easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank, and develops a framework in which one can understand these algorithms in terms of well-known Lagrange multiplier algorithms.
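The shrinkage operation at the heart of singular value thresholding can be sketched in a few lines. This is an illustrative soft-thresholding step (the proximal operator of the scaled nuclear norm), not the paper's full matrix-completion iteration; the function name and example matrix are ours.

```python
import numpy as np

def svt_step(M, tau):
    """Soft-threshold the singular values of M at level tau.

    This computes the proximal operator of tau * ||.||_* (nuclear
    norm): singular values below tau are zeroed, the rest shrink by
    tau, which promotes low-rank solutions.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Thresholding a diagonal matrix with singular values 3 and 1 at
# level 2 shrinks them to 1 and 0, yielding a rank-one result.
X = svt_step(np.diag([3.0, 1.0]), 2.0)
```

Because only the leading singular values survive the threshold, iterates stay low-rank, which is what makes the method efficient on low-rank recovery problems.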
Greedy Algorithms for Structurally Constrained High Dimensional Problems
• Computer Science, Mathematics
• NIPS
• 2011
This framework not only unifies existing greedy algorithms by recovering them as special cases but also yields novel ones that solve convex optimization problems that arise when dealing with structurally constrained high-dimensional problems.
Forward Basis Selection for Sparse Approximation over Dictionary
• Computer Science
• AISTATS
• 2012
This paper generalizes the forward greedy selection method to the setup of sparse approximation over a pre-fixed dictionary and proposes a fully corrective forward selection algorithm along with convergence analysis.
Learning with Submodular Functions: A Convex Optimization Perspective
• F. Bach
• Computer Science, Mathematics
• Found. Trends Mach. Learn.
• 2013
In Learning with Submodular Functions: A Convex Optimization Perspective, the theory of submodular functions is presented in a self-contained way from a convex analysis perspective, presenting tight links between certain polyhedra, combinatorial optimization and convex optimization problems.
Practical Large-Scale Optimization for Max-norm Regularization
• Computer Science, Mathematics
• NIPS
• 2010
This work uses a factorization technique of Burer and Monteiro to devise scalable first-order algorithms for convex programs involving the max-norm and these algorithms are applied to solve huge collaborative filtering, graph cut, and clustering problems.
Greedy Approximation in Convex Optimization
It is shown how the technique developed in nonlinear approximation theory, in particular the greedy approximation technique, can be adjusted for finding a sparse solution of an optimization problem.