# Restarting Frank-Wolfe: Faster Rates Under Hölderian Error Bounds

@article{Kerdreux2018RestartingFF, title={Restarting Frank-Wolfe: Faster Rates Under H{\"o}lderian Error Bounds}, author={Thomas Kerdreux and Alexandre d'Aspremont and Sebastian Pokutta}, journal={arXiv: Optimization and Control}, year={2018} }

Conditional Gradients (aka Frank-Wolfe algorithms) form a classical set of methods for constrained smooth convex minimization due to their simplicity, the absence of projection steps, and competitive numerical performance. While the vanilla Frank-Wolfe algorithm only ensures a worst-case rate of $\mathcal{O}(1/\epsilon)$, various recent results have shown that for strongly convex functions on polytopes, the method can be slightly modified to achieve linear convergence. However, this still…
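For context on the $\mathcal{O}(1/\epsilon)$ worst-case rate mentioned in the abstract, here is a minimal sketch of the vanilla Frank-Wolfe iteration. This is an illustrative sketch, not the paper's restarted variant; the function names (`frank_wolfe`, `lmo`) and the toy problem are mine.

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, num_iters=100):
    """Vanilla Frank-Wolfe with the standard 2/(t+2) open-loop step size."""
    x = x0
    for t in range(num_iters):
        g = grad(x)
        s = lmo(g)                      # s = argmin_{s in C} <g, s>
        gamma = 2.0 / (t + 2.0)         # step size behind the O(1/t) rate
        x = (1.0 - gamma) * x + gamma * s
    return x

# Toy problem: minimize f(x) = ||x - b||^2 / 2 over the l2 unit ball.
b = np.array([3.0, 4.0])
grad_f = lambda x: x - b
lmo_l2 = lambda g: -g / np.linalg.norm(g)  # closed-form LMO for the l2 ball
x_hat = frank_wolfe(grad_f, lmo_l2, np.zeros(2))
# Optimum is the projection of b onto the ball, b/||b|| = (0.6, 0.8)
```

Note that each iteration needs only one gradient and one linear minimization oracle (LMO) call, and no projection, which is the source of the method's practical appeal.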


## 2 Citations

Parameter-free Locally Accelerated Conditional Gradients

- Computer Science, Mathematics · ICML
- 2021

A novel Parameter-Free Locally accelerated CG (PF-LaCG) algorithm with rigorous convergence guarantees, which demonstrates local acceleration and showcases the practical improvements of PF-LaCG over non-accelerated algorithms, both in iteration count and wall-clock time.

Blended Conditional Gradients: the unconditioning of conditional gradients

- Computer Science, Mathematics · ICML 2019
- 2018

This work presents a blended conditional gradient approach for minimizing a smooth convex function over a polytope P, combining the Frank--Wolfe algorithm with gradient-based steps, achieving linear convergence for strongly convex functions, along with good practical performance.

## References

Showing 1–10 of 60 references

d’Aspremont. Sharpness, restart, and acceleration

- SIAM Journal on Optimization
- 2020

A. d’Aspremont, and S. Pokutta. Restarting Frank-Wolfe

- In The 22nd International Conference on Artificial Intelligence and Statistics, pages 1275–1283
- 2019

Faster Rates for the Frank-Wolfe Method over Strongly-Convex Sets

- Mathematics, Computer Science · ICML
- 2015

This paper proves that the vanilla FW method converges at a rate of $1/t^2$, and shows that, on one hand, various balls induced by $\ell_p$ norms, Schatten norms, and group norms are strongly convex, while on the other hand, linear optimization over these sets is straightforward and admits a closed-form solution.
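The closed-form linear optimization mentioned in this summary can be illustrated for two standard balls. This is a hedged sketch (function names are mine); note that the $\ell_1$ ball is a polytope, not strongly convex, and is shown only because its LMO is also closed-form.

```python
import numpy as np

def lmo_l2_ball(g, radius=1.0):
    # argmin_{||s||_2 <= r} <g, s> is attained at -r * g / ||g||_2
    return -radius * g / np.linalg.norm(g)

def lmo_l1_ball(g, radius=1.0):
    # argmin over the l1 ball is a vertex: all mass on the largest |g_i|
    i = int(np.argmax(np.abs(g)))
    s = np.zeros_like(g)
    s[i] = -radius * np.sign(g[i])
    return s

g = np.array([1.0, -2.0])
v2 = lmo_l2_ball(g)   # = -g / sqrt(5), approx (-0.447, 0.894)
v1 = lmo_l1_ball(g)   # = (0.0, 1.0), a vertex of the l1 ball
```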

On the Global Linear Convergence of Frank-Wolfe Optimization Variants

- Mathematics, Computer Science · NIPS
- 2015

This paper highlights and clarifies several variants of the Frank-Wolfe optimization algorithm that have been successfully applied in practice: away-steps FW, pairwise FW, fully-corrective FW, and Wolfe's minimum norm point algorithm, and proves for the first time that they all enjoy global linear convergence under a weaker condition than strong convexity of the objective.

Boosting Frank-Wolfe by Chasing Gradients

- Computer Science, Mathematics · ICML
- 2020

This paper proposes to speed up the Frank-Wolfe algorithm by better aligning the descent direction with the negative gradient via a subroutine, and derives convergence rates for the method ranging from $\mathcal{O}(1/t)$ to $\mathcal{O}(e^{-\omega t})$.

Locally Accelerated Conditional Gradients

- Computer Science, Mathematics · AISTATS
- 2020

This work presents Locally Accelerated Conditional Gradients -- an algorithmic framework that couples accelerated steps with conditional gradient steps to achieve local acceleration on smooth strongly convex problems, attaining the optimal accelerated local convergence rate.

Polytope Conditioning and Linear Convergence of the Frank-Wolfe Algorithm

- Mathematics, Computer Science · Math. Oper. Res.
- 2019

For a convex quadratic objective, it is shown that the rate of convergence is determined by a condition number of a suitably scaled polytope, and new insight is given into the linear convergence property.

Restarting Frank-Wolfe

- Mathematics, Computer Science · AISTATS
- 2019

A new variant of Conditional Gradients is presented, that can dynamically adapt to the function's geometric properties using restarts and thus smoothly interpolates between the sublinear and linear regimes and applies to generic compact convex constraint sets.
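The restart idea described in this summary can be caricatured as rerunning Frank-Wolfe from the last iterate while resetting the step-size schedule. This is a simplified, fixed-schedule sketch under my own assumptions; the paper's restarts are criterion-driven and adapt to the Hölderian error bound, and all names here are illustrative.

```python
import numpy as np

def fw_round(grad, lmo, x0, num_iters):
    # One round of vanilla Frank-Wolfe with the 2/(t+2) schedule.
    x = x0
    for t in range(num_iters):
        s = lmo(grad(x))
        gamma = 2.0 / (t + 2.0)
        x = (1.0 - gamma) * x + gamma * s
    return x

def fw_with_restarts(grad, lmo, x0, rounds=4, iters_per_round=25):
    # Restart from the current iterate; resetting the step-size counter
    # lets later rounds take the large early steps again.
    x = x0
    for _ in range(rounds):
        x = fw_round(grad, lmo, x, iters_per_round)
    return x

# Toy problem: project b onto the l2 unit ball.
b = np.array([3.0, 4.0])
x_hat = fw_with_restarts(lambda x: x - b,
                         lambda g: -g / np.linalg.norm(g),
                         np.zeros(2))
```

Because each round starts from the previous round's iterate, the wrapper needs no knowledge of the constraint set beyond the same LMO used by the inner loop.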

Blended Conditional Gradients: the unconditioning of conditional gradients

- Computer Science, Mathematics · ICML 2019
- 2018

This work presents a blended conditional gradient approach for minimizing a smooth convex function over a polytope P, combining the Frank--Wolfe algorithm with gradient-based steps, achieving linear convergence for strongly convex functions, along with good practical performance.