Corpus ID: 119664780

Restarting Frank-Wolfe

@inproceedings{Kerdreux2018RestartingF,
  title={Restarting Frank-Wolfe},
  author={T. Kerdreux and A. d'Aspremont and Sebastian Pokutta},
  booktitle={AISTATS},
  year={2018}
}
  • Computer Science, Mathematics
  • Conditional Gradients (a.k.a. Frank-Wolfe algorithms) form a classical set of methods for constrained smooth convex minimization, popular due to their simplicity, the absence of projection steps, and competitive numerical performance. While the vanilla Frank-Wolfe algorithm only ensures a worst-case rate of $O(1/\epsilon)$, various recent results have shown that for strongly convex functions, the method can be slightly modified to achieve linear convergence. However, this still leaves a huge gap between…
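
For concreteness, here is a minimal sketch of the vanilla Frank-Wolfe iteration discussed above, together with a generic scheduled-restart wrapper. It assumes a simplex feasible set and a least-squares objective purely for illustration; the helper names (lmo_simplex, restarted_fw) are ours, and the wrapper is a plain restart template, not the paper's fractional away-step restart scheme.

    import numpy as np

    def lmo_simplex(grad):
        """Linear minimization oracle (LMO) for the unit simplex:
        argmin_{s in simplex} <grad, s> is the vertex e_i with
        i = argmin_i grad_i."""
        s = np.zeros_like(grad)
        s[np.argmin(grad)] = 1.0
        return s

    def frank_wolfe(grad_f, x0, lmo, max_iter=500, tol=1e-8):
        """Vanilla Frank-Wolfe with the standard open-loop step size
        gamma_t = 2/(t+2). Stops when the Frank-Wolfe gap
        <grad, x - s> (a duality-gap certificate) drops below tol."""
        x = x0.copy()
        for t in range(max_iter):
            g = grad_f(x)
            s = lmo(g)
            gap = g @ (x - s)
            if gap <= tol:
                break
            gamma = 2.0 / (t + 2.0)
            x = (1.0 - gamma) * x + gamma * s
        return x

    def restarted_fw(grad_f, x0, lmo, rounds=5, inner_iter=100):
        """Generic scheduled restart: rerun Frank-Wolfe from the last
        iterate so the step-size schedule is reset each round. This is
        only a restart template, not the paper's restart scheme."""
        x = x0
        for _ in range(rounds):
            x = frank_wolfe(grad_f, x, lmo, max_iter=inner_iter)
        return x

    # Usage: minimize f(x) = 0.5 * ||A x - b||^2 over the simplex.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 10))
    b = rng.standard_normal(20)
    grad_f = lambda x: A.T @ (A @ x - b)
    x0 = np.ones(10) / 10.0
    x_hat = restarted_fw(grad_f, x0, lmo_simplex)

Restarting from the last iterate resets the open-loop schedule gamma_t = 2/(t+2); restart schemes of this general flavor aim to interpolate between the $O(1/\epsilon)$ worst case and the linearly convergent regime mentioned in the abstract.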
    Citations

    • Blended Conditional Gradients: the unconditioning of conditional gradients (11 citations)
    • Locally Accelerated Conditional Gradients (3 citations)
    • Primal-Dual Block Frank-Wolfe
    • Active set complexity of the Away-step Frank-Wolfe Algorithm (1 citation)
    • Sharpness, Restart and Acceleration (32 citations)
    • Primal-Dual Block Generalized Frank-Wolfe (4 citations)
    • Second-order Conditional Gradients (1 citation)
    • Restarting Algorithms: Sometimes There Is Free Lunch
    • Blended Matching Pursuit
