This paper proves that the vanilla FW method converges at a rate of 1/t², and shows that various balls induced by lp norms, Schatten norms, and group norms are strongly convex, while linear optimization over these sets remains straightforward and admits a closed-form solution.
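The snippet above concerns Frank-Wolfe over strongly convex balls whose linear minimization oracle is closed-form. A minimal sketch under illustrative assumptions (an l2 ball and a hypothetical quadratic objective, neither taken from the paper):

```python
import numpy as np

# Frank-Wolfe sketch over an l2 ball {x : ||x|| <= r}, a strongly
# convex set. The objective f(x) = ||x - b||^2 / 2 is a hypothetical
# example; its gradient is x - b. The linear minimization oracle
# (LMO) over the ball is closed-form: argmin_{||s||<=r} <g, s> is
# -r * g / ||g||.
def frank_wolfe_l2_ball(b, r, steps=200):
    x = np.zeros_like(b)
    for t in range(1, steps + 1):
        grad = x - b                                     # gradient at x
        s = -r * grad / (np.linalg.norm(grad) + 1e-12)   # closed-form LMO
        gamma = 2.0 / (t + 2)                            # standard FW step size
        x = x + gamma * (s - x)                          # convex combination step
    return x

# Usage: minimizing distance to b = [3, 4] over the unit ball
# amounts to projecting b, so the result approaches [0.6, 0.8].
x_star = frank_wolfe_l2_ball(np.array([3.0, 4.0]), r=1.0)
```

The closed-form oracle is the point of the snippet: no projection is ever computed, only one linear optimization per iteration.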

This work shows how computing the leading principal component can be reduced to solving a small number of well-conditioned convex optimization problems, which gives rise to a new efficient method for PCA based on recent advances in stochastic methods for convex optimization.

There is an important class of scientific arguments, namely cases of the apparent confirmation of new hypotheses by old evidence, for which the Bayesian account of confirmation seems hopelessly inadequate. This essay examines this difficulty, which I call the problem of old evidence.

This paper is the first to consider the online version of PCA, in which the vectors x_t are presented to the algorithm one by one and, for every presented x_t, the algorithm must output a vector y_t before receiving x_{t+1}, despite the handicap of operating online.
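To make the online protocol concrete, here is a hedged sketch using Oja's rule, a classic streaming update for the leading principal component (an illustrative stand-in, not necessarily the algorithm of the paper above). Each x_t is processed once and an estimate y_t is committed before x_{t+1} arrives:

```python
import numpy as np

# Oja's rule for streaming PCA: maintain a unit vector w and nudge it
# toward each incoming sample's projection, with a decaying step size.
def oja_streaming_pca(stream, dim, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(dim)
    w /= np.linalg.norm(w)
    outputs = []
    for t, x in enumerate(stream, start=1):
        w += (lr / np.sqrt(t)) * x * (x @ w)  # stochastic gradient step
        w /= np.linalg.norm(w)                # keep the estimate unit-norm
        outputs.append(w.copy())              # y_t, emitted before x_{t+1}
    return outputs

# Usage: synthetic data whose covariance is dominated by the first
# coordinate, so the estimate should align with e_1.
rng = np.random.default_rng(1)
data = rng.standard_normal((500, 2)) * np.array([3.0, 0.1])
ys = oja_streaming_pca(data, dim=2)
w_final = ys[-1]
```

The one-pass structure is the point: each sample is touched once, and the output for round t cannot depend on later samples.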

A novel conditional gradient algorithm for smooth and strongly convex optimization over polyhedral sets that performs only a single linear optimization step over the domain on each iteration and enjoys a linear convergence rate, an exponential improvement over previous results.

A new conditional gradient variant and a corresponding analysis that improve on both of the above shortcomings, together with a novel way to compute decomposition-invariant away-steps that applies to several important structured polytopes capturing central concepts.

This work presents the first sublinear-time approximation algorithm for semidefinite programs, which may be useful for problems in which the size of the data makes even linear-time algorithms prohibitively slow in practice.

A robust analysis is given of the classic method of shift-and-invert preconditioning, which reduces eigenvector computation to approximately solving a sequence of linear systems.
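The reduction in the snippet above can be sketched in a few lines: to find the top eigenvector of A, run power iteration on (λI − A)⁻¹ for a shift λ slightly above the top eigenvalue, so each iteration amounts to one linear-system solve. This is a generic illustration of shift-and-invert, not the paper's analysis; `np.linalg.solve` stands in for the approximate solver the reduction would actually call.

```python
import numpy as np

# Shift-and-invert power iteration: the matrix (lam*I - A)^{-1} shares
# A's eigenvectors, but its spectrum is inverted around lam, so the
# eigenvector nearest lam dominates and power iteration converges fast.
def shift_invert_top_eigvec(A, lam, iters=20):
    n = A.shape[0]
    M = lam * np.eye(n) - A
    v = np.ones(n) / np.sqrt(n)
    for _ in range(iters):
        v = np.linalg.solve(M, v)   # one linear-system solve per iteration
        v /= np.linalg.norm(v)
    return v

# Usage: a diagonal example with a small eigengap (3.0 vs 2.9), where
# plain power iteration would be slow but the shifted ratio is large.
A = np.diag([3.0, 2.9, 1.0])
v = shift_invert_top_eigvec(A, lam=3.01)
```

The shift choice matters: the closer λ is to the top eigenvalue (without crossing it), the larger the ratio between the top two transformed eigenvalues, and the fewer solves are needed, at the cost of each system being worse-conditioned.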

New algorithms are presented that guarantee regret rates with at most a mild dependence on the dimension, require no expensive matrix decompositions, and admit implementations that leverage sparsity in the data to further reduce computation.