Publications
Faster Rates for the Frank-Wolfe Method over Strongly-Convex Sets
TLDR
This paper proves that the vanilla FW method converges at a rate of 1/t^2 over strongly convex sets, and shows that various balls induced by lp norms, Schatten norms and group norms are strongly convex, while linear optimization over these sets is straightforward and admits a closed-form solution.
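A minimal sketch of the vanilla Frank-Wolfe iteration over an l2 ball, one of the strongly convex sets mentioned above; the objective, radius, and step-size rule are illustrative assumptions, and the closed-form linear minimizer over the ball is -r * g / ||g||.

```python
import numpy as np

def frank_wolfe_l2_ball(grad_f, x0, radius, num_iters=200):
    """Vanilla Frank-Wolfe over the l2 ball {x : ||x||_2 <= radius}.

    The linear-minimization oracle over the ball has the closed form
    argmin_{||s|| <= r} <g, s> = -r * g / ||g||_2.
    """
    x = x0.copy()
    for t in range(num_iters):
        g = grad_f(x)
        s = -radius * g / (np.linalg.norm(g) + 1e-12)  # closed-form LMO
        gamma = 2.0 / (t + 2.0)                        # standard FW step size
        x = (1.0 - gamma) * x + gamma * s
    return x

# Toy usage: minimize ||x - b||^2 over the unit ball, with b outside the ball.
b = np.array([2.0, 0.0])
x_star = frank_wolfe_l2_ball(lambda x: 2.0 * (x - b), np.zeros(2), radius=1.0)
```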
Fast and Simple PCA via Convex Optimization
TLDR
This work shows how computing the leading principal component can be reduced to solving a small number of well-conditioned convex optimization problems, which gives rise to a new efficient method for PCA based on recent advances in stochastic methods for convex optimization.
Old Evidence and Logical Omniscience in Bayesian Confirmation Theory
TLDR
There is an important class of scientific arguments, those in which new hypotheses appear to be confirmed by old evidence, for which the Bayesian account of confirmation seems hopelessly inadequate; this essay examines that difficulty, known as the problem of old evidence.
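The formal core of the problem of old evidence can be stated in one application of Bayes' theorem: if the evidence is already known, conditioning on it cannot raise the credence of any hypothesis. A minimal rendering:

```latex
% If the evidence E is already known, then P(E) = 1 (and hence P(E \mid H) = 1
% whenever P(H) > 0), so Bayes' theorem gives
P(H \mid E) = \frac{P(H)\, P(E \mid H)}{P(E)} = P(H),
% i.e. conditionalizing on old evidence assigns it no confirming power.
```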
A Linearly Convergent Variant of the Conditional Gradient Algorithm under Strong Convexity, with Applications to Online and Stochastic Optimization
TLDR
A novel conditional gradient algorithm for smooth and strongly convex optimization over polyhedral sets that performs only a single linear optimization step over the domain on each iteration and enjoys a linear convergence rate.
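Over a polytope, the single linear optimization step referred to above returns a vertex; for the probability simplex it is just the coordinate with the smallest gradient entry. The sketch below shows that oracle inside a plain conditional gradient loop; it is an illustrative stand-in with a sublinear rate, not the paper's linearly convergent variant.

```python
import numpy as np

def simplex_lmo(grad):
    """LMO over the probability simplex: argmin_{s in simplex} <grad, s>
    is the vertex e_i with i = argmin_i grad_i."""
    s = np.zeros_like(grad)
    s[np.argmin(grad)] = 1.0
    return s

def conditional_gradient(grad_f, x0, num_iters=200, lmo=simplex_lmo):
    """Plain conditional gradient: one LMO call per iteration."""
    x = x0.copy()
    for t in range(num_iters):
        s = lmo(grad_f(x))
        gamma = 2.0 / (t + 2.0)
        x = (1.0 - gamma) * x + gamma * s
    return x
```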
Online Principal Components Analysis
TLDR
This paper is the first to consider the online version of PCA, where the vectors x_t are presented to the algorithm one by one and, for every presented x_t, the algorithm must output a vector y_t before receiving x_{t+1}, with a somewhat larger target dimension allowed to compensate for the handicap of operating online.
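As a hedged illustration of this online protocol (commit to y_t before seeing x_{t+1}), the sketch below tracks the leading principal direction with Oja's incremental rule; this is a standard stand-in, not the paper's algorithm, and the learning rate is an assumption.

```python
import numpy as np

def online_pca_stream(stream, dim, eta=0.01):
    """Process vectors x_t one by one, emitting y_t = <w, x_t> before x_{t+1}.

    Uses Oja's rule to maintain an estimate w of the leading principal
    direction; a stand-in for the online setting, not the paper's method.
    """
    rng = np.random.default_rng(0)
    w = rng.normal(size=dim)
    w /= np.linalg.norm(w)
    for x in stream:
        y = float(w @ x)          # output for x_t, committed before x_{t+1}
        w += eta * y * x          # Oja's incremental update
        w /= np.linalg.norm(w)
        yield y

# Toy usage on a random stream of 3-dimensional vectors.
data = np.random.default_rng(1).normal(size=(100, 3))
ys = list(online_pca_stream(data, dim=3))
```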
A Linearly Convergent Conditional Gradient Algorithm with Applications to Online and Stochastic Optimization
TLDR
A novel conditional gradient algorithm for smooth and strongly convex optimization over polyhedral sets that performs only a single linear optimization step over the domain on each iteration and enjoys a linear convergence rate, which gives an exponential improvement in convergence rate over previous results.
Linear-Memory and Decomposition-Invariant Linearly Convergent Conditional Gradient Algorithm for Structured Polytopes
TLDR
A new conditional gradient variant and a corresponding analysis that improve on both of the above shortcomings, together with a novel way to compute decomposition-invariant away-steps that applies to several important structured polytopes capturing central concepts.
Approximating Semidefinite Programs in Sublinear Time
TLDR
This work presents the first sublinear-time approximation algorithm for semidefinite programs, which may be useful for problems whose data size makes even linear-time algorithms prohibitively slow in practice.
Faster Eigenvector Computation via Shift-and-Invert Preconditioning
TLDR
This paper gives a robust analysis of the classic shift-and-invert preconditioning method, which reduces eigenvector computation to approximately solving a sequence of linear systems.
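A minimal sketch of the classic shift-and-invert reduction described above: with a shift sigma slightly above the top eigenvalue, power iteration on (sigma*I - A)^{-1} converges quickly to the leading eigenvector, and each step amounts to solving one linear system. The shift value and the exact solver used here are illustrative assumptions.

```python
import numpy as np

def shift_invert_top_eigvec(A, sigma, num_iters=50):
    """Estimate the leading eigenvector of symmetric A by power iteration
    on (sigma*I - A)^{-1}; each iteration is one linear-system solve."""
    n = A.shape[0]
    M = sigma * np.eye(n) - A
    w = np.random.default_rng(0).normal(size=n)
    w /= np.linalg.norm(w)
    for _ in range(num_iters):
        w = np.linalg.solve(M, w)   # the "solve a linear system" step
        w /= np.linalg.norm(w)
    return w

# Toy usage: sigma chosen a bit above the largest eigenvalue of A.
A = np.diag([3.0, 2.0, 1.0])
v = shift_invert_top_eigvec(A, sigma=3.1)
```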
Online Learning of Eigenvectors
TLDR
New algorithms are presented that guarantee regret rates with only a mild dependence on the dimension, do not require any expensive matrix decompositions, and admit implementations that leverage sparsity in the data to further reduce computation.