Publications
Lower bounds for finding stationary points I
TLDR
The lower bounds are sharp to within constants, and they show that gradient descent, cubic-regularized Newton's method, and generalized $p$th-order regularization are worst-case optimal within their natural function classes.
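As background for why these bounds are called sharp (the matching upper bounds are standard results restated here, not quoted from the abstract), the oracle complexities for reaching an $\epsilon$-stationary point in the corresponding Lipschitz-derivative function classes are:

```latex
% Standard upper bounds matched (up to constants) by the paper's lower bounds,
% for reaching a point with \|\nabla f(x)\| \le \epsilon:
\underbrace{O(\epsilon^{-2})}_{\text{gradient descent}}, \qquad
\underbrace{O(\epsilon^{-3/2})}_{\text{cubic-regularized Newton}}, \qquad
\underbrace{O(\epsilon^{-(p+1)/p})}_{p\text{th-order regularization}}.
```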
Accelerated Methods for Non-Convex Optimization
TLDR
The method improves upon the complexity of gradient descent and provides the additional second-order guarantee that $\nabla^2 f(x) \succeq -O(\epsilon^{1/2})I$ for the computed $x$.
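A minimal numerical check of the kind of guarantee quoted above, assuming the gradient and Hessian are available as NumPy arrays; the function name and the constant `c` are illustrative, not from the paper:

```python
import numpy as np

def is_approx_second_order_stationary(grad, hess, eps, c=1.0):
    """Check ||grad|| <= eps and lambda_min(hess) >= -c * sqrt(eps).

    Mirrors the guarantee in the TLDR: the computed point x satisfies
    nabla^2 f(x) >= -O(sqrt(eps)) * I in addition to having a small gradient.
    """
    grad_small = np.linalg.norm(grad) <= eps
    lam_min = np.linalg.eigvalsh(hess)[0]   # smallest eigenvalue
    curvature_ok = lam_min >= -c * np.sqrt(eps)
    return grad_small and curvature_ok
```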
A Faster Cutting Plane Method and its Implications for Combinatorial and Convex Optimization
TLDR
This paper improves upon the running time for finding a point in a convex set given a separation oracle and achieves the first quadratic bound on the query complexity for the independence and rank oracles.
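For orientation, here is a minimal sketch of the simplest cutting-plane scheme for the feasibility problem with a separation oracle, a textbook central-cut ellipsoid method. This is only the classical baseline the paper improves upon, not its faster method; the oracle interface and names are assumptions:

```python
import numpy as np

def ellipsoid_feasibility(separation_oracle, center, P, max_iters=10_000):
    """Find a point in a convex set K given a separation oracle.

    separation_oracle(x) returns None if x is in K, otherwise a vector a
    with a @ y <= a @ x for all y in K (a separating hyperplane).
    The starting ellipsoid {y : (y - center)^T P^{-1} (y - center) <= 1}
    must contain K, and the dimension must be at least 2.
    """
    x, P = np.asarray(center, dtype=float), np.asarray(P, dtype=float)
    n = len(x)
    for _ in range(max_iters):
        a = separation_oracle(x)
        if a is None:
            return x                      # feasible point found
        Pa = P @ a
        g = Pa / np.sqrt(a @ Pa)          # normalized cut direction
        # Standard central-cut ellipsoid update.
        x = x - g / (n + 1)
        P = (n * n / (n * n - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(g, g))
    return None                           # no feasible point located
```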
Path Finding Methods for Linear Programming: Solving Linear Programs in Õ(√rank) Iterations and Faster Algorithms for Maximum Flow
  • Y. Lee, Aaron Sidford
  • Computer Science
    IEEE 55th Annual Symposium on Foundations of…
  • 18 October 2014
TLDR
A new algorithm for solving linear programs is presented that requires only Õ(√rank(A)·L) iterations, where A is the constraint matrix of a linear program with m constraints, n variables, and bit complexity L.
Efficient Accelerated Coordinate Descent Methods and Faster Algorithms for Solving Linear Systems
TLDR
This paper shows how to generalize and efficiently implement a method proposed by Nesterov, giving faster asymptotic running times for various algorithms that use standard coordinate descent as a black box and improving the convergence guarantees for Kaczmarz methods.
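For context, a minimal randomized Kaczmarz sketch for a consistent system $Ax = b$, the classical baseline whose guarantees the paper improves; this is not the accelerated method itself:

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=10_000, seed=0):
    """Solve a consistent system A x = b by random projections onto row constraints.

    Rows are sampled with probability proportional to their squared norm,
    as in the standard Strohmer-Vershynin analysis.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms_sq = np.einsum("ij,ij->i", A, A)
    probs = row_norms_sq / row_norms_sq.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        # Project x onto the hyperplane {y : A[i] @ y = b[i]}.
        x += (b[i] - A[i] @ x) / row_norms_sq[i] * A[i]
    return x
```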
Accelerated Methods for Non-Convex Optimization
TLDR
This work presents an accelerated gradient method for nonconvex optimization problems with Lipschitz continuous first and second derivatives that is Hessian free, i.e., it only requires gradient computations, and is therefore suitable for large-scale applications.
"Convex Until Proven Guilty": Dimension-Free Acceleration of Gradient Descent on Non-Convex Functions
TLDR
A variant of Nesterov's accelerated gradient descent (AGD) for minimizing smooth non-convex functions is developed and analyzed, and it is proved that one of two cases occurs: either the AGD variant converges quickly, as if the function were convex, or it produces a certificate that the function is "guilty" of being non-convex.
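A simplified illustration of what a certificate of non-convexity looks like: a pair of points violating the first-order convexity inequality. The paper constructs its certificate along the AGD trajectory; this helper and its tolerance are illustrative only:

```python
import numpy as np

def convexity_violated(f, grad_f, x, y, tol=0.0):
    """Return True if the pair (x, y) witnesses that f is not convex.

    Convexity requires f(y) >= f(x) + grad_f(x) @ (y - x); a strict
    violation of this inequality certifies non-convexity.
    """
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    lower_bound = f(x) + grad_f(x) @ (y - x)
    return f(y) < lower_bound - tol
```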
A simple, combinatorial algorithm for solving SDD systems in nearly-linear time
TLDR
A simple combinatorial algorithm that solves symmetric diagonally dominant (SDD) linear systems in nearly-linear time and has the fastest known running time under the standard unit-cost RAM model.
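For comparison, a generic conjugate-gradient solver for symmetric positive semidefinite systems (which include symmetric SDD matrices); this is a standard baseline, not the paper's nearly-linear-time combinatorial algorithm:

```python
import numpy as np

def conjugate_gradient(matvec, b, tol=1e-8, max_iters=1000):
    """Solve A x = b for symmetric positive (semi)definite A, given only a
    matrix-vector product `matvec`. Generic baseline, not the paper's method."""
    x = np.zeros_like(b, dtype=float)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iters):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```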
Near-Optimal Time and Sample Complexities for Solving Markov Decision Processes with a Generative Model
TLDR
The method is extended to computing $\epsilon$-optimal policies for finite-horizon MDPs with a generative model, and it matches the sample complexity lower bounds of Azar et al. (2013) up to logarithmic factors.
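A schematic of the generative-model setting the TLDR refers to, under assumed interfaces (`sample_next_state`, a reward array): draw samples per state-action pair, form an empirical model, and run value iteration on it. The paper's method layers variance-reduction ideas on top of this recipe to reach near-optimal sample complexity; the sketch below is only the naive baseline:

```python
import numpy as np

def plan_from_generative_model(sample_next_state, rewards, n_states, n_actions,
                               gamma=0.99, samples_per_pair=100, iters=1000):
    """Naive model-based planning with a generative model (illustrative baseline).

    sample_next_state(s, a) draws one next state; rewards has shape (S, A).
    """
    # Empirical transition model from samples.
    P_hat = np.zeros((n_states, n_actions, n_states))
    for s in range(n_states):
        for a in range(n_actions):
            for _ in range(samples_per_pair):
                P_hat[s, a, sample_next_state(s, a)] += 1.0
    P_hat /= samples_per_pair
    # Value iteration on the empirical MDP.
    V = np.zeros(n_states)
    for _ in range(iters):
        Q = rewards + gamma * (P_hat @ V)   # shape (S, A)
        V = Q.max(axis=1)
    policy = Q.argmax(axis=1)
    return policy, V
```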
Near-optimal method for highly smooth convex optimization
TLDR
A method is presented with rate of convergence $\tilde{O}(1/k^{\frac{3p+1}{2}})$ after $k$ queries to the oracle for any convex function whose $p$th-order derivative is Lipschitz.
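Stated as a display, the regularity assumption and rate from the TLDR, with $k$ the number of oracle queries and $f^\star$ the minimum value:

```latex
\|\nabla^p f(x) - \nabla^p f(y)\| \le L\,\|x - y\|
\quad\Longrightarrow\quad
f(x_k) - f^\star \le \tilde{O}\!\left(k^{-\frac{3p+1}{2}}\right).
```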
...