Publications
Adaptive Subgradient Methods for Online Learning and Stochastic Optimization
TLDR
This work describes and analyzes an apparatus for adaptively modifying the proximal function, which significantly simplifies setting a learning rate and yields regret guarantees provably as good as those of the best proximal function that could have been chosen in hindsight.
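A minimal sketch of the diagonal variant of this adaptive subgradient (AdaGrad-style) update: each coordinate gets its own step size scaled by the accumulated squared gradients. The toy objective, step size, and iteration count below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def adagrad_diagonal(grad_fn, x0, eta=0.1, eps=1e-8, steps=200):
    """Diagonal AdaGrad: scale each coordinate by its accumulated squared gradients."""
    x = x0.astype(float)
    g_sq = np.zeros_like(x)                      # running sum of squared gradients
    for _ in range(steps):
        g = grad_fn(x)
        g_sq += g * g
        x = x - eta * g / (np.sqrt(g_sq) + eps)  # per-coordinate learning rate
    return x

# Toy usage: a poorly scaled quadratic, where per-coordinate step sizes help
grad = lambda x: 2.0 * np.array([1.0, 100.0]) * x
print(adagrad_diagonal(grad, np.array([5.0, 5.0])))
```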
Logarithmic regret algorithms for online convex optimization
TLDR
Several algorithms achieving logarithmic regret are proposed; besides being more general, they are much more efficient to implement, and they give rise to an efficient algorithm based on the Newton method, a new tool in the field.
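The Newton-based algorithm referred to here is Online Newton Step. The sketch below shows an unconstrained variant only: the full algorithm handles exp-concave losses over a bounded set and projects back onto it in the norm induced by the accumulated matrix, a step omitted here for brevity.

```python
import numpy as np

def online_newton_step(grad_fn, x0, gamma=0.5, eps=1.0, T=200):
    """Online-Newton-Step-style updates, unconstrained variant (no generalized projection)."""
    x = x0.astype(float)
    A = eps * np.eye(x.size)                     # running sum of gradient outer products
    for t in range(T):
        g = grad_fn(x, t)                        # gradient of the round-t loss at x
        A += np.outer(g, g)
        x = x - (1.0 / gamma) * np.linalg.solve(A, g)   # Newton-like step using A_t
    return x
```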
Introduction to Online Convex Optimization
  • Elad Hazan
  • Computer Science, Mathematics
  • Found. Trends Optim.
  • 10 August 2016
TLDR
This monograph portrays optimization as a process: one applies an optimization method that learns as it goes along, gaining from experience as more aspects of the problem are observed.
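The basic algorithm such an analysis usually starts from is online gradient descent; a minimal sketch follows, with the 1/√t step size and Euclidean-ball feasible set chosen here purely for illustration.

```python
import numpy as np

def online_gradient_descent(loss_grads, x0, radius=1.0):
    """Online gradient descent: step with eta_t = 1/sqrt(t), then project onto a ball."""
    x = x0.astype(float)
    played = []
    for t, grad_fn in enumerate(loss_grads, start=1):
        played.append(x.copy())
        x = x - grad_fn(x) / np.sqrt(t)          # gradient of the round-t loss at the played point
        norm = np.linalg.norm(x)
        if norm > radius:
            x = x * (radius / norm)              # project back onto the feasible ball
    return played
```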
The Multiplicative Weights Update Method: a Meta-Algorithm and Applications
TLDR
A simple meta-algorithm is presented that unifies many disparate algorithms and derives them as simple instantiations of that meta-algorithm.
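The meta-algorithm itself fits in a few lines: maintain one weight per expert and multiply it by exp(-eta * loss) each round. The learning rate and toy losses below are arbitrary choices for illustration.

```python
import numpy as np

def multiplicative_weights(loss_rounds, n_experts, eta=0.1):
    """Multiplicative Weights Update: reweight experts by exp(-eta * loss) each round."""
    w = np.ones(n_experts)
    for losses in loss_rounds:                   # per-expert losses in [0, 1] for this round
        w = w * np.exp(-eta * np.asarray(losses))
    return w / w.sum()                           # final distribution over experts

# Toy usage: 3 experts, expert 0 is consistently best and ends up with most of the weight
rounds = [np.array([0.0, 0.5, 1.0]) for _ in range(50)]
print(multiplicative_weights(rounds, 3))
```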
Beyond the regret minimization barrier: an optimal algorithm for stochastic strongly-convex optimization
TLDR
An algorithm is given that performs only gradient updates and achieves the optimal rate of convergence for stochastic optimization with a strongly convex objective.
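One simple way to reach the optimal O(1/T) rate in this setting is stochastic gradient descent with step size 1/(λt) and averaging over the last half of the iterates; this is a stand-in sketch, not the paper's own algorithm, which attains the rate with a different, epoch-based schedule of plain gradient updates.

```python
import numpy as np

def sgd_suffix_average(stoch_grad, x0, lam, T=1000):
    """SGD with step size 1/(lam*t) plus suffix averaging, for lam-strongly-convex objectives."""
    x = x0.astype(float)
    suffix = []
    for t in range(1, T + 1):
        x = x - stoch_grad(x) / (lam * t)        # noisy gradient step
        if t > T // 2:
            suffix.append(x.copy())              # keep the last half of the iterates
    return np.mean(suffix, axis=0)               # averaged suffix is the returned solution
```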
Competing in the Dark: An Efficient Algorithm for Bandit Linear Optimization
TLDR
This work introduces an efficient algorithm for online linear optimization in the bandit setting that achieves the optimal O*(√T) regret, and presents a novel connection between online learning and interior point methods.
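The barrier-based algorithm of this paper is too involved to sketch faithfully here; the snippet below instead shows the simpler one-point gradient estimator from earlier bandit work, only to illustrate the feedback model: a full gradient estimate is built from a single observed loss value per round. The unit-ball domain and the step parameters are assumptions for the sketch.

```python
import numpy as np

def one_point_bandit(play_and_get_loss, d, T=1000, delta=0.1, eta=0.01):
    """Bandit feedback via a one-point gradient estimate over the unit Euclidean ball.
    This is NOT the paper's interior-point/barrier algorithm; it only illustrates the setting."""
    x = np.zeros(d)
    for _ in range(T):
        u = np.random.randn(d)
        u /= np.linalg.norm(u)                   # uniform direction on the sphere
        loss = play_and_get_loss(x + delta * u)  # only this scalar loss is revealed
        g_hat = (d / delta) * loss * u           # unbiased gradient estimate for linear losses
        x = x - eta * g_hat
        norm = np.linalg.norm(x)
        if norm > 1.0 - delta:
            x = x * (1.0 - delta) / norm         # stay inside the slightly shrunken ball
    return x
```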
On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization
TLDR
This paper suggests that, sometimes, increasing depth can speed up optimization, and proves that the acceleration effect of overparameterization cannot be obtained via gradients of any regularizer.
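A toy way to see the two parameterizations being compared: the same linear regression loss trained either directly on a weight vector or, overparameterized, on a product of two factors optimized end to end. The data, learning rate, and factorization below are illustrative assumptions; this sketch only sets up the comparison and does not reproduce the paper's acceleration analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = X @ rng.standard_normal(5)

def grad(w):                                     # gradient of the least-squares loss at w
    return X.T @ (X @ w - y) / len(y)

def loss(w):
    return 0.5 * np.mean((X @ w - y) ** 2)

w = np.zeros(5)                                  # depth-1: train the weight vector directly
w1, w2 = np.zeros(5), 1.0                        # depth-2: end-to-end weights are w2 * w1
lr = 0.05
for _ in range(200):
    w = w - lr * grad(w)
    g = grad(w2 * w1)                            # gradient w.r.t. the end-to-end weights
    w1, w2 = w1 - lr * w2 * g, w2 - lr * (g @ w1)
print(loss(w), loss(w2 * w1))
```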
Variance Reduction for Faster Non-Convex Optimization
TLDR
This work considers the fundamental problem in non-convex optimization of efficiently reaching a stationary point, and proposes a first-order minibatch stochastic method that converges at an O(1/ε) rate and is faster than full gradient descent by a factor of Ω(n^{1/3}).
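The variance-reduction mechanism being analyzed is SVRG-style: each stochastic gradient is corrected using a full-gradient snapshot taken at the start of the epoch. The sketch below is the generic estimator, with epoch length and step size left as illustrative parameters; the paper's contribution is its analysis of this kind of method in the non-convex setting.

```python
import numpy as np

def svrg(grad_i, n, x0, eta=0.01, epochs=10, inner_steps=None):
    """SVRG-style updates: grad_i(x, i) is the gradient of the i-th component function at x."""
    x = x0.astype(float)
    inner_steps = inner_steps or 2 * n
    for _ in range(epochs):
        snapshot = x.copy()
        full_grad = np.mean([grad_i(snapshot, i) for i in range(n)], axis=0)
        for _ in range(inner_steps):
            i = np.random.randint(n)
            v = grad_i(x, i) - grad_i(snapshot, i) + full_grad   # variance-reduced estimate
            x = x - eta * v
    return x
```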
Finding approximate local minima faster than gradient descent
We design a non-convex second-order optimization algorithm that is guaranteed to return an approximate local minimum in time that scales linearly in the underlying dimension and the number of …
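A heavily simplified version of the underlying idea: take gradient steps while the gradient is large, and when it is small, check the Hessian for a direction of sufficiently negative curvature and step along it. The paper does this far faster, using only Hessian-vector products; the sketch below forms the full Hessian and is meant for small toy problems only.

```python
import numpy as np

def approx_local_min(grad, hess, x0, eps=1e-3, delta=1e-3, eta=0.1, max_iter=1000):
    """Simplified search for an approximate local minimum via negative-curvature escapes."""
    x = x0.astype(float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) > eps:
            x = x - eta * g                      # ordinary gradient step
            continue
        w, V = np.linalg.eigh(hess(x))           # eigenvalues in ascending order
        if w[0] >= -delta:
            return x                             # approximate local minimum reached
        v = V[:, 0]                              # most negative curvature direction
        if g @ v > 0:
            v = -v                               # pick the descent sign
        x = x + eta * v
    return x
```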
Projection-free Online Learning
TLDR
This work presents efficient online learning algorithms that eschew projections in favor of much more efficient linear optimization steps using the Frank-Wolfe technique, and obtains a range of regret bounds for online convex optimization, with better bounds for specific cases such as stochastic online smooth convex optimization.
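The key mechanism is to replace the projection in each online step with a single call to a linear optimization oracle over the feasible set; since the update is a convex combination, feasibility is preserved. The step-size schedule and gradient aggregation below are simplified stand-ins, not the paper's exact scheme.

```python
import numpy as np

def online_frank_wolfe(loss_grads, x0, lin_opt):
    """Projection-free online updates: lin_opt(g) returns argmin over the feasible set of <g, v>."""
    x = x0.astype(float)
    g_sum = np.zeros_like(x)                     # running sum of observed gradients
    played = []
    for t, grad_fn in enumerate(loss_grads, start=1):
        played.append(x.copy())
        g_sum += grad_fn(x)
        v = lin_opt(g_sum)                       # cheap linear step instead of a projection
        sigma = 1.0 / np.sqrt(t)
        x = (1 - sigma) * x + sigma * v          # convex combination stays feasible
    return played

# Example oracle: over the probability simplex, linear optimization just picks a vertex
simplex_lin_opt = lambda g: np.eye(len(g))[np.argmin(g)]
```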