
- John C. Duchi, Elad Hazan, Yoram Singer
- COLT
- 2010

We present a new family of subgradient methods that dynamically incorporate knowledge of the geometry of the data observed in earlier iterations to perform more informative gradient-based learning. Metaphorically, the adaptation allows us to find needles in haystacks in the form of very predictive but rarely seen features. Our paradigm stems from recent…
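The per-coordinate adaptation the abstract describes can be illustrated with a minimal diagonal AdaGrad-style sketch; the toy objective and all names here are illustrative, not the paper's code:

```python
import numpy as np

def adagrad(grad_fn, x0, lr=0.5, steps=200, eps=1e-8):
    """Diagonal AdaGrad sketch: per-coordinate steps shrink with the
    accumulated squared gradients, so rarely-updated coordinates (the
    'needles in haystacks') keep comparatively large effective rates."""
    x = np.asarray(x0, dtype=float).copy()
    g_sq = np.zeros_like(x)              # running sum of squared gradients
    for _ in range(steps):
        g = grad_fn(x)
        g_sq += g * g
        x -= lr * g / (np.sqrt(g_sq) + eps)
    return x

# Toy quadratic with very different curvature per coordinate.
a = np.array([100.0, 1.0])
x = adagrad(lambda x: a * x, x0=[1.0, 1.0])
```

Note how the accumulated `g_sq` rescales the step per coordinate, so the badly-scaled coordinate does not force a tiny global learning rate.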

- Elad Hazan, Adam Tauman Kalai, Satyen Kale, Amit Agarwal
- Machine Learning
- 2006

In an online convex optimization problem a decision-maker makes a sequence of decisions, i.e., chooses a sequence of points in Euclidean space, from a fixed feasible set. After each point is chosen, it encounters a sequence of (possibly unrelated) convex cost functions. Zinkevich (ICML 2003) introduced this framework, which models many natural repeated…
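A minimal online gradient descent sketch in the spirit of Zinkevich's framework; the loss sequence, comparator, and step-size choice here are an illustrative toy, not taken from the paper:

```python
import numpy as np

def online_gradient_descent(loss_grads, x0, radius=1.0):
    """Zinkevich-style online gradient descent over the Euclidean ball of
    the given radius, with step size 1/sqrt(t) at round t."""
    x = np.asarray(x0, dtype=float).copy()
    played = []
    for t, grad in enumerate(loss_grads, start=1):
        played.append(x.copy())          # commit to a point, then see the loss
        x = x - grad(x) / np.sqrt(t)
        norm = np.linalg.norm(x)
        if norm > radius:                # project back onto the feasible ball
            x *= radius / norm
    return played

# Toy adversary: 50 rounds of each of two quadratic losses.
targets = [np.array([0.5, 0.0]), np.array([0.0, 0.5])]
losses = [(lambda x, c=c: float(np.sum((x - c) ** 2))) for c in targets for _ in range(50)]
grads = [(lambda x, c=c: 2.0 * (x - c)) for c in targets for _ in range(50)]

played = online_gradient_descent(grads, x0=np.zeros(2))
comparator = np.array([0.25, 0.25])      # best fixed point in hindsight here
regret = sum(l(p) for l, p in zip(losses, played)) - sum(l(comparator) for l in losses)
```

The regret, the algorithm's total loss minus that of the best fixed point chosen in hindsight, grows at most on the order of √T in this framework.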

- Sanjeev Arora, Elad Hazan, Satyen Kale
- Theory of Computing
- 2012

Algorithms in varied fields use the idea of maintaining a distribution over a certain set and use the multiplicative update rule to iteratively change these weights. Their analyses are usually very similar and rely on an exponential potential function. We present a simple meta-algorithm that unifies these disparate algorithms and derives them as simple…
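The multiplicative update rule the abstract refers to can be sketched as a simple Hedge-style loop; the loss matrix and the step size eta = 0.1 are illustrative assumptions:

```python
import numpy as np

def multiplicative_weights(loss_matrix, eta=0.1):
    """Hedge-style multiplicative weights: keep a distribution over experts
    and exponentially down-weight each expert by its observed loss; the
    analysis rests on exactly the exponential potential the abstract mentions."""
    w = np.ones(loss_matrix.shape[1])
    total = 0.0
    for row in loss_matrix:
        p = w / w.sum()                  # current distribution over experts
        total += float(p @ row)          # expected loss this round
        w *= np.exp(-eta * row)          # multiplicative update
    return total

rng = np.random.default_rng(0)
losses = rng.random((500, 4))
losses[:, 2] *= 0.2                      # expert 2 is clearly best
alg_loss = multiplicative_weights(losses)
best_loss = float(losses.sum(axis=0).min())
```

For losses in [0, 1], the algorithm's total loss exceeds the best expert's by at most roughly eta*T + ln(n)/eta, which is where the unifying potential-function analysis does its work.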

- Jacob D. Abernethy, Elad Hazan, Alexander Rakhlin
- COLT
- 2008

We introduce an efficient algorithm for the problem of online linear optimization in the bandit setting which achieves the optimal O*(√T) regret. The setting is a natural generalization of the nonstochastic multi-armed bandit problem, and the existence of an efficient optimal algorithm has been posed as an open problem in a number of recent papers. We…

- Elad Hazan, Satyen Kale
- COLT
- 2011

We give a novel algorithm for stochastic strongly-convex optimization in the gradient oracle model which returns an O(1/T)-approximate solution after T gradient updates. This rate of convergence is optimal in the gradient oracle model. This improves upon the previously known best rate of O(log(T)/T), which was obtained by applying an online…
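For contrast with the paper's rate, the standard baseline can be sketched as plain SGD with the 1/(lam*t) step-size schedule for a lam-strongly-convex objective; this is not the paper's epoch-based algorithm, and the noisy oracle below is a toy assumption:

```python
import numpy as np

def sgd_strongly_convex(grad_oracle, x0, lam, T):
    """Plain SGD with step size 1/(lam*t) for a lam-strongly-convex
    objective: the classical baseline, not the paper's improved method."""
    x = np.asarray(x0, dtype=float).copy()
    for t in range(1, T + 1):
        x -= grad_oracle(x) / (lam * t)
    return x

# Toy stochastic gradient oracle for f(x) = 0.5 * lam * ||x||^2.
rng = np.random.default_rng(1)
lam = 1.0
oracle = lambda x: lam * x + 0.1 * rng.standard_normal(x.shape)
x_T = sgd_strongly_convex(oracle, x0=np.ones(3), lam=lam, T=5000)
```

The point of the paper is which iterate you return: naive averaging of this scheme only guarantees O(log(T)/T), and closing the gap to O(1/T) is what the abstract claims.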

- Elad Hazan
- LATIN
- 2008

We propose an algorithm for approximately maximizing a concave function over the bounded semi-definite cone, which produces sparse solutions. Sparsity for SDP corresponds to low-rank matrices, and is an important property for both computational and learning-theoretic reasons. As an application, building on Aaronson’s recent work, we derive a linear…
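Hazan's method here is related to the Frank-Wolfe family. A generic Frank-Wolfe sketch over the probability simplex (a stand-in for the semidefinite cone; the toy objective is an assumption) shows why such iterates stay sparse, each step adding at most one new vertex:

```python
import numpy as np

def frank_wolfe_simplex(grad_fn, n, steps):
    """Frank-Wolfe over the probability simplex: each step moves toward a
    single vertex (the linear-minimization oracle), so after k steps the
    iterate has at most k+1 nonzeros.  Over the semidefinite cone the
    oracle returns a rank-1 matrix instead, so 'sparse' becomes 'low rank'."""
    x = np.zeros(n)
    x[0] = 1.0                           # start at a vertex: one nonzero
    for k in range(steps):
        g = grad_fn(x)
        vertex = np.zeros(n)
        vertex[np.argmin(g)] = 1.0       # best vertex for the linearization
        gamma = 2.0 / (k + 2)            # standard Frank-Wolfe step size
        x = (1 - gamma) * x + gamma * vertex
    return x

# Minimize ||x - c||^2 over the simplex; the optimum touches two vertices.
c = np.array([0.9, 0.4, -0.2, -0.1])
x = frank_wolfe_simplex(lambda x: 2.0 * (x - c), n=4, steps=200)
```

Coordinates the linear oracle never selects stay exactly zero, which is the mechanism behind the low-rank guarantee in the SDP setting.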

- Elad Hazan
- 2011

A well studied and general setting for prediction and decision making is regret minimization in games. Recently the design of algorithms in this setting has been influenced by tools from convex optimization. In this chapter we describe the recent framework of online convex optimization which naturally merges optimization and regret minimization. We describe…

- Amit Agarwal, Elad Hazan, Satyen Kale, Robert E. Schapire
- ICML
- 2006

We experimentally study on-line investment algorithms first proposed by Agarwal and Hazan and extended by Hazan et al. which achieve almost the same wealth as the best constant-rebalanced portfolio determined in hindsight. These algorithms are the first to combine optimal logarithmic regret bounds with efficient deterministic computability. They are based…

- Peter L. Bartlett, Elad Hazan, Alexander Rakhlin
- NIPS
- 2007

We study the rates of growth of the regret in online convex optimization. First, we show that a simple extension of the algorithm of Hazan et al. eliminates the need for a priori knowledge of the lower bound on the second derivatives of the observed functions. We then provide an algorithm, Adaptive Online Gradient Descent, which interpolates between the…

- Dan Garber, Elad Hazan
- ArXiv
- 2015

The problem of principal component analysis (PCA) is traditionally solved by spectral or algebraic methods. We show how computing the leading principal component can be reduced to solving a small number of well-conditioned convex optimization problems. This gives rise to a new efficient method for PCA based on recent advances in stochastic methods for…
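The "spectral" baseline the abstract contrasts with is classical power iteration; a minimal sketch with a toy covariance matrix:

```python
import numpy as np

def power_iteration(A, iters=500, seed=0):
    """Classical power iteration, the spectral-method baseline for
    computing the leading principal component of a covariance matrix."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    for _ in range(iters):
        v = A @ v                        # amplify the leading eigendirection
        v /= np.linalg.norm(v)           # renormalize
    return v

# Toy covariance matrix whose leading principal component is axis 0.
A = np.diag([4.0, 1.0, 0.5])
v = power_iteration(A)
```

Power iteration converges at a rate governed by the eigengap; the paper's contribution is a reduction to well-conditioned convex problems that sidesteps this dependence.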