
- John C. Duchi, Elad Hazan, Yoram Singer
- COLT
- 2010

We present a new family of subgradient methods that dynamically incorporate knowledge of the geometry of the data observed in earlier iterations to perform more informative gradient-based learning. Metaphorically, the adaptation allows us to find needles in haystacks in the form of very predictive but rarely seen features. Our paradigm stems from recent…
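The per-coordinate adaptation this abstract describes can be sketched as a diagonal AdaGrad-style update (a minimal illustration, not the paper's full algorithm; the quadratic toy objective and all names below are our choices):

```python
import numpy as np

def adagrad_step(w, grad, accum, lr=0.1, eps=1e-8):
    """Diagonal AdaGrad-style step: divide each coordinate's step by the
    root of its accumulated squared gradients, so rarely-updated (sparse)
    coordinates keep a larger effective learning rate."""
    accum += grad ** 2                       # per-coordinate gradient history
    w -= lr * grad / (np.sqrt(accum) + eps)  # geometry-aware step
    return w, accum

# toy usage: minimize f(w) = ||w - target||^2
target = np.array([1.0, -2.0, 0.5])
w, accum = np.zeros(3), np.zeros(3)
for _ in range(500):
    w, accum = adagrad_step(w, 2 * (w - target), accum)
```

The accumulator is what makes the method adaptive: coordinates whose gradients have historically been small (the "rarely seen features") are updated more aggressively when they do appear.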

- Elad Hazan, Adam Tauman Kalai, Satyen Kale, Amit Agarwal
- Machine Learning
- 2006

In an online convex optimization problem a decision-maker makes a sequence of decisions, i.e., chooses a sequence of points in Euclidean space, from a fixed feasible set. After each point is chosen, it encounters a sequence of (possibly unrelated) convex cost functions. Zinkevich (ICML 2003) introduced this framework, which models many natural repeated…
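Zinkevich's framework mentioned above admits a very short baseline sketch, online gradient descent with step size ~1/√t (illustrative only; the ball-shaped feasible set, projection, and toy cost functions are our assumptions, not the paper's setting):

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Euclidean projection onto the fixed feasible set (here: a ball)."""
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def ogd(cost_grad, dim, T):
    """Online gradient descent: play x_t, observe the t-th convex cost,
    step against its gradient, and project back onto the feasible set.
    Step size 1/sqrt(t) gives O(sqrt(T)) regret vs. the best fixed point."""
    x = np.zeros(dim)
    plays = []
    for t in range(1, T + 1):
        plays.append(x.copy())
        g = cost_grad(t, x)                  # adversary reveals cost f_t
        x = project_ball(x - g / np.sqrt(t))
    return plays

# toy usage: every round's cost is f_t(x) = ||x - z||^2
z = np.array([0.5, -0.3])
plays = ogd(lambda t, x: 2 * (x - z), dim=2, T=50)
```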

- Sanjeev Arora, Elad Hazan, Satyen Kale
- Theory of Computing
- 2012

Algorithms in varied fields use the idea of maintaining a distribution over a certain set and use the multiplicative update rule to iteratively change these weights. Their analyses are usually very similar and rely on an exponential potential function. We present a simple meta-algorithm that unifies these disparate algorithms and derives them as simple…
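The multiplicative update rule the abstract refers to has a compact generic form (a hedged sketch assuming costs in [0, 1]; the parameter eta and the toy cost data are our choices):

```python
import numpy as np

def mwu(costs, eta=0.1):
    """Multiplicative weights: maintain a weight per element of the set,
    and after each round shrink each weight in proportion to the cost it
    just incurred.  `costs` is a T x n array with entries in [0, 1]."""
    T, n = costs.shape
    w = np.ones(n)
    total = 0.0
    for t in range(T):
        p = w / w.sum()                  # current distribution over the set
        total += p @ costs[t]            # expected cost paid this round
        w = w * (1.0 - eta * costs[t])   # the multiplicative update rule
    return total, w

# toy usage: expert 0 is always right (cost 0), expert 1 always wrong (cost 1)
costs = np.column_stack([np.zeros(100), np.ones(100)])
total, w = mwu(costs)
```

The exponential potential in the analyses corresponds to the product form of the weights: after T rounds each weight is a product of per-round factors, so log-weights sum the (scaled) costs.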

- Jacob D. Abernethy, Elad Hazan, Alexander Rakhlin
- COLT
- 2008

We introduce an efficient algorithm for the problem of online linear optimization in the bandit setting which achieves the optimal O*(√T) regret. The setting is a natural generalization of the non-stochastic multi-armed bandit problem, and the existence of an efficient optimal algorithm has been posed as an open problem in a number of recent papers. We…

- Elad Hazan, Satyen Kale
- COLT
- 2011

We give a novel algorithm for stochastic strongly-convex optimization in the gradient oracle model which returns an O(1/T)-approximate solution after T gradient updates. This rate of convergence is optimal in the gradient oracle model. This improves upon the previously known best rate of O(log(T)/T), which was obtained by applying an online strongly-convex…

- Elad Hazan
- LATIN
- 2008

We propose an algorithm for approximately maximizing a concave function over the bounded semi-definite cone, which produces sparse solutions. Sparsity for SDP corresponds to low-rank matrices, and is an important property for both computational and learning-theoretic reasons. As an application, building on Aaronson's recent work, we derive a linear…

- Elad Hazan
- 2011

A well-studied and general setting for prediction and decision making is regret minimization in games. Recently the design of algorithms in this setting has been influenced by tools from convex optimization. In this chapter we describe the recent framework of online convex optimization which naturally merges optimization and regret minimization. We describe…

- Elad Hazan, Satyen Kale
- Machine Learning
- 2008

Prediction from expert advice is a fundamental problem in machine learning. A major pillar of the field is the existence of learning algorithms whose average loss approaches that of the best expert in hindsight (in other words, whose average regret approaches zero). Traditionally the regret of online algorithms was bounded in terms of the number of…

- Naman Agarwal, Brian Bullins, Elad Hazan
- ArXiv
- 2016

First-order stochastic methods are the state-of-the-art in large-scale machine learning optimization owing to efficient per-iteration complexity. Second-order methods, while able to provide faster convergence, have been much less explored due to the high cost of computing the second-order information. In this paper we develop second-order stochastic methods…

- Sanjeev Arora, Elad Hazan, Satyen Kale
- 46th Annual IEEE Symposium on Foundations of…
- 2005

Semidefinite programming (SDP) relaxations appear in many recent approximation algorithms but the only general technique for solving such SDP relaxations is via interior point methods. We use a Lagrangian-relaxation based technique (modified from the papers of Plotkin, Shmoys, and Tardos (PST), and Klein and Lu) to derive faster algorithms for approximately…