
- Zohar S. Karnin, Tomer Koren, Oren Somekh
- ICML
- 2013

We study the problem of exploration in stochastic Multi-Armed Bandits. Even in the simplest setting of identifying the best arm, there remains a logarithmic multiplicative gap between the known lower and upper bounds for the number of arm pulls required for the task. This extra logarithmic factor is quite meaningful in today's large-scale applications. We…
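The paper above concerns best-arm identification under a pull budget. A standard fixed-budget strategy in this literature is sequential halving: split the budget evenly across elimination rounds, sample every surviving arm, and discard the worse half each round. The sketch below is an illustrative simulation (function name, Bernoulli-arm model, and parameters are my own assumptions, not the paper's code):

```python
import math
import random

def sequential_halving(means, budget, seed=0):
    """Fixed-budget best-arm identification via sequential halving.

    `means` holds the true Bernoulli means of each arm; they are used
    only to simulate pulls. Returns the index of the arm that survives
    all elimination rounds.
    """
    rng = random.Random(seed)
    arms = list(range(len(means)))
    rounds = max(1, math.ceil(math.log2(len(arms))))
    per_round = budget // rounds
    for _ in range(rounds):
        if len(arms) == 1:
            break
        pulls = max(1, per_round // len(arms))
        # Empirical mean of each surviving arm from fresh samples.
        est = {a: sum(rng.random() < means[a] for _ in range(pulls)) / pulls
               for a in arms}
        arms.sort(key=lambda a: est[a], reverse=True)
        arms = arms[:max(1, len(arms) // 2)]  # keep the better half
    return arms[0]
```

With a reasonable budget the clearly best arm survives, e.g. `sequential_halving([0.1, 0.9, 0.2, 0.3], 4000)` identifies arm 1.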

- Noga Alon, Nicolò Cesa-Bianchi, Ofer Dekel, Tomer Koren
- COLT
- 2015

We study a general class of online learning problems where the feedback is specified by a graph. This class includes online prediction with expert advice and the multiarmed bandit problem, but also several learning problems where the online player does not necessarily observe his own loss. We analyze how the structure of the feedback graph controls the…

- Tomer Koren, Kfir Y. Levy
- NIPS
- 2015

We consider Empirical Risk Minimization (ERM) in the context of stochastic optimization with exp-concave and smooth losses—a general optimization framework that captures several important learning problems including linear and logistic regression, learning SVMs with the squared hinge-loss, portfolio selection and more. In this setting, we establish the…

- Elad Hazan, Tomer Koren, Nathan Srebro
- NIPS
- 2011

We present an optimization approach for linear SVMs based on a stochastic primal-dual approach, where the primal step is akin to an importance-weighted SGD, and the dual step is a stochastic update on the importance weights. This yields an optimization method with a sublinear dependence on the training set size, and the first method for…

- Ofer Dekel, Jian Ding, Tomer Koren, Yuval Peres
- STOC
- 2014

We study the adversarial multi-armed bandit problem in a setting where the player incurs a unit cost each time he switches actions. We prove that the player's *T*-round minimax regret in this setting is Θ̃(T^(2/3)), thereby closing a fundamental gap in our understanding of learning with bandit feedback. In the corresponding full-information version of…

- Aharon Ben-Tal, Elad Hazan, Tomer Koren, Shie Mannor
- Operations Research
- 2015

Robust optimization is a common framework in optimization under uncertainty where the exact problem parameters are unknown, but are known to belong to some given uncertainty set. In the robust optimization framework the problem solved is a min-max problem where a solution is judged according to its performance on the worst possible…

- Elad Hazan, Tomer Koren, Roi Livni, Yishay Mansour
- COLT
- 2016

We consider the problem of prediction with expert advice when the losses of the experts have low-dimensional structure: they are restricted to an unknown d-dimensional subspace. We devise algorithms with regret bounds that are independent of the number of experts and depend only on the rank d. For the stochastic model we show a tight bound of Θ(√(dT)), and…
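The abstract above builds on the classical prediction-with-expert-advice setting, where the baseline algorithm is multiplicative weights (Hedge): maintain a weight per expert, predict with the weighted mixture, and exponentially down-weight experts that incur loss. The sketch below illustrates that baseline setting only, not the paper's low-rank algorithm; the function name and learning rate are my own assumptions:

```python
import math

def hedge_total_loss(loss_rows, eta=0.5):
    """Run the Hedge (multiplicative weights) algorithm over a sequence
    of loss vectors, one entry per expert, and return the learner's
    cumulative expected loss.
    """
    n = len(loss_rows[0])
    w = [1.0] * n          # one weight per expert, initially uniform
    total = 0.0
    for losses in loss_rows:
        s = sum(w)
        p = [wi / s for wi in w]                  # play the weighted mixture
        total += sum(pi * li for pi, li in zip(p, losses))
        # Exponentially penalize experts in proportion to their loss.
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, losses)]
    return total
```

Against two experts where expert 0 always suffers loss 0 and expert 1 always suffers loss 1, the learner's cumulative loss stays bounded by a small constant (roughly log(n)/eta above the best expert), rather than growing with T.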

- Elad Hazan, Tomer Koren
- ICML
- 2012

We consider the most common variants of linear regression, including Ridge, Lasso and Support-vector regression, in a setting where the learner is allowed to observe only a fixed number of attributes of each example at training time. We present simple and efficient algorithms for these problems: for Lasso and Ridge regression they need the same total number…

- Elad Hazan, Tomer Koren
- Math. Program.
- 2016

We consider the fundamental problem of maximizing a general quadratic function over an ellipsoidal domain, also known as the trust region problem. We give the first provable linear-time (in the number of non-zero entries of the input) algorithm for approximately solving this problem. Specifically, our algorithm returns an ε-approximate solution in time Õ(N/…

- Tomer Koren
- COLT
- 2013

Stochastic exp-concave optimization is an important primitive in machine learning that captures several fundamental problems, including linear regression, logistic regression and more. The exp-concavity property allows for fast convergence rates, as compared to general stochastic optimization. However, current algorithms that attain such rates scale poorly…