- Shai Shalev-Shwartz, Koby Crammer, Ofer Dekel, Yoram Singer
- NIPS
- 2003

We present a unified view for online classification, regression, and uniclass problems. This view leads to a single algorithmic framework for the three problems. We prove worst case loss bounds for various algorithms for both the realizable case and the non-realizable case. A conversion of our main online algorithm to the setting of batch learning is also…

- Ofer Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao
- Journal of Machine Learning Research
- 2012

Online prediction methods are typically presented as serial algorithms running on a single processor. However, in the age of web-scale prediction problems, it is increasingly common to encounter situations where a single processor cannot keep up with the high rate at which inputs arrive. In this work, we present the distributed mini-batch algorithm, a…
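The central idea — accumulate gradients over a mini-batch of inputs before applying a single update — can be sketched in a serial simulation. This is a simplified illustration with hypothetical names, not the paper's distributed implementation:

```python
import numpy as np

def minibatch_sgd(inputs, labels, dim, batch_size=32, lr=0.1):
    """Serial simulation of mini-batch updates for online least-squares.

    Gradients from `batch_size` consecutive inputs are summed and only
    then applied as one averaged update, mimicking what the workers in a
    distributed setting would jointly contribute per round.
    """
    w = np.zeros(dim)
    grad_sum = np.zeros(dim)
    count = 0
    for x, y in zip(inputs, labels):
        pred = w @ x
        grad_sum += (pred - y) * x        # gradient of 0.5 * (pred - y)^2
        count += 1
        if count == batch_size:
            w -= lr * grad_sum / batch_size   # one update per batch
            grad_sum[:] = 0.0
            count = 0
    return w
```

Batching reduces the variance of each update and, in the distributed setting, amortizes communication: workers exchange one aggregated gradient per batch rather than one per input.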

- Ofer Dekel, Joseph Keshet, Yoram Singer
- ICML
- 2004

We present an algorithmic framework for supervised classification learning where the set of labels is organized in a predefined hierarchical structure. This structure is encoded by a rooted tree which induces a metric over the label set. Our approach combines ideas from large margin kernel methods and Bayesian analysis. Following the large margin principle,…

- Ofer Dekel, Shai Shalev-Shwartz, Yoram Singer
- SIAM J. Comput.
- 2008

The Perceptron algorithm, despite its simplicity, often performs well in online classification tasks. The Perceptron becomes especially effective when it is used in conjunction with kernel functions. However, a common difficulty encountered when implementing kernel-based online algorithms is the amount of memory required to store the online hypothesis,…
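The memory issue arises because a kernel Perceptron stores every mistaken example as a support vector, so the hypothesis can grow without bound. A minimal budgeted variant is sketched below; the eviction rule here (drop the oldest support vector) is a crude stand-in and not the removal strategy analyzed in the paper:

```python
import numpy as np

def rbf(x, z, gamma=1.0):
    """Gaussian (RBF) kernel."""
    return np.exp(-gamma * np.sum((x - z) ** 2))

class BudgetedKernelPerceptron:
    """Kernel Perceptron that caps the number of stored support vectors.

    On a mistake the example joins the support set; if the budget is
    exceeded, the oldest support vector is discarded.
    """
    def __init__(self, budget=50, kernel=rbf):
        self.budget = budget
        self.kernel = kernel
        self.support = []          # list of (x, y) pairs

    def predict(self, x):
        score = sum(y * self.kernel(sx, x) for sx, y in self.support)
        return 1 if score >= 0 else -1

    def update(self, x, y):
        if self.predict(x) != y:   # mistake-driven update
            self.support.append((x, y))
            if len(self.support) > self.budget:
                self.support.pop(0)   # evict oldest support vector
```

The budget makes both memory use and per-prediction cost constant, at the price of possibly forgetting informative examples — which is exactly the trade-off a principled removal rule must control.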

- Alekh Agarwal, Ofer Dekel, Lin Xiao
- COLT
- 2010

Bandit convex optimization is a special case of online convex optimization with partial information. In this setting, a player attempts to minimize a sequence of adversarially generated convex loss functions, while only observing the value of each function at a single point. In some cases, the minimax regret of these problems is known to be strictly worse…
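A common device in this setting (a general technique due to Flaxman, Kalai, and McMahan, not necessarily the construction used in this particular paper) is the single-point gradient estimator: evaluating the loss at one randomly perturbed point yields an unbiased gradient of a smoothed loss,

```latex
\hat{g}_t = \frac{d}{\delta}\, f_t(x_t + \delta u_t)\, u_t,
\qquad
\mathbb{E}_{u_t}\!\big[\hat{g}_t\big] = \nabla \hat{f}_t(x_t),
\qquad
\hat{f}_t(x) = \mathbb{E}_{v}\!\big[f_t(x + \delta v)\big],
```

where $u_t$ is drawn uniformly from the unit sphere in $\mathbb{R}^d$ and $v$ uniformly from the unit ball. Plugging $\hat{g}_t$ into online gradient descent gives a bandit algorithm whose regret is governed by how well the smoothed $\hat{f}_t$ approximates $f_t$.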

- Ofer Dekel, Christopher D. Manning, Yoram Singer
- NIPS
- 2003

Label ranking is the task of inferring a total order over a predefined set of labels for each given instance. We present a general framework for batch learning of label ranking functions from supervised data. We assume that each instance in the training data is associated with a list of preferences over the label set; however, we do not assume that this list…

- Ofer Dekel, Shai Shalev-Shwartz, Yoram Singer
- NIPS
- 2005

The Perceptron algorithm, despite its simplicity, often performs well on online classification tasks. The Perceptron becomes especially effective when it is used in conjunction with kernels. However, a common difficulty encountered when implementing kernel-based online algorithms is the amount of memory required to store the online hypothesis, which may…

- Ofer Dekel, Ambuj Tewari, Raman Arora
- ICML
- 2012

Online learning algorithms are designed to learn even when their input is generated by an adversary. The widely-accepted formal definition of an online algorithm’s ability to learn is the game-theoretic notion of regret. We argue that the standard definition of regret becomes inadequate if the adversary is allowed to adapt to the online algorithm’s actions…
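For concreteness, against an oblivious adversary the standard regret of the player's actions $x_1,\dots,x_T$ on loss functions $f_1,\dots,f_T$ is

```latex
R_T \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \min_{x} \sum_{t=1}^{T} f_t(x).
```

When the adversary may choose $f_t$ after observing $x_1,\dots,x_{t-1}$, the comparator term no longer reflects the losses a fixed strategy would actually have incurred, since playing a fixed $x$ would have provoked different loss functions — this is the inadequacy the paper examines.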

- Noga Alon, Nicolò Cesa-Bianchi, Ofer Dekel, Tomer Koren
- COLT
- 2015

We study a general class of online learning problems where the feedback is specified by a graph. This class includes online prediction with expert advice and the multi-armed bandit problem, but also several learning problems where the online player does not necessarily observe his own loss. We analyze how the structure of the feedback graph controls the…

- Ofer Dekel, Ohad Shamir
- Machine Learning
- 2008

After a classifier is trained with a machine learning algorithm and deployed in a real-world system, it often faces noise that did not appear in the training data. In particular, some subset of the features may be missing or may become corrupted. We present two novel machine learning techniques that are robust to this type of classification-time noise…