
- Shai Shalev-Shwartz, Koby Crammer, Ofer Dekel, Yoram Singer
- NIPS
- 2003

We present a unified view for online classification, regression, and uni-class problems. This view leads to a single algorithmic framework for the three problems. We prove worst case loss bounds for various algorithms for both the realizable case and the non-realizable case. A conversion of our main online algorithm to the setting of batch learning is also…
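
The unified online framework itself is not reproduced in this truncated abstract, but its flavor can be sketched with a margin-based online update in the Passive-Aggressive style: stay passive when the margin constraint is met, otherwise take the minimal step that satisfies it. This is a hedged illustration, not the paper's notation; the function name and list-based inputs are invented for the sketch.

```python
def pa_classification_update(w, x, y):
    """Passive-Aggressive-style update for binary classification.

    If the margin y*(w . x) is at least 1, leave w unchanged (passive);
    otherwise move w just far enough to achieve margin 1 (aggressive).
    w and x are plain Python lists; y is +1 or -1.
    """
    dot = sum(wi * xi for wi, xi in zip(w, x))
    loss = max(0.0, 1.0 - y * dot)            # hinge loss on this round
    if loss == 0.0:
        return w                              # passive: constraint satisfied
    tau = loss / sum(xi * xi for xi in x)     # minimal step size
    return [wi + tau * y * xi for wi, xi in zip(w, x)]

# One mistake-driven round: starting from zero weights, the update
# moves w so that the example achieves margin exactly 1.
w = pa_classification_update([0.0, 0.0], [1.0, 0.0], 1)
```

A second presentation of the same example is then passive: the margin constraint already holds, so the hypothesis is left untouched.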

- Alekh Agarwal, Ofer Dekel, Lin Xiao
- COLT
- 2010

Bandit convex optimization is a special case of online convex optimization with partial information. In this setting, a player attempts to minimize a sequence of adversarially generated convex loss functions, while only observing the value of each function at a single point. In some cases, the minimax regret of these problems is known to be strictly worse…
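
The "single point" feedback described above is commonly handled with a randomized single-point gradient estimator: query the function at one perturbed point and rescale. A one-dimensional sketch of that standard estimator (an illustration of the setting, not the paper's algorithm):

```python
import random

def one_point_gradient_estimate(f, x, delta):
    """One-dimensional single-point gradient estimate.

    Queries f at one randomly perturbed point x + delta*u with u in {-1, +1}
    and rescales; in expectation this equals the central difference
    (f(x + delta) - f(x - delta)) / (2 * delta).
    """
    u = random.choice([-1.0, 1.0])
    return f(x + delta * u) * u / delta

# Averaged over many rounds, the estimate at x = 3 for f(x) = x^2
# approaches f'(3) = 6 (exactly in expectation, for a quadratic).
random.seed(0)
f = lambda x: x * x
n = 100000
est = sum(one_point_gradient_estimate(f, 3.0, 0.5) for _ in range(n)) / n
```

The high variance of each individual estimate is exactly why regret in the bandit setting can be strictly worse than with full gradient information.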

- Ofer Dekel, Christopher D. Manning, Yoram Singer
- NIPS
- 2003

Label ranking is the task of inferring a total order over a predefined set of labels for each given instance. We present a general framework for batch learning of label ranking functions from supervised data. We assume that each instance in the training data is associated with a list of preferences over the label set; however, we do not assume that this list…

- Ofer Dekel, Shai Shalev-Shwartz, Yoram Singer
- SIAM J. Comput.
- 2008

The Perceptron algorithm, despite its simplicity, often performs well in online classification tasks. The Perceptron becomes especially effective when it is used in conjunction with kernel functions. However, a common difficulty encountered when implementing kernel-based online algorithms is the amount of memory required to store the online hypothesis…

- Ofer Dekel, Shai Shalev-Shwartz, Yoram Singer
- NIPS
- 2005

The Perceptron algorithm, despite its simplicity, often performs well on online classification tasks. The Perceptron becomes especially effective when it is used in conjunction with kernels. However, a common difficulty encountered when implementing kernel-based online algorithms is the amount of memory required to store the online hypothesis, which may…
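
The memory problem described in the two Forgetron abstracts above can be made concrete with a naive fixed-budget kernel Perceptron that simply evicts its oldest support vector when the budget is exceeded. This is a sketch of the problem setting only; the papers' Forgetron algorithm uses a more careful shrinking-and-removal scheme with provable mistake bounds, and the class name and toy data here are invented.

```python
import math
from collections import deque

def rbf(x, y, gamma=1.0):
    # Gaussian (RBF) kernel for scalar inputs.
    return math.exp(-gamma * (x - y) ** 2)

class BudgetKernelPerceptron:
    """Kernel Perceptron whose hypothesis is capped at `budget` support vectors.

    On each mistake a (label, example) pair is stored; on overflow the
    oldest support vector is discarded, so memory stays bounded.
    """
    def __init__(self, budget=10, kernel=rbf):
        self.budget = budget
        self.kernel = kernel
        self.support = deque()  # (label, example) pairs

    def predict(self, x):
        score = sum(y * self.kernel(sx, x) for y, sx in self.support)
        return 1 if score >= 0 else -1

    def update(self, x, y):
        if self.predict(x) != y:           # mistake-driven update
            self.support.append((y, x))
            if len(self.support) > self.budget:
                self.support.popleft()     # evict oldest support vector

# Toy usage: two well-separated scalar clusters, several passes.
clf = BudgetKernelPerceptron(budget=5)
for x, y in [(-2.0, -1), (-1.5, -1), (1.5, 1), (2.0, 1)] * 3:
    clf.update(x, y)
```

Naive eviction can discard an important support vector on adversarial streams, which is precisely the failure mode a principled removal scheme must control.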

- Ofer Dekel, Ambuj Tewari, Raman Arora
- ICML
- 2012

Online learning algorithms are designed to learn even when their input is generated by an adversary. The widely accepted formal definition of an online algorithm's ability to learn is the game-theoretic notion of regret. We argue that the standard definition of regret becomes inadequate if the adversary is allowed to adapt to the online algorithm's actions…
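
For reference, the standard regret the abstract refers to compares the algorithm's cumulative loss to that of the best fixed action in hindsight:

```latex
% Regret after T rounds against loss functions f_1, ..., f_T,
% where x_t is the algorithm's action and X is the action set:
\mathrm{Regret}_T \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \min_{x \in \mathcal{X}} \sum_{t=1}^{T} f_t(x)
```

When the adversary can choose $f_t$ as a function of the algorithm's past actions, the abstract argues this fixed-comparator baseline no longer captures meaningful learning.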

- Ofer Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao
- Journal of Machine Learning Research
- 2012

Online prediction methods are typically presented as serial algorithms running on a single processor. However, in the age of web-scale prediction problems, it is increasingly common to encounter situations where a single processor cannot keep up with the high rate at which inputs arrive. In this work, we present the distributed mini-batch algorithm, a…
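
The gradient-averaging step at the heart of a mini-batch update can be sketched serially as below; the paper's contribution is the distributed, latency-tolerant version of this pattern, which is not reproduced here. Function names and the toy least-squares objective are illustrative.

```python
def minibatch_sgd_step(w, examples, grad, lr):
    """One synchronized mini-batch step: average the per-example gradients
    (in the distributed setting, each worker would contribute the average
    over its local shard), then apply a single update to the shared model."""
    avg_grad = sum(grad(w, x, y) for x, y in examples) / len(examples)
    return w - lr * avg_grad

# Toy usage: fit scalar w to data generated by y = 2x under squared loss.
grad = lambda w, x, y: 2 * (w * x - y) * x   # d/dw of (w*x - y)^2
w = 0.0
batch = [(1.0, 2.0), (2.0, 4.0), (0.5, 1.0)]
for _ in range(200):
    w = minibatch_sgd_step(w, batch, grad, lr=0.1)
```

Because only one averaged gradient crosses the network per batch, communication cost per input shrinks as the batch grows, which is what makes the approach attractive at web scale.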

- Ofer Dekel, Ohad Shamir
- ICML
- 2009

We consider a supervised machine learning scenario where labels are provided by a heterogeneous set of teachers, some of which are mediocre, incompetent, or perhaps even malicious. We present an algorithm, built on the SVM framework, that explicitly attempts to cope with low-quality and malicious teachers by decreasing their influence on the learning…

- Ofer Dekel, Joseph Keshet, Yoram Singer
- ICML
- 2004

We present an algorithmic framework for supervised classification learning where the set of labels is organized in a predefined hierarchical structure. This structure is encoded by a rooted tree which induces a metric over the label set. Our approach combines ideas from large margin kernel methods and Bayesian analysis. Following the large margin principle…
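
The tree-induced metric mentioned in the abstract is simply the path length between two labels in the rooted tree. A minimal sketch (the label names and the parent-map representation are illustrative, not from the paper):

```python
def tree_distance(parent, a, b):
    """Number of edges on the path between labels a and b in a rooted tree,
    given as a parent map (the root maps to None). This path length is a
    metric over the label set."""
    def ancestors(n):
        path = []
        while n is not None:
            path.append(n)
            n = parent[n]
        return path

    path_a, path_b = ancestors(a), ancestors(b)
    seen_a = set(path_a)
    # Walk up from b until we hit an ancestor of a: that node is the
    # lowest common ancestor, and the distance is the sum of both depths.
    for depth_b, node in enumerate(path_b):
        if node in seen_a:
            return path_a.index(node) + depth_b
    raise ValueError("labels are not in the same tree")

# Toy hierarchy: dog and cat are siblings; car sits under a different branch.
parent = {"root": None, "animal": "root", "dog": "animal",
          "cat": "animal", "vehicle": "root", "car": "vehicle"}
```

Under such a metric, confusing `dog` with `cat` costs less than confusing `dog` with `car`, which is the kind of structure a hierarchy-aware loss can exploit.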

- Ofer Dekel, Ohad Shamir
- COLT
- 2009

With the emergence of search engines and crowd-sourcing websites, machine learning practitioners are faced with datasets that are labeled by a large heterogeneous set of teachers. These datasets test the limits of our existing learning theory, which largely assumes that data is sampled i.i.d. from a fixed distribution. In many cases, the number of teachers…