Online Passive-Aggressive Algorithms
- K. Crammer, O. Dekel, Joseph Keshet, S. Shalev-Shwartz, Y. Singer
- Computer Science, Journal of Machine Learning Research
- 9 December 2003
This work presents a unified view of online classification, regression, and uni-class problems, and proves worst-case loss bounds for various algorithms in both the realizable and the non-realizable case.
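The core passive-aggressive idea can be sketched in a few lines (a minimal illustration, assuming binary classification with hinge loss; `pa_update` and the example point are made up for this sketch, and the closed-form step size follows the basic PA variant without slack):

```python
import numpy as np

def pa_update(w, x, y):
    # Passive-aggressive update for binary classification with hinge loss:
    # l = max(0, 1 - y * <w, x>). If l == 0 the algorithm stays passive;
    # otherwise it aggressively moves w just enough to reach margin 1.
    loss = max(0.0, 1.0 - y * np.dot(w, x))
    tau = loss / np.dot(x, x)  # closed-form step size for the basic PA variant
    return w + tau * y * x

w = np.zeros(2)
w = pa_update(w, np.array([1.0, 0.5]), 1)  # margin violated, so w moves
```

After an aggressive step the new hypothesis satisfies the margin constraint on that example exactly, which is what makes the subsequent round "passive" if the same example repeats.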
Optimal Distributed Online Prediction Using Mini-Batches
- O. Dekel, Ran Gilad-Bachrach, O. Shamir, Lin Xiao
- Computer Science, Journal of Machine Learning Research
- 6 December 2010
This work presents the distributed mini-batch algorithm, a method for converting many serial gradient-based online prediction algorithms into distributed algorithms that is asymptotically optimal for smooth convex loss functions and stochastic inputs, and proves a regret bound for this method.
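The conversion described above can be sketched as follows (a simplified single-process simulation, assuming equal-size shards; `distributed_minibatch_step`, `grad_fn`, and `lr` are illustrative names, not the paper's notation):

```python
import numpy as np

def distributed_minibatch_step(w, grad_fn, batch, lr, workers=2):
    # Sketch of the distributed mini-batch idea: each simulated worker averages
    # gradients over its shard, the per-worker averages are averaged again, and
    # a single serial gradient step is applied. Assumes equal-size shards so the
    # mean of shard means equals the overall batch mean.
    shards = np.array_split(batch, workers)
    worker_grads = [np.mean([grad_fn(w, z) for z in shard], axis=0)
                    for shard in shards]
    g = np.mean(worker_grads, axis=0)
    return w - lr * g
```

In the paper the point is that communicating only averaged gradients per round, rather than individual examples, preserves the serial algorithm's regret guarantees up to lower-order terms.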
Optimal Algorithms for Online Convex Optimization with Multi-Point Bandit Feedback
- Alekh Agarwal, O. Dekel, Lin Xiao
- Computer Science, Annual Conference on Computational Learning Theory
- 1 June 2010
The multi-point bandit setting, in which the player can query each loss function at multiple points, is introduced, and regret bounds that closely resemble bounds for the full information case are proved.
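The two-point case can be sketched as follows (a minimal illustration of the standard two-point gradient estimator; `two_point_grad_estimate` is an illustrative name and the smoothing parameter `delta` is chosen arbitrarily here):

```python
import numpy as np

def two_point_grad_estimate(f, x, delta, rng):
    # Two-point bandit gradient estimate: query the loss at x + delta*u and
    # x - delta*u for a random unit direction u. The scaled difference estimates
    # the gradient of a smoothed version of f using only two loss values per
    # round, which is what shrinks the regret gap to the full-information case.
    d = x.shape[0]
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    return (d / (2.0 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u
```

Averaged over many random directions, the estimate concentrates around the true gradient, and unlike the one-point estimator its variance does not blow up as `delta` shrinks.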
Online Bandit Learning against an Adaptive Adversary: from Regret to Policy Regret
- O. Dekel, Ambuj Tewari, R. Arora
- Computer Science, International Conference on Machine Learning
- 26 June 2012
This work argues that the standard definition of regret becomes inadequate when the adversary is allowed to adapt to the online algorithm's actions, and defines the alternative notion of policy regret, which provides a more meaningful way to measure an online algorithm's performance against adaptive adversaries.
Online Learning with Feedback Graphs: Beyond Bandits
- N. Alon, N. Cesa-Bianchi, O. Dekel, Tomer Koren
- Computer Science, Mathematics, Annual Conference on Computational Learning Theory
- 26 February 2015
This work analyzes how the structure of the feedback graph controls the inherent difficulty of the induced $T$-round learning problem and shows how the regret is affected if the graphs are allowed to vary with time.
Adaptive Neural Networks for Efficient Inference
- Tolga Bolukbasi, Joseph Wang, O. Dekel, Venkatesh Saligrama
- Computer Science, International Conference on Machine Learning
- 25 February 2017
It is shown that computational time can be dramatically reduced by exploiting the fact that many examples can be correctly classified using relatively efficient networks and that complex, computationally costly networks are only necessary for a small fraction of examples.
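The early-exit idea behind this result can be sketched as follows (a minimal confidence-threshold sketch, not the paper's learned policy; `adaptive_inference`, the model callables, and the threshold value are all illustrative):

```python
def adaptive_inference(x, cheap_model, expensive_model, threshold=0.9):
    # Early-exit sketch: run a cheap network first and accept its prediction
    # when it is confident enough; fall back to the costly network only for the
    # remaining hard examples. Both models are assumed to return
    # (label, confidence) pairs.
    label, confidence = cheap_model(x)
    if confidence >= threshold:
        return label
    return expensive_model(x)[0]
```

Since most examples exit early, the expensive network's cost is amortized over only the small fraction of hard inputs.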
Log-Linear Models for Label Ranking
- O. Dekel, Christopher D. Manning, Y. Singer
- Computer Science, NIPS
- 9 December 2003
This work presents a general boosting-based learning algorithm for the label ranking problem and proves a lower bound on the progress of each boosting iteration.
Large margin hierarchical classification
- O. Dekel, Joseph Keshet, Y. Singer
- Computer Science, International Conference on Machine Learning
- 4 July 2004
We present an algorithmic framework for supervised classification learning where the set of labels is organized in a predefined hierarchical structure. This structure is encoded by a rooted tree…
The Forgetron: A Kernel-Based Perceptron on a Budget
- O. Dekel, S. Shalev-Shwartz, Y. Singer
- Computer Science, SIAM Journal on Computing (Print)
- 2008
This paper presents the Forgetron family of kernel-based online classification algorithms, which avoid the unbounded growth in memory needed to store the online hypothesis by restricting themselves to a predefined memory budget.
The Forgetron: A Kernel-Based Perceptron on a Fixed Budget
- O. Dekel, S. Shalev-Shwartz, Y. Singer
- Computer Science, NIPS
- 5 December 2005
This work presents and analyzes the Forgetron algorithm, the first online learning algorithm which maintains a strict limit on the number of examples it stores while, on the other hand, entertains a relative mistake bound.
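The budget mechanism can be sketched as follows (a simplified shrink-then-forget sketch; the fixed factor `phi` is an illustrative stand-in for the paper's carefully chosen shrinking schedule, and `forgetron_step` is a made-up name):

```python
from collections import deque

def forgetron_step(support, budget, x, y, kernel, phi=0.9):
    # Forgetron-style step (simplified): predict with the current kernel
    # expansion; on a mistake, shrink the existing coefficients, add the new
    # example as a support vector, and if the budget is exceeded forget the
    # oldest one. Shrinking keeps the forgotten vector's contribution small,
    # which is what makes the mistake bound go through in the paper.
    score = sum(a * kernel(sx, x) for sx, a in support)
    if y * score <= 0:  # prediction mistake
        shrunk = deque((sx, phi * a) for sx, a in support)
        shrunk.append((x, float(y)))
        if len(shrunk) > budget:
            shrunk.popleft()  # forget the oldest support vector
        return shrunk
    return support
```

The deque makes "forget the oldest" an O(1) operation, and the hypothesis never stores more than `budget` examples regardless of stream length.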
...