Online Passive-Aggressive Algorithms
TLDR
This work presents a unified view for online classification, regression, and uni-class problems, and proves worst-case loss bounds for various algorithms for both the realizable case and the non-realizable case.
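The passive-aggressive family uses a closed-form update: stay passive when the hinge loss is zero, otherwise move just far enough to satisfy the margin. A minimal sketch of the PA-I style step for binary classification (the slack parameter `C` and the toy example are illustrative, not from the text above):

```python
import numpy as np

def pa_update(w, x, y, C=1.0):
    """One PA-I step: if the example (x, y) suffers hinge loss,
    take the smallest update that restores a unit margin,
    with the step size capped at C."""
    loss = max(0.0, 1.0 - y * np.dot(w, x))
    tau = min(C, loss / np.dot(x, x))  # capped closed-form step size
    return w + tau * y * x

# toy run: starting from zero weights, one update fixes the margin
w = np.zeros(2)
w = pa_update(w, np.array([1.0, 2.0]), 1)
```

When the cap `C` is not active, the updated weights satisfy the margin constraint with equality, which is what makes the update "aggressive".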
Optimal Distributed Online Prediction Using Mini-Batches
TLDR
This work presents the distributed mini-batch algorithm, a method of converting many serial gradient-based online prediction algorithms into distributed algorithms, proves that the method is asymptotically optimal for smooth convex loss functions and stochastic inputs, and establishes a regret bound for it.
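The core idea is that each node accumulates gradients on its share of a mini-batch, and the averaged gradient drives a single step of the underlying serial algorithm. A rough single-process sketch (the function names, learning rate, and sharding scheme are illustrative assumptions, and the base update here is plain gradient descent):

```python
import numpy as np

def distributed_minibatch_sgd(grad_fn, data_stream, w0, lr=0.1, batch=32, workers=4):
    """Sketch of the distributed mini-batch idea: split each size-`batch`
    mini-batch across `workers` shards, sum per-shard gradients, average,
    and apply one serial gradient step per mini-batch."""
    w = w0.copy()
    per_worker = batch // workers
    for examples in data_stream:  # each item is a list of `batch` examples
        shards = [examples[i * per_worker:(i + 1) * per_worker]
                  for i in range(workers)]
        # in a real deployment each shard's gradient sum is computed on its own node
        g = sum(grad_fn(w, z) for shard in shards for z in shard) / batch
        w -= lr * g
    return w
```

Because only the summed gradients cross the network, the per-round communication cost is independent of the mini-batch size per node.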
Optimal Algorithms for Online Convex Optimization with Multi-Point Bandit Feedback
TLDR
The multi-point bandit setting, in which the player can query each loss function at multiple points, is introduced, and regret bounds that closely resemble bounds for the full information case are proved.
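A standard building block in this setting is the two-point gradient estimator: querying the loss at two symmetric perturbations along a random direction yields an unbiased, low-variance gradient estimate. A sketch under that assumption (the smoothing radius `delta` and the specific sampling scheme are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def two_point_grad(f, x, delta=1e-3):
    """Two-point gradient estimate: sample a random unit direction u
    and return d * (f(x + delta*u) - f(x - delta*u)) / (2*delta) * u,
    whose expectation approximates the gradient of f at x."""
    d = x.shape[0]
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)
    return d * (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u
```

Each estimate costs only two function queries, which is what lets the regret bounds approach the full-information rates.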
Online Bandit Learning against an Adaptive Adversary: from Regret to Policy Regret
TLDR
This work argues that the standard definition of regret becomes inadequate if the adversary is allowed to adapt to the online algorithm's actions, and defines the alternative notion of policy regret, which attempts to provide a more meaningful way to measure an online algorithm's performance against adaptive adversaries.
Online Learning with Feedback Graphs: Beyond Bandits
TLDR
This work analyzes how the structure of the feedback graph controls the inherent difficulty of the induced $T$-round learning problem and shows how the regret is affected if the graphs are allowed to vary with time.
The Forgetron: A Kernel-Based Perceptron on a Budget
TLDR
This paper presents the Forgetron family of kernel-based online classification algorithms, which avoid the unbounded growth in the memory required to store the online hypothesis by restricting themselves to a predefined memory budget.
Log-Linear Models for Label Ranking
TLDR
This work presents a general boosting-based learning algorithm for the label ranking problem and proves a lower bound on the progress of each boosting iteration.
Large margin hierarchical classification
We present an algorithmic framework for supervised classification learning where the set of labels is organized in a predefined hierarchical structure. This structure is encoded by a rooted tree.
The Forgetron: A Kernel-Based Perceptron on a Fixed Budget
TLDR
This work presents and analyzes the Forgetron algorithm, the first online learning algorithm which maintains a strict limit on the number of examples it stores while, on the other hand, entertains a relative mistake bound.
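The budget idea behind both Forgetron papers can be illustrated with a simplified kernel perceptron that evicts the oldest support vector when the budget is exceeded; note this sketch omits the Forgetron's coefficient-shrinking step, which is what makes its mistake bound possible. The RBF kernel and budget size here are illustrative choices:

```python
from collections import deque
import math

def rbf(x1, x2, gamma=1.0):
    """Gaussian (RBF) kernel between two points given as tuples."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x1, x2)))

class BudgetPerceptron:
    """Kernel perceptron on a fixed budget: on a mistake, store the
    example as a support vector; once the budget is full, the oldest
    support vector is discarded. (The actual Forgetron also shrinks
    the coefficients of stored examples before evicting one.)"""

    def __init__(self, budget=50, kernel=rbf):
        self.sv = deque(maxlen=budget)  # (label, example) pairs; deque drops the oldest
        self.kernel = kernel

    def predict(self, x):
        score = sum(y * self.kernel(xi, x) for y, xi in self.sv)
        return 1 if score >= 0 else -1

    def fit_one(self, x, y):
        if self.predict(x) != y:
            self.sv.append((y, x))
```

The strict memory limit is enforced structurally by the bounded deque, so the hypothesis never stores more than `budget` examples regardless of stream length.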
Vox Populi: Collecting High-Quality Labels from a Crowd
TLDR
This paper studies the problem of pruning low-quality teachers in a crowd, in order to improve the label quality of the training set, and shows that this is in fact achievable with a simple and efficient algorithm, which does not require that each example be repeatedly labeled by multiple teachers.