This work presents a unified view of online classification, regression, and uni-class problems, and proves worst-case loss bounds for various algorithms in both the realizable and the non-realizable case.

This work presents the distributed mini-batch algorithm, a method for converting many serial gradient-based online prediction algorithms into distributed algorithms, proves a regret bound for the method, and shows that it is asymptotically optimal for smooth convex loss functions and stochastic inputs.
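The core idea behind distributed mini-batching can be illustrated with a minimal sketch: gradients for a batch of inputs are computed (conceptually in parallel across workers), averaged, and fed to a single serial gradient step. All names and parameters below are illustrative, not the paper's actual interface.

```python
import numpy as np

def distributed_minibatch_sgd(grad, x0, stream, batch_size, lr):
    """Sketch of the distributed mini-batch idea: each worker computes the
    gradient for one input; the averaged gradient drives one serial update."""
    x = np.asarray(x0, dtype=float)
    batch = []
    for z in stream:
        batch.append(grad(x, z))               # one gradient per worker
        if len(batch) == batch_size:
            x -= lr * np.mean(batch, axis=0)   # single serial update per batch
            batch = []
    return x
```

Because the serial update sees an averaged (lower-variance) gradient, the batch size can be tuned so that the extra regret from updating less often is asymptotically negligible.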

The multi-point bandit setting, in which the player can query each loss function at multiple points, is introduced, and regret bounds that closely resemble bounds for the full information case are proved.
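A minimal sketch of the mechanism behind this setting, assuming the simplest two-point variant: querying the loss at two symmetric points around the current iterate yields a low-variance gradient estimate, which can then drive ordinary gradient descent. The function names and constants here are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def two_point_grad(f, x, delta):
    """Two-point gradient estimate: query f at x + delta*u and x - delta*u
    for a random unit direction u, then scale the difference."""
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)
    d = x.size
    return (d / (2.0 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u

# Toy smooth convex loss with minimizer x* = (1, -1).
f = lambda x: np.sum((x - np.array([1.0, -1.0])) ** 2)

x = np.zeros(2)
for t in range(1, 5001):
    x -= (0.5 / t) * two_point_grad(f, x, delta=0.01)
```

With only bandit feedback (two function values per round), the iterate still converges toward the minimizer, which is the intuition behind the near-full-information regret bounds.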

This work argues that the standard definition of regret becomes inadequate if the adversary is allowed to adapt to the online algorithm's actions, and defines the alternative notion of policy regret, which attempts to provide a more meaningful way to measure an online algorithm's performance against adaptive adversaries.

This work analyzes how the structure of the feedback graph controls the inherent difficulty of the induced $T$-round learning problem and shows how the regret is affected if the graphs are allowed to vary with time.

This paper presents the Forgetron family of kernel-based online classification algorithms, which overcome the problem of unbounded growth in the memory required to store the online hypothesis by restricting themselves to a predefined memory budget.
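The budget idea can be sketched with a simplified budget kernel perceptron: on each mistake the example joins the support set, and once the budget is exceeded the oldest support vector is evicted. This is only an illustrative simplification; the actual Forgetron additionally shrinks support-vector weights before removal to preserve its mistake bound.

```python
from collections import deque
import numpy as np

class BudgetKernelPerceptron:
    """Simplified sketch of a budget-limited kernel perceptron.
    (The real Forgetron also scales down weights before evicting.)"""
    def __init__(self, kernel, budget):
        self.kernel = kernel
        self.budget = budget
        self.support = deque()  # (x, y) pairs, oldest first

    def predict(self, x):
        score = sum(y * self.kernel(xs, x) for xs, y in self.support)
        return 1 if score >= 0 else -1

    def update(self, x, y):
        if self.predict(x) != y:            # mistake-driven update
            self.support.append((x, y))
            if len(self.support) > self.budget:
                self.support.popleft()      # forget the oldest example

rbf = lambda a, b: np.exp(-float(np.asarray(a) - np.asarray(b)) ** 2)
```

The `deque` makes the memory bound explicit: the hypothesis never stores more than `budget` examples, regardless of stream length.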

This work presents a general boosting-based learning algorithm for the label ranking problem and proves a lower bound on the progress of each boosting iteration.

We present an algorithmic framework for supervised classification learning where the set of labels is organized in a predefined hierarchical structure. This structure is encoded by a rooted tree…

This work presents and analyzes the Forgetron algorithm, the first online learning algorithm that maintains a strict limit on the number of examples it stores while nevertheless entertaining a relative mistake bound.

This paper studies the problem of pruning low-quality teachers in a crowd, in order to improve the label quality of the training set, and shows that this is in fact achievable with a simple and efficient algorithm, which does not require that each example be repeatedly labeled by multiple teachers.