
- Mark Herbster, Manfred K. Warmuth
- ICML
- 1995

We generalize the recent relative loss bounds for on-line algorithms where the additional loss of the algorithm on the whole sequence of examples over the loss of the best expert is bounded. The generalization allows the sequence to be partitioned into segments, and the goal is to bound the additional loss of the algorithm over the sum of the losses of the…
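The partitioned-sequence setting described above is the one addressed by Fixed-Share-style forecasters: an exponential-weights update followed by a sharing step that lets weight mass move back to other experts, so the learner can track whichever expert is best on each segment. A minimal sketch (the learning rate `eta` and switching rate `alpha` below are illustrative choices, not the paper's tuned values):

```python
import numpy as np

def fixed_share(expert_losses, eta=0.5, alpha=0.05):
    """Fixed-Share forecaster: exponential weights plus a sharing step.

    expert_losses: (T, n) array, loss of each of n experts at each of T trials.
    Returns the total loss of the algorithm's weighted mixture.
    """
    T, n = expert_losses.shape
    w = np.full(n, 1.0 / n)                      # uniform prior over experts
    total = 0.0
    for t in range(T):
        total += w @ expert_losses[t]            # loss of the weighted mixture
        w = w * np.exp(-eta * expert_losses[t])  # exponential-weights update
        w /= w.sum()
        w = (1 - alpha) * w + alpha / n          # share step: mix with uniform
    return total
```

The share step keeps every expert's weight bounded away from zero, which is what allows fast recovery when the best expert changes between segments.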

- Mark Herbster, Manfred K. Warmuth
- Journal of Machine Learning Research
- 2001

In most on-line learning research the total on-line loss of the algorithm is compared to the total loss of the best off-line predictor u from a comparison class of predictors. We call such bounds static bounds. The interesting feature of these bounds is that they hold for an arbitrary sequence of examples. Recently some work has been done where the predictor…

- Mark Herbster, Guy Lever, Massimiliano Pontil
- NIPS
- 2008

We continue our study of online prediction of the labelling of a graph. We show a fundamental limitation of Laplacian-based algorithms: if the graph has a large diameter then the number of mistakes made by such algorithms may be proportional to the square root of the number of vertices, even when tackling simple problems. We overcome this drawback by means…

- Mark Herbster, Massimiliano Pontil, Lisa Wainer
- ICML
- 2005

We apply classic online learning techniques similar to the perceptron algorithm to the problem of learning a function defined on a graph. The benefit of our approach includes simple algorithms and performance guarantees that we naturally interpret in terms of structural properties of the graph, such as the algebraic connectivity or the diameter of the…
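The approach the abstract describes can be sketched with the standard construction of using the pseudoinverse of the graph Laplacian as a kernel over the vertices and running a kernel perceptron on the revealed (vertex, label) pairs; the toy graph in the usage below is illustrative only:

```python
import numpy as np

def laplacian_kernel(adj):
    # Pseudoinverse of the graph Laplacian, used as a Gram matrix over vertices.
    return np.linalg.pinv(np.diag(adj.sum(axis=1)) - adj)

def graph_perceptron(K, trials):
    """Kernel perceptron over the vertices of a graph.

    trials: sequence of (vertex, label) pairs with labels in {-1, +1}.
    Returns the number of prediction mistakes.
    """
    alpha = np.zeros(K.shape[0])  # dual coefficients, one per vertex
    mistakes = 0
    for v, y in trials:
        y_hat = 1 if alpha @ K[:, v] >= 0 else -1  # predict; ties go to +1
        if y_hat != y:
            alpha[v] += y                          # perceptron update in the dual
            mistakes += 1
    return mistakes
```

On a 4-vertex path labelled (+1, +1, -1, -1), two passes over the vertices incur a single mistake, after which the interpolated labelling is consistent.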

Given an n-vertex weighted tree with structural diameter S and a subset of m vertices, we present a technique to compute a corresponding m × m Gram matrix of the pseudoinverse of the graph Laplacian in O(n + m² + mS) time. We discuss the application of this technique to fast label prediction on a generic graph. We approximate the graph with a spanning…
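The quantities involved can be illustrated naively: on a tree, the effective resistance between two vertices is just their path length, and a standard identity expresses the Laplacian pseudoinverse in terms of resistance distances. The sketch below computes the m × m Gram submatrix this way in O(n²) time (the paper's contribution is organizing the computation to run in O(n + m² + mS) time, which this sketch does not attempt):

```python
import numpy as np
from collections import deque

def tree_gram(adj, S):
    """Submatrix K[S, S] of the Laplacian pseudoinverse of an unweighted tree,
    built from effective resistances (= path lengths on a tree)."""
    n = adj.shape[0]

    def bfs_dist(s):
        # Path length from s to every vertex.
        d = np.full(n, -1.0)
        d[s] = 0.0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in np.nonzero(adj[u])[0]:
                if d[v] < 0:
                    d[v] = d[u] + 1.0
                    q.append(v)
        return d

    D = np.vstack([bfs_dist(s) for s in range(n)])   # all-pairs resistances
    R = D.sum(axis=1)
    # Standard identity: L+_{ij} = -d_ij/2 + (R_i + R_j)/(2n) - (sum_kl d_kl)/(2n^2)
    K = -D / 2 + (R[:, None] + R[None, :]) / (2 * n) - R.sum() / (2 * n * n)
    return K[np.ix_(S, S)]
```

The result agrees with directly taking `numpy.linalg.pinv` of the tree Laplacian and restricting to the chosen rows and columns.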

- Mark Herbster, Guy Lever
- COLT
- 2009

We study the problem of predicting the labelling of a graph. The graph is given and a trial sequence of (vertex, label) pairs is then incrementally revealed to the learner. On each trial a vertex is queried and the learner predicts a boolean label. The true label is then returned. The learner's goal is to minimise mistaken predictions. We propose minimum…

- Mark Herbster, Massimiliano Pontil
- NIPS
- 2006

We study the problem of online prediction of a noisy labeling of a graph with the perceptron. We address both label noise and concept noise. Graph learning is framed as an instance of prediction on a finite set. To treat label noise we show that the hinge loss bounds derived by Gentile [1] for online perceptron learning can be transformed to relative…

- Mark Herbster
- ALT
- 2008

Given an n-vertex weighted tree with (structural) diameter S_G and a set of m vertices, we give a method to compute the corresponding m × m Gram matrix of the pseudoinverse of the graph Laplacian in O(n + m² + mS_G) time. We discuss the application of this method to predicting the labeling of a graph. Preliminary experimental results on a digit classification task are…

- Andreas Argyriou, Mark Herbster, Massimiliano Pontil
- NIPS
- 2005

A foundational problem in semi-supervised learning is the construction of a graph underlying the data. We propose to use a method which optimally combines a number of differently constructed graphs. For each of these graphs we associate a basic graph kernel. We then compute an optimal combined kernel. This kernel solves an extended regularization problem…
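The combination step can be sketched as follows: each candidate graph contributes a basic kernel (here, the pseudoinverse of its Laplacian), and the combined kernel is a convex combination of these. Note that the paper learns the optimal combination weights by solving an extended regularization problem; in this sketch the weights are supplied directly (e.g. uniform) as a placeholder:

```python
import numpy as np

def laplacian_kernel(adj):
    # Basic graph kernel for one candidate graph construction:
    # the pseudoinverse of its Laplacian.
    return np.linalg.pinv(np.diag(adj.sum(axis=1)) - adj)

def combined_kernel(kernels, weights):
    """Convex combination of basic graph kernels.

    kernels: list of (n, n) Gram matrices over the same n data points.
    weights: nonnegative combination weights (normalized to sum to 1).
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * K for wi, K in zip(w, kernels))
```

Since each basic kernel is symmetric positive semidefinite, any convex combination is again a valid kernel, so the combined matrix can be plugged into any kernel method downstream.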