
- Mark Herbster, Manfred K. Warmuth
- Machine Learning
- 1995

We generalize the recent relative loss bounds for on-line algorithms where the additional loss of the algorithm on the whole sequence of examples over the loss of the best expert is bounded. The…

- Mark Herbster, Manfred K. Warmuth
- Journal of Machine Learning Research
- 2001

In most on-line learning research the total on-line loss of the algorithm is compared to the total loss of the best off-line predictor u from a comparison class of predictors. We call such bounds…

- Andreas Argyriou, Mark Herbster, Massimiliano Pontil
- NIPS
- 2005


- Mark Herbster, Massimiliano Pontil, Lisa Wainer
- ICML
- 2005

We apply classic online learning techniques similar to the perceptron algorithm to the problem of learning a function defined on a graph. The benefit of our approach includes simple algorithms and…
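As a rough illustration of the setting in this abstract, the sketch below runs a kernel perceptron over a graph, using the pseudoinverse of the graph Laplacian as the kernel. This is a minimal illustrative sketch, not the paper's algorithm; the function names, the tie-breaking rule, and the toy graph are all assumptions.

```python
import numpy as np

def laplacian(adj):
    """Graph Laplacian L = D - A for an adjacency matrix `adj`."""
    return np.diag(adj.sum(axis=1)) - adj

def graph_perceptron(adj, trials):
    """Kernel perceptron on a graph; kernel = pseudoinverse of the Laplacian.

    `trials` is a sequence of (vertex, label) pairs with labels in {-1, +1}.
    Returns the number of prediction mistakes (illustrative sketch only).
    """
    K = np.linalg.pinv(laplacian(adj))
    alpha = np.zeros(adj.shape[0])  # dual weights, one per vertex
    mistakes = 0
    for v, y in trials:
        yhat = np.sign(K[v] @ alpha) or 1.0  # predict; break ties as +1
        if yhat != y:
            alpha[v] += y  # perceptron update on a mistake
            mistakes += 1
    return mistakes
```

On a small two-cluster graph the mistake count stays well below the number of trials once the clusters are separated; the Laplacian-pseudoinverse kernel is what ties the perceptron's updates to the graph structure.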

- Mark Herbster, Massimiliano Pontil
- NIPS
- 2006

We study the problem of online prediction of a noisy labeling of a graph with the perceptron. We address both label noise and concept noise. Graph learning is framed as an instance of prediction on a…

- Peter Auer, Mark Herbster, Manfred K. Warmuth
- NIPS
- 1995

We show that, for a single neuron with the logistic function as the transfer function, the number of local minima of the error function based on the square loss can grow exponentially in the dimension.
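The loss surface in question can be written down directly: it is the square loss of a single logistic neuron. The sketch below just defines that function (the names and toy data are illustrative assumptions); it does not reproduce the paper's exponential-minima construction.

```python
import numpy as np

def sigmoid(z):
    """Logistic transfer function."""
    return 1.0 / (1.0 + np.exp(-z))

def square_loss(w, X, y):
    """Square loss of a single logistic neuron with weight vector w:
    sum_i (sigmoid(x_i . w) - y_i)^2."""
    return np.sum((sigmoid(X @ w) - y) ** 2)
```

At `w = 0` the neuron outputs 0.5 on every example, so targets of 0.5 give exactly zero loss; moving `w` away from such a point traces out the non-convex surface whose local minima the paper counts.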

- Mark Herbster
- ALT
- 2008

The nearest neighbor and the perceptron algorithms are intuitively motivated by the aim of exploiting the "cluster" and "linear separation" structure, respectively, of the data to be classified. We…

- Mark Herbster, Guy Lever
- COLT
- 2009

We study the problem of predicting the labelling of a graph. The graph is given and a trial sequence of (vertex, label) pairs is then incrementally revealed to the learner. On each trial a vertex is…

Given an n-vertex weighted tree with structural diameter S and a subset of m vertices, we present a technique to compute a corresponding m×m Gram matrix of the pseudoinverse of the graph Laplacian in…
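For comparison with the fast tree-based technique this abstract describes, a naive baseline simply forms the full n×n pseudoinverse and restricts it to the chosen subset of vertices, at O(n³) cost. The helper names below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def tree_laplacian(n, edges, weights=None):
    """Weighted Laplacian of a tree on n vertices, given its edge list."""
    L = np.zeros((n, n))
    for k, (i, j) in enumerate(edges):
        w = 1.0 if weights is None else weights[k]
        L[i, i] += w
        L[j, j] += w
        L[i, j] -= w
        L[j, i] -= w
    return L

def gram_submatrix(L, subset):
    """m x m Gram matrix: rows/columns of pinv(L) restricted to `subset`.
    Naive O(n^3) baseline via the full pseudoinverse."""
    Lp = np.linalg.pinv(L)
    return Lp[np.ix_(subset, subset)]
```

The resulting submatrix is symmetric and satisfies the usual pseudoinverse identities; the point of the paper's technique is to obtain the same m×m Gram matrix without ever forming the full n×n pseudoinverse.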

- Mark Herbster
- COLT/EuroCOLT
- 2001

We develop three new techniques to build on the recent advances in online learning with kernels. First, we show that an exponential speed-up in prediction time per trial is possible for such…