Applications of Regularized Least Squares to Classification Problems

@inproceedings{CesaBianchi2004ApplicationsOR,
  title={Applications of Regularized Least Squares to Classification Problems},
  author={Nicol{\`o} Cesa-Bianchi},
  booktitle={ALT},
  year={2004}
}
We present a survey of recent results concerning the theoretical and empirical performance of algorithms for learning regularized least-squares classifiers. The behavior of this family of learning algorithms is analyzed in both the statistical and the worst-case (individual-sequence) data-generating models.
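
For concreteness, a regularized least-squares classifier fits a ridge-regression predictor to ±1 labels and thresholds it at zero. A minimal sketch of this idea (function names and the synthetic data are illustrative, not taken from the paper):

```python
import numpy as np

def rls_train(X, y, lam=1.0):
    # Ridge / regularized least-squares solution:
    # w = (X^T X + lam * I)^{-1} X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def rls_classify(w, X):
    # The classifier thresholds the real-valued prediction at zero.
    return np.where(X @ w >= 0, 1, -1)

# Illustrative usage on synthetic linearly separable data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = np.where(X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) >= 0, 1, -1)
w = rls_train(X, y, lam=0.1)
print((rls_classify(w, X) == y).mean())  # training accuracy
```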

Citations

Parallel randomized sampling for support vector machine (SVM) and support vector regression (SVR)

TLDR
It is proved that the proposed PRSVM and PRSVR algorithms achieve an average convergence rate that is, to the best of the authors' knowledge, the fastest bounded convergence rate so far among all SVM decomposition training algorithms.

References

On the generalization ability of on-line learning algorithms

TLDR
This paper proves tight data-dependent bounds for the risk of this hypothesis in terms of an easily computable statistic M_n associated with the on-line performance of the ensemble, and obtains risk tail bounds for kernel Perceptron algorithms in terms of the spectrum of the empirical kernel matrix.

Margin-Based Algorithms for Information Filtering

TLDR
An information filtering model is studied in which the relevance labels associated with a sequence of feature vectors are realizations of an unknown probabilistic linear function, and a general filtering rule is derived based on the margin of a ridge-regression estimator.
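
The mechanism can be sketched as follows: fit a ridge-regression estimator to the observed (feature vector, relevance) pairs and forward an item when the estimator's margin exceeds a threshold. A hedged sketch only; the paper's actual rule is more refined than this plain threshold test:

```python
import numpy as np

def ridge_margin(X, y, x_new, lam=1.0):
    # Margin of a ridge-regression estimator on a new feature vector.
    d = X.shape[1]
    w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
    return x_new @ w

def filter_rule(X, y, x_new, threshold=0.0, lam=1.0):
    # Forward the item iff the estimated margin exceeds the threshold.
    return ridge_margin(X, y, x_new, lam) > threshold
```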

Regret Bounds for Hierarchical Classification with Linear-Threshold Functions

TLDR
An incremental algorithm using a linear-threshold classifier at each node of the taxonomy is introduced, and a hierarchical and parametric data model is used to prove a bound on the probability that the algorithm guesses the wrong multilabel for a random instance when the true model parameters are known.

Worst-Case Analysis of Selective Sampling for Linear-Threshold Algorithms

TLDR
A worst-case analysis of selective sampling algorithms for learning linear-threshold functions shows that Perceptron-like algorithms tend to perform as well as algorithms receiving the true label after each classification, while in practice observing substantially fewer labels.
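
A well-known randomized rule from this line of work queries the label with probability b/(b + |margin|), so uncertain (small-margin) predictions are queried almost surely and confident ones only rarely. Whether this is the exact rule analyzed in the paper is an assumption of this sketch:

```python
import numpy as np

def should_query(margin, b=1.0, rng=None):
    # Coin with bias b / (b + |margin|): query the label more often
    # when the prediction margin is small (uncertain).
    if rng is None:
        rng = np.random.default_rng()
    return rng.random() < b / (b + abs(margin))
```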

Learning Probabilistic Linear-Threshold Classifiers via Selective Sampling

TLDR
This paper investigates selective sampling, a learning model where the learner observes a sequence of i.i.d. unlabeled instances, deciding each time whether to query the label of the current instance, and introduces a new selective sampling rule that can learn nonlinear probabilistic functions via the kernel machinery.

Learning with kernels

TLDR
A comprehensive introduction to support vector machines and related kernel methods, covering regularization, optimization, and the underlying learning theory.

A Second-Order Perceptron Algorithm

TLDR
A refined version of the second-order Perceptron algorithm is presented which adaptively sets the value of a parameter, allowing mistake bounds to be proved that correspond to a nearly optimal constant setting of that parameter.
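
The basic (non-adaptive) second-order Perceptron keeps the correlation matrix of past mistaken instances and whitens the current instance before a Perceptron-style sign prediction. A minimal sketch with a fixed parameter a; the adaptive parameter tuning analyzed in the paper is omitted:

```python
import numpy as np

def second_order_perceptron(stream, d, a=1.0):
    # v: sum of y*x over mistaken trials; M: sum of x x^T over the same.
    v, M, mistakes = np.zeros(d), np.zeros((d, d)), 0
    for x, y in stream:  # y in {-1, +1}
        # Include the current instance in the regularized correlation
        # matrix, then predict with the whitened instance.
        A = a * np.eye(d) + M + np.outer(x, x)
        y_hat = 1 if v @ np.linalg.solve(A, x) >= 0 else -1
        if y_hat != y:  # update only on mistaken trials
            mistakes += 1
            v += y * x
            M += np.outer(x, x)
    return v, M, mistakes
```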

The perceptron: a probabilistic model for information storage and organization in the brain.

TLDR
This article will be concerned primarily with the second and third questions, which are still subject to a vast amount of speculation, and where the few relevant facts currently supplied by neurophysiology have not yet been integrated into an acceptable theory.

On Convergence Proofs for Perceptrons

The perceptron: a model for brain functioning. I