From noise-free to noise-tolerant and from on-line to batch learning

@inproceedings{Klasner1995FromNT,
  title={From noise-free to noise-tolerant and from on-line to batch learning},
  author={Norbert Klasner and Hans Ulrich Simon},
  booktitle={COLT '95},
  year={1995}
}
A simple method is presented which, loosely speaking, virtually removes noise or misfit from data, and thereby converts a “noise-free” algorithm A, which on-line learns linear functions from data without noise or misfit, into a “noise-tolerant” algorithm A_nt which learns linear functions from data containing noise or misfit. Given some technical conditions, this conversion preserves optimality. For instance, the optimal noise-free algorithm B of Bernstein from [3] is converted into an optimal…
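
The abstract only sketches the conversion, but its general shape — wrap a noise-free on-line learner so that it never sees data it cannot fit exactly — can be illustrated. The following Python sketch is a toy reconstruction under loud assumptions, not the paper's construction: the inner learner is a simple projection-style updater (a stand-in for an algorithm A, not Bernstein's algorithm B), the wrapper “virtually removes” noise by soft-thresholding each observed residual by a tolerance eps, and the names NoiseFreeLearner, NoiseTolerantWrapper, and eps are all hypothetical.

import numpy as np

class NoiseFreeLearner:
    # Toy on-line learner for linear functions that assumes exact,
    # noise-free labels y = w* . x and projects its weight vector
    # onto the hyperplane {v : v . x = y}.  Stand-in for "A".
    def __init__(self, dim):
        self.w = np.zeros(dim)

    def predict(self, x):
        return float(self.w @ x)

    def update(self, x, y):
        err = y - self.w @ x
        nx = float(x @ x)
        if nx > 0.0:
            self.w += (err / nx) * x  # exact-interpolation step

class NoiseTolerantWrapper:
    # Illustrative "A -> A_nt" conversion: discount up to eps of each
    # observed residual as noise, so the inner noise-free learner only
    # ever receives targets it could fit exactly under bounded noise.
    # (Assumed rule for illustration, not the construction in the paper.)
    def __init__(self, inner, eps):
        self.inner = inner
        self.eps = eps

    def predict(self, x):
        return self.inner.predict(x)

    def update(self, x, y):
        resid = y - self.inner.predict(x)
        # Soft-threshold the residual: treat |resid| <= eps as pure noise.
        denoised = np.sign(resid) * max(abs(resid) - self.eps, 0.0)
        self.inner.update(x, self.inner.predict(x) + denoised)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w_star = np.array([1.0, -2.0, 0.5])
    learner = NoiseTolerantWrapper(NoiseFreeLearner(3), eps=0.1)
    for _ in range(200):
        x = rng.normal(size=3)
        y = float(w_star @ x) + rng.uniform(-0.1, 0.1)  # bounded noise
        learner.update(x, y)
    print(learner.inner.w)  # approximately recovers w_star

In this toy setting the wrapper never forces the inner learner to chase label perturbations smaller than eps, which is the intuition behind “removing” noise before the noise-free algorithm sees the data.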

Publications citing this paper (16 citations)

On the generalization of soft margin algorithms

  • John Shawe-Taylor, Nello Cristianini
  • 2000

How to keep the HG weights non-negative: the truncated Perceptron reweighing rule

A framework for estimating risk

Online Learning for Complex Categorial Problems
