
- Bernhard Schölkopf, Sebastian Mika, +4 authors Alexander J. Smola
- IEEE Trans. Neural Networks
- 1999

This paper collects some ideas targeted at advancing our understanding of the feature spaces associated with support vector (SV) kernel functions. We first discuss the geometry of feature space. In particular, we review what is known about the shape of the image of input space under the feature space map, and how this influences the capacity of SV methods.…

Kernel PCA as a nonlinear feature extractor has proven powerful as a preprocessing step for classification algorithms. But it can also be considered as a natural generalization of linear principal component analysis. This gives rise to the question of how to use nonlinear features for data compression, reconstruction, and de-noising, applications common in…
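As a rough sketch of the technique described above (the toy data, RBF kernel, and all parameter values here are illustrative assumptions, not taken from the paper), kernel PCA reduces to centering a kernel matrix in feature space and eigendecomposing it:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))          # 50 toy points in 2-D input space

def rbf_kernel(A, B, gamma=0.5):
    # K[i, j] = exp(-gamma * ||A_i - B_j||^2)
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

K = rbf_kernel(X, X)

# Center the kernel matrix in feature space: Kc = K - 1K - K1 + 1K1
n = K.shape[0]
one = np.full((n, n), 1.0 / n)
Kc = K - one @ K - K @ one + one @ K @ one

# Eigendecompose; eigenvectors give the kernel-expansion coefficients.
eigvals, eigvecs = np.linalg.eigh(Kc)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Nonlinear principal components: project onto the first 2 directions.
alphas = eigvecs[:, :2] / np.sqrt(np.maximum(eigvals[:2], 1e-12))
features = Kc @ alphas                # shape (50, 2)
```

The extracted `features` can then feed a downstream classifier or, as the abstract suggests, support reconstruction and de-noising by mapping back toward input space.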

- Klaus-Robert Müller, Sebastian Mika, Gunnar Rätsch, Koji Tsuda, Bernhard Schölkopf
- IEEE Trans. Neural Networks
- 2001

This paper provides an introduction to support vector machines, kernel Fisher discriminant analysis, and kernel principal component analysis, as examples for successful kernel-based learning methods. We first give a short background about Vapnik-Chervonenkis theory and kernel feature spaces and then proceed to kernel-based learning in supervised and…

- Jihun Ham, Daniel D. Lee, Sebastian Mika, Bernhard Schölkopf
- ICML
- 2004

We interpret several well-known algorithms for dimensionality reduction of manifolds as kernel methods. Isomap, graph Laplacian eigenmap, and locally linear embedding (LLE) all utilize local neighborhood information to construct a global embedding of the manifold. We show how all three algorithms can be described as kernel PCA on specially constructed Gram…
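A hedged illustration of that viewpoint for the Laplacian-eigenmap case (the spiral data, neighborhood size `k`, and the use of the Laplacian pseudoinverse as the Gram matrix are my own illustrative choices): a graph Laplacian built from local neighborhoods yields a Gram matrix whose top eigenvectors give the embedding, i.e. kernel PCA on a specially constructed kernel:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 3 * np.pi, 60))
X = np.c_[t * np.cos(t), t * np.sin(t)]     # toy points on a 2-D spiral

# Symmetrized k-nearest-neighbor adjacency from local distances.
k = 5
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
nn = np.argsort(D2, axis=1)[:, 1:k + 1]     # skip self at column 0
W = np.zeros_like(D2)
for i, js in enumerate(nn):
    W[i, js] = 1.0
W = np.maximum(W, W.T)

# Graph Laplacian L = D - W; its pseudoinverse acts as a Gram matrix,
# so kernel PCA on it recovers Laplacian-eigenmap-style coordinates.
L = np.diag(W.sum(1)) - W
Kg = np.linalg.pinv(L)

eigvals, eigvecs = np.linalg.eigh(Kg)       # ascending eigenvalue order
embedding = eigvecs[:, -2:] * np.sqrt(np.maximum(eigvals[-2:], 0))
```

The largest eigenvalues of the pseudoinverse correspond to the smallest nonzero Laplacian eigenvalues, which is exactly where the eigenmap embedding lives.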

- Sebastian Mika, Gunnar Rätsch, Klaus-Robert Müller
- NIPS
- 2000

We investigate a new kernel-based classifier: the Kernel Fisher Discriminant (KFD). A mathematical programming formulation based on the observation that KFD maximizes the average margin permits an interesting modification of the original KFD algorithm yielding the sparse KFD. We find that both KFD and the proposed sparse KFD can be understood in an…

- Bernhard Schölkopf, Sebastian Mika, Alex Smola, Gunnar Rätsch, Klaus-Robert Müller
- 1998

Algorithms based on Mercer kernels construct their solutions in terms of expansions in a high-dimensional feature space F. Previous work has shown that all algorithms which can be formulated in terms of dot products in F can be performed using a kernel without explicitly working in F. The list of such algorithms includes support vector machines and…
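The dot-product observation can be checked numerically. In this minimal sketch (the toy vectors and the choice of a degree-2 homogeneous polynomial kernel are assumptions for illustration), an explicit feature map into F and the kernel evaluated in input space produce the same inner product:

```python
import numpy as np

def phi(x):
    # Explicit feature map into F for 2-D input:
    # phi(x) = (x1^2, x2^2, sqrt(2) * x1 * x2)
    return np.array([x[0] ** 2, x[1] ** 2, np.sqrt(2) * x[0] * x[1]])

def k_poly2(x, y):
    # Degree-2 homogeneous polynomial kernel: k(x, y) = (x . y)^2,
    # the same dot product without ever constructing F explicitly.
    return (x @ y) ** 2

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])

explicit = phi(x) @ phi(y)     # dot product computed in F
via_kernel = k_poly2(x, y)     # identical value, input space only
```

Any algorithm written purely in terms of such dot products can therefore swap in `k_poly2` (or any Mercer kernel) and never touch F.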

- Sebastian Mika
- 2003

In this thesis we consider statistical learning problems and machines. A statistical learning machine tries to infer rules from a given set of examples such that it is able to make correct predictions on unseen examples. These predictions can, for example, be a classification or a regression. We consider the class of kernel-based learning techniques. The main…

- Alexander Zien, Gunnar Rätsch, Sebastian Mika, Bernhard Schölkopf, Thomas Lengauer, Klaus-Robert Müller
- Bioinformatics
- 1999

MOTIVATION
In order to extract protein sequences from nucleotide sequences, an important step is to recognize the points at which protein-coding regions start. These points are called translation initiation sites (TIS).
RESULTS
The task of finding TIS can be modeled as a classification problem. We demonstrate the applicability of support vector…

- Gunnar Rätsch, Sebastian Mika, Bernhard Schölkopf, Klaus-Robert Müller
- IEEE Trans. Pattern Anal. Mach. Intell.
- 2002

We show via an equivalence of mathematical programs that a support vector (SV) algorithm can be translated into an equivalent boosting-like algorithm and vice versa. We exemplify this translation procedure for a new algorithm, one-class leveraging, starting from the one-class support vector machine (1-SVM). This is a first step towards unsupervised…

- Alexander J. Smola, Sebastian Mika, Bernhard Schölkopf, Robert C. Williamson
- Journal of Machine Learning Research
- 1999

Many settings of unsupervised learning can be viewed as quantization problems: the minimization of the expected quantization error subject to some restrictions. This allows the use of tools such as regularization from the theory of (supervised) risk minimization for unsupervised settings. Moreover, this setting is very closely related to both principal…
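As a minimal illustration of quantization-error minimization (the toy Gaussian blobs, codebook size `k = 2`, and the use of Lloyd's algorithm are my own illustrative choices, not the paper's method), k-means alternates nearest-codebook assignment with mean updates, locally minimizing the empirical quantization error:

```python
import numpy as np

rng = np.random.default_rng(2)
# Two well-separated toy Gaussian blobs in 2-D.
X = np.r_[rng.normal(0, 0.3, (40, 2)), rng.normal(3, 0.3, (40, 2))]

k = 2
centers = X[rng.choice(len(X), k, replace=False)]
for _ in range(25):
    # Assign each point to its nearest codebook vector ...
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    labels = d.argmin(1)
    # ... then move each codebook vector to its cluster mean
    # (keeping the old center if a cluster happens to be empty).
    centers = np.array([X[labels == j].mean(0) if (labels == j).any()
                        else centers[j] for j in range(k)])

# Empirical quantization error: mean squared distance to the codebook.
d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
quant_error = d.min(1).mean()
```

The "restrictions" mentioned in the abstract generalize this picture: constraining or regularizing the codebook connects the same objective to principal-curve- and risk-minimization-style formulations.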