Efficient Kernelized Prototype Based Classification

@article{Schleif2011EfficientKP,
  title={Efficient Kernelized Prototype Based Classification},
  author={Frank-Michael Schleif and Thomas Villmann and Barbara Hammer and Petra Schneider},
  journal={International Journal of Neural Systems},
  year={2011},
  volume={21},
  number={6},
  pages={443--457}
}
Prototype-based classifiers are effective algorithms for modeling classification problems and have been applied in multiple domains. While many supervised learning algorithms have been successfully kernelized to improve their discrimination power, prototype-based classifiers are typically still used with Euclidean distance measures. Kernelized variants of prototype-based classifiers are currently too complex to be applied to larger data sets. Here we…
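The core idea behind kernelized prototype-based classification can be illustrated with a short sketch. A prototype living in the kernel-induced feature space is represented as a weighted combination of mapped training points, so its distance to a new sample can be computed purely through kernel evaluations. This is a minimal illustration of the kernel trick for prototype distances, not the paper's algorithm; the function names, the coefficient matrix `Alpha`, and the RBF kernel choice are assumptions for the example.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """RBF kernel matrix: k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def kernel_prototype_distances(X, X_train, Alpha, gamma=1.0):
    """
    Squared feature-space distance between each sample and each prototype.

    Prototype m is represented as sum_i Alpha[m, i] * phi(x_i), so
      ||phi(x) - w_m||^2 = k(x, x)
                           - 2 * sum_i Alpha[m, i] * k(x, x_i)
                           + sum_{i,j} Alpha[m, i] * Alpha[m, j] * k(x_i, x_j),
    which requires only kernel evaluations, never an explicit phi(x).
    """
    K_xx = np.ones(len(X))                        # k(x, x) = 1 for the RBF kernel
    K_xt = rbf_kernel(X, X_train, gamma)          # (n_samples, n_train)
    K_tt = rbf_kernel(X_train, X_train, gamma)    # (n_train, n_train)
    cross = K_xt @ Alpha.T                        # (n_samples, n_prototypes)
    self_term = np.einsum('mi,ij,mj->m', Alpha, K_tt, Alpha)
    return K_xx[:, None] - 2 * cross + self_term[None, :]

# Classification then assigns each sample the label of its nearest prototype
# in feature space, e.g. labels[np.argmin(D, axis=1)].
```

The quadratic `self_term` over all training points is exactly the complexity burden the abstract refers to: a naive kernelized prototype costs O(n_train) per distance evaluation, which is what makes these variants expensive on larger data sets.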


Border-Sensitive Learning in Kernelized Learning Vector Quantization

TLDR
The application of kernel distances in LVQ is proposed such that the LVQ algorithm can handle the data in a space topologically equivalent to the feature mapping space of SVMs.

Gradient Based Learning in Vector Quantization Using Differentiable Kernels

TLDR
The mathematical justification is given that gradient based learning in prototype-based vector quantization is possible by means of kernel metrics instead of the standard Euclidean distance, and that an appropriate handling requires differentiable universal kernels defining the kernel metric.
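Why differentiability of the kernel matters can be made concrete: the kernel-induced squared distance d_k(x, w)² = k(x, x) - 2 k(x, w) + k(w, w) admits a closed-form gradient with respect to the prototype w whenever k is differentiable, so standard LVQ-style gradient updates carry over. The following sketch uses the RBF kernel; the update rule shown is a simple LVQ1-style step chosen for illustration, not the specific scheme of the cited paper.

```python
import numpy as np

def rbf(x, w, gamma=1.0):
    """Differentiable RBF kernel k(x, w) = exp(-gamma * ||x - w||^2)."""
    return np.exp(-gamma * np.dot(x - w, x - w))

def grad_kernel_dist_sq(x, w, gamma=1.0):
    """
    Gradient of d_k(x, w)^2 = k(x, x) - 2 k(x, w) + k(w, w) w.r.t. w.
    For the RBF kernel, k(w, w) = 1 is constant, so only the cross term
    contributes:  d/dw [-2 k(x, w)] = -4 * gamma * k(x, w) * (x - w).
    """
    return -4.0 * gamma * rbf(x, w, gamma) * (x - w)

def lvq1_kernel_step(x, y, prototypes, labels, lr=0.1, gamma=1.0):
    """One LVQ1-style update in the kernel metric: descend the kernel
    distance for the winning prototype if labels match, ascend otherwise."""
    d = [rbf(x, x, gamma) - 2.0 * rbf(x, w, gamma) + 1.0 for w in prototypes]
    j = int(np.argmin(d))
    sign = 1.0 if labels[j] == y else -1.0
    prototypes[j] -= sign * lr * grad_kernel_dist_sq(x, prototypes[j], gamma)
    return j
```

Because the prototypes stay in the input space while the distance is measured through the kernel, the update remains a plain gradient step; swapping in any other differentiable kernel only changes `rbf` and its gradient.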

Border-sensitive learning in generalized learning vector quantization: an alternative to support vector machines

TLDR
Two modifications of LVQ are proposed to make it comparable to the SVM: first, border-sensitive learning is introduced to achieve border-responsible prototypes comparable to support vectors in the SVM; second, kernel distances for differentiable kernels are considered, such that prototype learning takes place in a metric space isomorphic to the feature mapping space of the SVM.

Aspects in Classification Learning - Review of Recent Developments in Learning Vector Quantization

TLDR
Recent extensions and modifications of the basic learning vector quantization algorithm are highlighted and also discussed in relation to particular classification task scenarios like imbalanced and/or incomplete data, prior data knowledge, classification guarantees or adaptive data metrics for optimal classification.

Kernelized vector quantization in gradient-descent learning

Learning vector quantization for (dis-)similarities

Probabilistic Prototype Classification Using t-norms

TLDR
A generalization of Multivariate Robust Soft Learning Vector Quantization that employs t-norms, known from fuzzy learning and fuzzy set theory, in the class label assignments, leading to a more flexible model with respect to domain requirements.

References

Showing 1-10 of 42 references

A novel kernel prototype-based learning algorithm

  • A. K. Qin, P. Suganthan
  • Computer Science
    Proceedings of the 17th International Conference on Pattern Recognition, 2004. ICPR 2004.
  • 2004
We propose a novel kernel prototype-based learning algorithm, called the kernel generalized learning vector quantization (KGLVQ) algorithm, which can significantly improve the classification performance

Generalized Derivative Based Kernelized Learning Vector Quantization

We derive a novel derivative-based version of kernelized Generalized Learning Vector Quantization (KGLVQ) as an effective, easy-to-interpret, prototype-based and kernelized classifier. It is called

Soft Learning Vector Quantization

TLDR
This work derives two variants of LVQ using a Gaussian mixture ansatz, proposes an objective function based on a likelihood ratio, derives a learning rule using gradient descent, and provides a way to extend the algorithms of the LVQ family to different distance measures.

Margin Analysis of the LVQ Algorithm

TLDR
This paper presents margin-based generalization bounds suggesting that prototype-based classifiers can be more accurate than the 1-NN rule, and derives a training algorithm that selects a good set of prototypes using large-margin principles.

Relevance LVQ versus SVM

TLDR
GRLVQ is discussed in comparison to the SVM, and its beneficial theoretical properties, which are similar to those of the SVM, are pointed out, thereby providing sparse and intuitive solutions.

Building Support Vector Machines with Reduced Classifier Complexity

TLDR
A primal method that decouples the idea of basis functions from the concept of support vectors and greedily finds a set of kernel basis functions of a specified maximum size to approximate the SVM primal cost function well.

Large-Scale Maximum Margin Discriminant Analysis Using Core Vector Machines

TLDR
Comparisons with the original MMDA, KPCA, and KFD on a number of large data sets show that the proposed feature extractor can improve classification accuracy, and is also faster than these kernel-based methods by over an order of magnitude.

Adaptive Relevance Matrices in Learning Vector Quantization

We propose a new matrix learning scheme to extend relevance learning vector quantization (RLVQ), an efficient prototype-based classification algorithm, toward a general adaptive metric. By