Distance Learning in Discriminative Vector Quantization

@article{Schneider2009DistanceLI,
  title={Distance Learning in Discriminative Vector Quantization},
  author={Petra Schneider and Michael Biehl and Barbara Hammer},
  journal={Neural Computation},
  year={2009},
  volume={21},
  pages={2942-2969}
}
Abstract

Discriminative vector quantization schemes such as learning vector quantization (LVQ) and extensions thereof offer efficient and intuitive classifiers based on the representation of classes by prototypes. The original methods, however, rely on the Euclidean distance, corresponding to the assumption that the data can be represented by isotropic clusters. For this reason, extensions of the methods to more general metric structures have been proposed, such as relevance adaptation in…
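
To make the metric generalization concrete: relevance LVQ replaces the squared Euclidean distance by a dimension-weighted variant, and matrix relevance learning generalizes this to a full quadratic form d(x, w) = (x - w)^T Lambda (x - w) with Lambda = Omega^T Omega positive semi-definite. A minimal numpy sketch of the three distance variants (illustrative only; the function names are ours, not from the paper):

    import numpy as np

    def euclidean_dist(x, w):
        # original LVQ distance: assumes isotropic clusters
        return np.sum((x - w) ** 2)

    def relevance_dist(x, w, lam):
        # relevance LVQ: per-dimension weights lam_i >= 0, sum(lam) == 1
        return np.sum(lam * (x - w) ** 2)

    def matrix_dist(x, w, omega):
        # matrix relevance LVQ: d = (x-w)^T Omega^T Omega (x-w),
        # positive semi-definite by construction
        z = omega @ (x - w)
        return z @ z

    # with omega = identity, matrix_dist reduces to the Euclidean case
    x, w = np.array([1.0, 2.0]), np.zeros(2)
    print(matrix_dist(x, w, np.eye(2)) == euclidean_dist(x, w))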

Learning vector quantization for proximity data

Proposes a novel extension of LVQ to similarity data based on the kernelization of an underlying probabilistic model: kernel robust soft LVQ (KRSLVQ), which relies on the notion of a pseudo-Euclidean embedding of proximity data.
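
The pseudo-Euclidean embedding invoked here is the standard construction: eigendecompose the symmetric similarity matrix, scale the eigenvectors by the square roots of the absolute eigenvalues, and record the signs, which determine the indefinite inner product. A hedged numpy sketch of that construction (not the authors' implementation):

    import numpy as np

    def pseudo_euclidean_embedding(S, tol=1e-10):
        # S: symmetric (n, n) similarity matrix
        evals, evecs = np.linalg.eigh(S)
        keep = np.abs(evals) > tol            # discard near-zero eigenvalues
        evals, evecs = evals[keep], evecs[:, keep]
        X = evecs * np.sqrt(np.abs(evals))    # row i = embedding of point i
        signature = np.sign(evals)            # +1 Euclidean axes, -1 pseudo axes
        return X, signature

    def pseudo_inner(xi, xj, signature):
        # indefinite inner product; reproduces S[i, j] up to dropped eigenvalues
        return np.sum(signature * xi * xj)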

Learning vector quantization for (dis-)similarities

Regularization in Matrix Relevance Learning

A regularization technique extends recently proposed matrix learning schemes in learning vector quantization (LVQ): augmenting the cost function with an appropriate regularization term prevents unfavorable behavior of the learned metric and can help to improve generalization ability.
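
The flavor of penalty meant here can be sketched with a log-determinant term that diverges when the relevance matrix Lambda = Omega^T Omega degenerates, counteracting the tendency of matrix relevance learning to collapse onto very few directions. We believe this matches the cited scheme, but treat the exact form as an assumption:

    import numpy as np

    def regularized_cost(lvq_cost, omega, mu=0.1):
        # penalize (near-)singular Omega Omega^T: slogdet goes to -infinity
        # as the matrix loses rank, so subtracting it keeps the learned
        # metric from collapsing onto a few dimensions
        sign, logdet = np.linalg.slogdet(omega @ omega.T)
        return lvq_cost - mu / 2.0 * logdet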

Border-sensitive learning in generalized learning vector quantization: an alternative to support vector machines

Two modifications of LVQ are proposed to make it comparable to SVM: first, border-sensitive learning is introduced to obtain border-responsible prototypes comparable to support vectors in SVM; second, kernel distances for differentiable kernels are considered, such that prototype learning takes place in a metric space isomorphic to the feature mapping space of SVM.
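
The border sensitivity can be illustrated via GLVQ's relative distance measure mu(x) = (d+ - d-)/(d+ + d-), which lies in (-1, 1) and is close to zero exactly for samples near the decision border. The windowing rule below is our illustrative reading, not necessarily the paper's exact criterion:

    def glvq_mu(d_plus, d_minus):
        # d_plus: distance to the closest prototype with the correct label,
        # d_minus: distance to the closest prototype with a wrong label
        return (d_plus - d_minus) / (d_plus + d_minus)

    def in_active_window(d_plus, d_minus, theta=0.2):
        # only samples close to the border (|mu| small) trigger updates,
        # so prototypes become border-responsible, like support vectors
        return abs(glvq_mu(d_plus, d_minus)) < theta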

Advanced methods for prototype-based classification (University of Groningen)


Divergence-based classification in learning vector quantization

Efficient Approximations of Kernel Robust Soft LVQ

This contribution investigates two approximation schemes that lead to sparse models: k-approximation of the prototypes and the Nyström approximation of the Gram matrix.
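
The Nyström approximation referred to is the standard one: sample m landmark points, keep only the corresponding columns C = K[:, idx] and block W = K[idx, idx] of the Gram matrix, and reconstruct K ≈ C W^+ C^T. A small numpy sketch (in practice only the m landmark columns are ever computed, never the full K):

    import numpy as np

    def nystroem(K, m, seed=0):
        # K: full (n, n) Gram matrix, passed here only for illustration;
        # m: number of landmark points
        rng = np.random.default_rng(seed)
        idx = rng.choice(K.shape[0], size=m, replace=False)
        C = K[:, idx]                        # (n, m) landmark columns
        W = K[np.ix_(idx, idx)]              # (m, m) landmark block
        return C @ np.linalg.pinv(W) @ C.T   # rank-<=m approximation of K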

Hyperparameter learning in probabilistic prototype-based models

...
