Unsupervised and supervised data clustering with competitive neural networks

@article{Buhmann1992UnsupervisedAS,
  title={Unsupervised and supervised data clustering with competitive neural networks},
  author={Joachim M. Buhmann and Helmuth K{\"u}hnel},
  journal={[Proceedings 1992] IJCNN International Joint Conference on Neural Networks},
  year={1992},
  volume={4},
  pages={796--801 vol.4}
}
  • J. Buhmann, H. Kühnel
  • Published 7 June 1992
  • Computer Science
  • [Proceedings 1992] IJCNN International Joint Conference on Neural Networks
The authors discuss objective functions for unsupervised and supervised data clustering and the respective competitive neural networks which implement these clustering algorithms. They propose a cost function for unsupervised and supervised data clustering which comprises distortion costs, complexity costs and supervision costs. A maximum entropy estimation of the clustering cost function yields an optimal number of clusters, their positions and their cluster probabilities. A three-layer neural… 
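The abstract is cut off above, but the cost structure it describes can be written out. A plausible form, using the notation of the companion paper "Vector quantization with complexity costs" cited in the references below (the symbols M_iα, y_α, λ, C_α are reconstructions, not quoted from this paper):

```latex
% Assumed cost function: binary assignments M_{i\alpha} attach data point
% x_i to cluster \alpha; y_\alpha are cluster prototypes and C_\alpha are
% complexity costs weighted by \lambda.
E\bigl(\{M_{i\alpha}\},\{y_\alpha\}\bigr)
  = \sum_{i=1}^{N}\sum_{\alpha=1}^{K} M_{i\alpha}
    \bigl(\|x_i - y_\alpha\|^2 + \lambda\,C_\alpha\bigr),
\qquad M_{i\alpha}\in\{0,1\},\quad \sum_{\alpha} M_{i\alpha}=1 .

% Maximum entropy estimation at inverse temperature \beta replaces the
% hard assignments by their Gibbs expectations:
\langle M_{i\alpha}\rangle
  = \frac{\exp\!\bigl(-\beta\,(\|x_i-y_\alpha\|^2 + \lambda C_\alpha)\bigr)}
         {\sum_{\gamma}\exp\!\bigl(-\beta\,(\|x_i-y_\gamma\|^2 + \lambda C_\gamma)\bigr)} .
```

On this reading, entropy-type complexity costs C_α = −log p_α penalize clusters with vanishing probability mass, which is one way to understand the abstract's claim that the estimation yields an optimal number of clusters.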
Citations

Complexity Optimized Data Clustering by Competitive Neural Networks
TLDR
This work discusses a clustering strategy that explicitly reflects the tradeoff between simplicity and precision of a data representation, and establishes a unifying framework for different clustering methods like K-means clustering, fuzzy clusters, entropy constrained vector quantization, or topological feature maps and competitive neural networks.
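The tradeoff described here can be made concrete in a few lines. Below is a minimal maximum-entropy clustering sketch (the function name, defaults, and squared-error distortion are illustrative, not the paper's code): with lam=0 and large beta it behaves like hard K-means, finite beta gives fuzzy assignments, and lam>0 adds entropy-type complexity costs that let superfluous clusters die out.

```python
import numpy as np

def soft_cluster(X, K, beta, lam=0.0, n_iter=100, seed=0):
    """Maximum-entropy ('soft') clustering sketch.

    beta -> infinity with lam = 0 recovers hard K-means; finite beta
    gives fuzzy assignments; lam > 0 adds complexity costs
    -lam*log(p_alpha) that can drive clusters to extinction.
    """
    rng = np.random.default_rng(seed)
    Y = X[rng.choice(len(X), K, replace=False)].copy()  # initial prototypes
    p = np.full(K, 1.0 / K)                             # cluster probabilities
    for _ in range(n_iter):
        # distortion plus complexity cost for every (point, cluster) pair
        D = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        cost = D - lam * np.log(p + 1e-12)
        # Gibbs expectations of the assignment variables
        logits = -beta * cost
        logits -= logits.max(axis=1, keepdims=True)     # numerical stability
        M = np.exp(logits)
        M /= M.sum(axis=1, keepdims=True)
        # re-estimate prototypes and cluster probabilities
        w = M.sum(axis=0)
        Y = (M.T @ X) / (w[:, None] + 1e-12)
        p = w / w.sum()
    return Y, p, M
```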
Unsupervised NN and graph matching approach to compare data sets
We describe a technique to compare two data partitions of two different data sets, as frequently occurs in defect detection. The comparison is obtained by dividing each data set into partitions by means of…
Self-selective clustering of training data using the maximally-receptive classifier/regression bank
TLDR
An alternate method of training is proposed that lets a layered perceptron in a classifier bank choose the cluster of inputs it processes on the basis of the perceptron's ability to successfully classify those inputs.
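A hedged sketch of that self-selective routing idea, with simple linear regressors standing in for the paper's layered perceptrons (all names, the squared loss, and the update rule are stand-ins):

```python
import numpy as np

def bank_train(X, y, n_members=4, lr=0.1, n_epochs=20, seed=0):
    """Each sample is claimed by the bank member that currently
    handles it best, and only that member is updated on it, so the
    members carve the input space into self-selected clusters."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(n_members, X.shape[1]))
    for _ in range(n_epochs):
        for x_i, y_i in zip(X, y):
            errs = (W @ x_i - y_i) ** 2            # per-member squared loss
            j = int(np.argmin(errs))               # member that claims x_i
            W[j] += lr * (y_i - W[j] @ x_i) * x_i  # update only the winner
    return W
```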
A lateral contribution learning algorithm for multi MLP architecture
TLDR
A cooperative learning procedure allows the NNs to be trained in such a way that NNs belonging to narrow regions can participate in and improve the local convergence.
Redundancy reduction in environmental data set by means of an unsupervised neural network
TLDR
It is shown that the validation process allows a correct identification of corrupted and/or anomalous data, comparable with human validation, and allows a considerable reduction of transmitted data, as the compression process profits from the local processing of redundant data.
Vector quantization with complexity costs
TLDR
The approach establishes a unifying framework for different quantization methods like K-means clustering and its fuzzy version, entropy constrained vector quantization or topological feature maps, and competitive neural networks.
An on-line learning algorithm for the orthogonal weight estimation of MLP
TLDR
An on-line learning algorithm for Multi-Layered Perceptrons with an Orthogonal Weight Estimator (OWE) architecture that allows the weights of an MLP to be estimated dynamically and efficiently in context-dependent behaviour problems.
Bibliography of Self-Organizing Map (SOM) Papers: 1981-1997
TLDR
A comprehensive list of papers that use the Self-Organizing Map algorithms, have benefited from them, or contain analyses of them is collected, and both a thematic and a keyword index are provided to help find articles of interest.
Neural Networks Application on Human Skin Biophysical Impedance Characterizations
TLDR
A set of artificial neural networks is used for classifying the human skin biophysical impedance data, and it is shown that this mapping mimics the signal processing in biological neural networks.
Integration of context in process models used for neuro-control
  • N. Pican, F. Alexandre
  • Computer Science
    Proceedings of IEEE Systems Man and Cybernetics Conference - SMC
  • 1993
TLDR
A new architectural model is proposed to deal with applications where the input space is very complex and high-dimensional; it is inspired by a classical connectionist approach and avoids saturation by using another connectionist structure for memory storage.

References

Complexity Optimized Data Clustering by Competitive Neural Networks
TLDR
This work discusses a clustering strategy that explicitly reflects the tradeoff between simplicity and precision of a data representation, and establishes a unifying framework for different clustering methods like K-means clustering, fuzzy clusters, entropy constrained vector quantization, or topological feature maps and competitive neural networks.
Minimum class entropy: A maximum information approach to layered networks
TLDR
A new measure for the performance of hidden units as well as output units is proposed, called conditional class entropy, which not only allows existing networks to be judged but is also the basis of a new training algorithm with which an optimum number of neurons with optimum connecting weights can be found.
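As a reading aid, one way such a measure could be computed for a single hidden or output unit is sketched below; the activation binning and base-2 logarithm are assumptions, not necessarily the paper's exact definition:

```python
import numpy as np

def conditional_class_entropy(activations, labels, n_bins=2):
    """H(class | unit output): bin the unit's activations, then
    average the class entropy within each bin, weighted by the
    fraction of samples falling into that bin."""
    edges = np.linspace(activations.min(), activations.max(), n_bins + 1)[1:-1]
    bins = np.digitize(activations, edges)
    classes = np.unique(labels)
    H = 0.0
    for b in np.unique(bins):
        mask = bins == b
        probs = np.array([(labels[mask] == c).mean() for c in classes])
        probs = probs[probs > 0]
        H += mask.mean() * -(probs * np.log2(probs)).sum()
    return H
```

A unit that cleanly separates the classes drives this quantity toward zero, which is why minimizing it can guide both the sizing and the training of a network.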
Feature discovery by competitive learning
TLDR
This paper shows how a set of feature detectors which capture important aspects of the set of stimulus input patterns are discovered and how these feature detectors form the basis of a multilayer system that serves to learn categorizations of stimulus sets which are not linearly separable.
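The winner-take-all rule behind such feature discovery fits in a few lines; this is the standard competitive learning update, with weight normalization and learning-rate schedules omitted for brevity:

```python
import numpy as np

def competitive_learning(X, n_units, lr=0.05, n_epochs=10, seed=0):
    """For each input, the unit whose weight vector lies closest
    wins the competition and moves a step toward that input."""
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), n_units, replace=False)].copy()
    for _ in range(n_epochs):
        for x in rng.permutation(X):
            winner = np.argmin(((W - x) ** 2).sum(axis=1))
            W[winner] += lr * (x - W[winner])  # move the winner toward x
    return W
```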
Entropy-constrained vector quantization
TLDR
An iterative descent algorithm based on a Lagrangian formulation for designing vector quantizers having minimum distortion subject to an entropy constraint is discussed and it is shown that for clustering problems involving classes with widely different priors, the ECVQ outperforms the k-means algorithm in both likelihood and probability of error.
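One iteration of the ECVQ design loop can be sketched as follows, assuming squared-error distortion and codeword cost −log p_j (the Lagrange multiplier lam trades distortion against entropy; names are illustrative):

```python
import numpy as np

def ecvq_step(X, Y, p, lam):
    """Assign each point by the modified distortion
    d(x, y_j) + lam * (-log p_j), then re-estimate codewords and
    codeword probabilities from the new partition."""
    D = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    cost = D + lam * -np.log(p + 1e-12)
    assign = cost.argmin(axis=1)
    for j in range(len(Y)):
        members = X[assign == j]
        if len(members):
            Y[j] = members.mean(axis=0)
    counts = np.bincount(assign, minlength=len(Y))
    return Y, counts / counts.sum(), assign
```

Because rarely used codewords carry a large -log p_j penalty, they attract ever fewer points and can empty out entirely, shrinking the effective codebook.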
Hierarchical vector quantisation
TLDR
A method of vector quantisation which trades off accuracy for speed of encoding is presented; little loss in encoding accuracy is found when compared with exact nearest-neighbour encoding using an equivalent single-stage encoder.
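The speed comes from replacing a full codebook search with a tree descent, so encoding costs O(log K) distance computations instead of K. A minimal encoder sketch with an assumed tree layout (each internal node holds two test vectors and two subtrees; a leaf is a codeword index):

```python
def tree_encode(x, tree):
    """Descend a binary tree, at each node following the child whose
    test vector is closer to x; the reached leaf is the code index.
    x and the test vectors are assumed to be numpy arrays."""
    node = tree
    while not isinstance(node, int):
        left_vec, right_vec, left, right = node
        if ((x - left_vec) ** 2).sum() <= ((x - right_vec) ** 2).sum():
            node = left
        else:
            node = right
    return node
```

The accuracy tradeoff noted in the summary arises because the greedy descent need not reach the true nearest codeword, though the observed loss is small.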
An Algorithm for Vector Quantizer Design
An efficient and intuitive algorithm is presented for the design of vector quantizers based either on a known probabilistic model or on a long training sequence of data. The basic properties of the…
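This is the Linde-Buzo-Gray (LBG) algorithm; its core is the generalized Lloyd iteration sketched below (the splitting step used to grow the codebook from a training sequence is omitted for brevity):

```python
import numpy as np

def lloyd_vq(X, K, n_iter=50, seed=0):
    """Alternate nearest-neighbour partitioning of the training data
    with centroid re-estimation, the two Lloyd optimality conditions."""
    rng = np.random.default_rng(seed)
    Y = X[rng.choice(len(X), K, replace=False)].copy()
    for _ in range(n_iter):
        assign = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1).argmin(axis=1)
        for j in range(K):
            members = X[assign == j]
            if len(members):
                Y[j] = members.mean(axis=0)
    return Y, assign
```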
Self-Organization and Associative Memory
TLDR
The purpose and nature of biological memory, as well as some of its aspects, are explained.
Vector quantization
  • R. Gray
  • Computer Science
    IEEE ASSP Magazine
  • 1984
TLDR
During the past few years several design algorithms have been developed for a variety of vector quantizers and the performance of these codes has been studied for speech waveforms, speech linear predictive parameter vectors, images, and several simulated random processes.
Information Theory and Statistical Mechanics
Treatment of the predictive aspect of statistical mechanics as a form of statistical inference is extended to the density-matrix formalism and applied to a discussion of the relation between…
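Jaynes's principle is what licenses the "maximum entropy estimation" used in the main paper: maximizing the entropy of a distribution subject to a fixed expected cost yields a Gibbs distribution. The standard derivation, in outline (not quoted from either paper):

```latex
% Maximize entropy subject to a mean-cost constraint and normalization:
\max_{P}\; -\sum_{s} P(s)\,\ln P(s)
\quad\text{s.t.}\quad \sum_{s} P(s)\,E(s) = \bar{E},
\qquad \sum_{s} P(s) = 1 .

% Lagrange multipliers give the Gibbs distribution, with the inverse
% temperature \beta conjugate to the mean-cost constraint:
P(s) = \frac{e^{-\beta E(s)}}{\sum_{s'} e^{-\beta E(s')}} .
```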
Statistical mechanics and phase transitions in clustering.