An incremental network model is introduced which is able to learn the important topological relations in a given set of input vectors by means of a simple Hebb-like learning rule. In contrast to previous approaches such as the "neural gas" method of Martinetz and Schulten (1991, 1994), this model has no parameters which change over time and is able to continue learning, adding units and connections, until a performance criterion has been met.
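To make the two claims concrete (a Hebb-like connection rule and parameters that do not change over time), here is a minimal sketch of one Growing Neural Gas adaptation step. The step sizes eps_b and eps_n, the age limit a_max, and the data structures are illustrative assumptions; unit insertion, error accumulation, and removal of isolated units are omitted.

```python
import numpy as np

def gng_step(units, edges, ages, x, eps_b=0.05, eps_n=0.006, a_max=50):
    """One adaptation step. units: (k, d) array of unit positions;
    edges: set of frozenset index pairs; ages: dict mapping every
    edge to its current age; x: (d,) input vector."""
    d2 = np.sum((units - x) ** 2, axis=1)      # squared distances to input
    order = np.argsort(d2)
    s1, s2 = int(order[0]), int(order[1])      # winner and runner-up
    units[s1] += eps_b * (x - units[s1])       # move the winner toward x
    for e in [e for e in edges if s1 in e]:    # the winner's incident edges
        (n,) = e - {s1}
        units[n] += eps_n * (x - units[n])     # drag direct neighbours along
        ages[e] += 1
        if ages[e] > a_max:                    # prune edges that grew too old
            edges.discard(e)
            del ages[e]
    e12 = frozenset((s1, s2))                  # Hebb-like rule: connect the
    edges.add(e12)                             # two closest units and reset
    ages[e12] = 0                              # the age of their edge
```

Note that every parameter here is a constant, in line with the abstract: nothing is annealed over time, so the network can keep learning indefinitely.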
We present a new self-organizing neural network model having two variants. The first variant performs unsupervised learning and can be used for data visualization, clustering, and vector quantization. The main advantage over existing approaches, e.g., the Kohonen feature map, is the ability of the model to automatically find a suitable network structure and size…
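The excerpt does not spell out how the structure and size are found; a common growth mechanism in this family of models is error-driven insertion between the most-stressed unit and its worst neighbour. The following is a sketch only, with invented names and a simple error-splitting heuristic:

```python
import numpy as np

def insert_unit(units, errors, neighbors):
    """units: list of position arrays; errors: accumulated per-unit error
    (a list); neighbors: dict mapping unit index -> set of neighbour indices."""
    q = int(np.argmax(errors))                       # most-stressed unit
    f = max(neighbors[q], key=lambda j: errors[j])   # its worst neighbour
    r = len(units)                                   # index of the new unit
    units.append(0.5 * (units[q] + units[f]))        # midpoint position
    neighbors[q].discard(f); neighbors[f].discard(q) # rewire q - r - f
    neighbors[r] = {q, f}
    neighbors[q].add(r); neighbors[f].add(r)
    errors[q] *= 0.5; errors[f] *= 0.5               # redistribute error
    errors.append(0.5 * (errors[q] + errors[f]))
```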
We present a novel self-organizing network which is generated by a growth process. The application range of the model is the same as for Kohonen's feature map: generation of topology-preserving and dimensionality-reducing mappings, e.g., for the purpose of data visualization. The network structure is a rectangular grid which, however, increases its size during self-organization…
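A sketch of how a rectangular grid can grow while staying a grid, assuming the size increase happens by inserting a complete, interpolated column; the function name and the midpoint interpolation are illustrative choices, not the paper's exact rule (row insertion would be the symmetric operation on axis 0):

```python
import numpy as np

def grow_column(grid, col):
    """grid: (h, w, d) array of unit positions. Returns an (h, w + 1, d)
    grid with an interpolated column inserted after column `col`."""
    new_col = 0.5 * (grid[:, col] + grid[:, col + 1])   # midpoint column
    return np.concatenate(
        [grid[:, :col + 1], new_col[:, None, :], grid[:, col + 1:]], axis=1)
```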
The reasons to use growing self-organizing networks are investigated. First, an overview of several models of this kind is given, and they are related to other approaches. Then two examples are presented to illustrate the specific properties and advantages of incremental networks. In each case a non-incremental model is used for comparison purposes. The first…
(Some additions and refinements are planned for this document, so it will remain in draft status for a while. Comments are welcome.) This report describes several algorithms from the literature, all related to competitive learning. A uniform terminology is used for all methods. Moreover, identical examples are provided to…
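As a reference point for that uniform terminology, the common core of the surveyed methods fits in a few lines. This hard competitive learning step (winner-take-all with an assumed constant learning rate lr) is a sketch of the shared idea, not any specific algorithm from the report:

```python
import numpy as np

def competitive_step(centers, x, lr=0.05):
    """Hard competitive learning: only the winning center is adapted."""
    w = int(np.argmin(np.sum((centers - x) ** 2, axis=1)))  # the competition
    centers[w] += lr * (x - centers[w])   # move the winner toward the input
    return w
```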
A new vector quantization method, denoted LBG-U, is presented which is closely related to a particular class of neural network models (growing self-organizing networks). LBG-U consists mainly of repeated runs of the well-known LBG algorithm. Each time LBG has converged, however, a novel measure of utility is assigned to each codebook vector. Thereafter, the vector with minimum utility is moved to a new location…
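Based on the mechanism the abstract describes, one outer LBG-U iteration might look as follows. The utility of a codebook vector is taken here to be the distortion increase its removal would cause (points fall back to their second-nearest vector); the relocation offset and all parameter names are assumptions, not the paper's exact formulas.

```python
import numpy as np

def lbg(X, C, iters=100):
    """Plain LBG/Lloyd iterations: assign points, recompute centroids."""
    for _ in range(iters):
        a = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        for k in range(len(C)):
            if np.any(a == k):
                C[k] = X[a == k].mean(axis=0)
    return C, a

def lbg_u_step(X, C, rng):
    """One outer LBG-U step: run LBG, score each codebook vector's utility,
    then relocate the least useful vector before the next LBG run."""
    C, a = lbg(X, C)
    d = ((X - C[a]) ** 2).sum(-1)                     # per-point distortion
    d2 = np.sort(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)[:, 1]
    err = np.array([d[a == k].sum() for k in range(len(C))])
    util = np.array([(d2 - d)[a == k].sum() for k in range(len(C))])
    # move the minimum-utility vector next to the maximum-distortion one
    # (the small random offset is an assumption, not the paper's rule)
    i_min, i_max = int(np.argmin(util)), int(np.argmax(err))
    C[i_min] = C[i_max] + 1e-3 * rng.standard_normal(C.shape[1])
    return C
```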
A new incremental network model for supervised learning is proposed. The model builds up a structure of units, each of which has an associated local linear mapping (LLM). Error information obtained during training is used to determine where to insert new units, whose LLMs are interpolated from their neighbors. Simulation results for several classification tasks…
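The excerpt leaves the exact update rules open; the sketch below shows a generic unit with a local linear mapping, a nearest-unit prediction, and a delta-rule update whose squared error could be accumulated per unit to decide where to insert new units, as the abstract describes. All names and learning rates are illustrative assumptions.

```python
import numpy as np

class LLMUnit:
    """A unit holding a center c and a local linear map y = b + A (x - c)."""
    def __init__(self, c, A, b):
        self.c, self.A, self.b = c, A, b

def predict(units, x):
    u = min(units, key=lambda u: np.sum((u.c - x) ** 2))  # nearest unit wins
    return u.b + u.A @ (x - u.c), u

def train_step(units, x, y, lr_out=0.1, lr_in=0.01):
    y_hat, u = predict(units, x)
    e = y - y_hat
    u.b += lr_out * e                       # delta rule on the local mapping
    u.A += lr_out * np.outer(e, x - u.c)
    u.c += lr_in * (x - u.c)                # move the winning unit toward x
    return float(e @ e)                     # squared error; accumulating it
                                            # per unit localizes insertions
```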