An incremental network model is introduced which is able to learn the important topological relations in a given set of input vectors by means of a simple Hebb-like learning rule. In contrast to previous approaches like the "neural gas" method of Martinetz and Schulten (1991, 1994), this model has no parameters which change over time and is able to continue …
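The abstract above only names the ingredients (a Hebb-like edge rule between winning units, no time-varying parameters, incremental growth). As a rough illustration, the Python sketch below shows how such a growing-network loop can be organized; the concrete update steps, parameter names, and default values (eps_b, eps_n, age_max, lambda_, alpha, d) are common choices for this family of models, assumed here for illustration rather than taken from the paper.

import numpy as np

def growing_neural_gas(data, max_units=50, lambda_=100, eps_b=0.05,
                       eps_n=0.006, age_max=50, alpha=0.5, d=0.995,
                       n_steps=20000, seed=0):
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    # start with two units placed on random input vectors
    units = [data[rng.integers(len(data))].copy() for _ in range(2)]
    error = [0.0, 0.0]
    edges = {}  # frozenset({i, j}) -> age of the edge

    for step in range(1, n_steps + 1):
        x = data[rng.integers(len(data))]
        dists = [float(np.sum((w - x) ** 2)) for w in units]
        s1, s2 = (int(i) for i in np.argsort(dists)[:2])
        error[s1] += dists[s1]
        # age all edges emanating from the winner
        for e in edges:
            if s1 in e:
                edges[e] += 1
        # Hebb-like step: (re)connect the two units nearest to the input
        edges[frozenset((s1, s2))] = 0
        # adapt the winner and its topological neighbours toward x
        units[s1] += eps_b * (x - units[s1])
        for e in edges:
            if s1 in e:
                j = next(iter(e - {s1}))
                units[j] += eps_n * (x - units[j])
        # drop edges that exceeded the maximum age
        # (removal of units left without edges is omitted for brevity)
        edges = {e: a for e, a in edges.items() if a <= age_max}
        # every lambda_ steps, insert a unit where accumulated error is largest
        if step % lambda_ == 0 and len(units) < max_units:
            q = max(range(len(units)), key=lambda i: error[i])
            nbrs = [next(iter(e - {q})) for e in edges if q in e]
            if nbrs:
                f = max(nbrs, key=lambda j: error[j])
                units.append(0.5 * (units[q] + units[f]))
                r = len(units) - 1
                edges.pop(frozenset((q, f)), None)
                edges[frozenset((q, r))] = 0
                edges[frozenset((f, r))] = 0
                error[q] *= alpha
                error[f] *= alpha
                error.append(error[q])
        error = [err * d for err in error]  # global error decay
    return np.array(units), set(edges)

Because insertion is driven only by locally accumulated error and all parameters stay constant, such a loop can keep running and adding units indefinitely, which is the property the abstract contrasts with the neural gas method.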
We present a new self-organizing neural network model having two variants. The first variant performs unsupervised learning and can be used for data visualization, clustering, and vector quantization. The main advantage over existing approaches, e.g., the Kohonen feature map, is the ability of the model to automatically find a suitable network structure and …
The reasons to use growing self-organizing networks are investigated. First, an overview of several models of this kind is given and they are related to other approaches. Then two examples are presented to illustrate the specific properties and advantages of incremental networks. In each case a non-incremental model is used for comparison purposes. The first …
We present a novel self-organizing network which is generated by a growth process. The application range of the model is the same as for Kohonen's feature map: generation of topology-preserving and dimensionality-reducing mappings, e.g., for the purpose of data visualization. The network structure is a rectangular grid which, however, increases its size …
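To illustrate the growth mechanism hinted at above, here is a small hypothetical sketch of inserting a column of interpolated weight vectors into a rectangular map stored as a numpy array. In the actual model the insertion position would be chosen from accumulated local error statistics; here the position c is simply a parameter.

import numpy as np

def insert_column(grid, c):
    # grid: (rows, cols, dim) array of weight vectors.
    # Insert a new column between columns c and c+1 by
    # interpolating the two neighbouring columns.
    new_col = 0.5 * (grid[:, c] + grid[:, c + 1])      # (rows, dim)
    return np.concatenate(
        [grid[:, :c + 1], new_col[:, None], grid[:, c + 1:]], axis=1)

grid = np.random.default_rng(0).random((2, 2, 3))      # 2x2 map in R^3
grid = insert_column(grid, 0)
print(grid.shape)                                       # (2, 3, 3)

Inserting whole rows or columns, rather than single units, is what keeps the structure a rectangular grid as it grows.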
A new vector quantization method, denoted LBG-U, is presented which is closely related to a particular class of neural network models (growing self-organizing networks). LBG-U consists mainly of repeated runs of the well-known LBG algorithm. Each time LBG has converged, however, a novel measure of utility is assigned to each codebook vector. Thereafter, …
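As a rough reading of this abstract, the sketch below alternates LBG (Lloyd) runs with a utility-based relocation step: the utility of a codebook vector is taken here as the extra distortion that would arise if that vector were deleted, and the least useful vector is moved next to the vector with the largest local distortion. Both choices are assumptions made for illustration, not the paper's exact formulas.

import numpy as np

def lbg(data, codebook, n_iter=100, tol=1e-8):
    # plain LBG/Lloyd iterations until distortion stops improving
    prev = np.inf
    for _ in range(n_iter):
        d = ((data[:, None] - codebook[None]) ** 2).sum(-1)   # (N, K)
        assign = d.argmin(1)
        dist = d[np.arange(len(data)), assign].mean()
        for k in range(len(codebook)):
            pts = data[assign == k]
            if len(pts):
                codebook[k] = pts.mean(0)
        if prev - dist < tol:
            break
        prev = dist
    return codebook, dist

def lbg_u(data, k=8, rounds=10, seed=0):
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    codebook = data[rng.choice(len(data), k, replace=False)].copy()
    best, best_dist = None, np.inf
    for _ in range(rounds):
        codebook, dist = lbg(data, codebook)
        if dist < best_dist:
            best, best_dist = codebook.copy(), dist
        d = ((data[:, None] - codebook[None]) ** 2).sum(-1)
        assign = d.argmin(1)
        d1 = d[np.arange(len(data)), assign]                  # nearest
        d2 = np.partition(d, 1, axis=1)[:, 1]                 # 2nd nearest
        # utility: extra distortion if vector k were removed
        utility = np.array([(d2 - d1)[assign == j].sum() for j in range(k)])
        # local distortion carried by each vector
        err = np.array([d1[assign == j].sum() for j in range(k)])
        # move the least useful vector next to the worst one, then re-run LBG
        codebook[utility.argmin()] = (codebook[err.argmax()]
                                      + 1e-3 * rng.standard_normal(data.shape[1]))
    return best, best_dist

Keeping the best codebook seen across rounds makes the procedure no worse than plain LBG while giving poorly used codevectors a chance to be reinvested where distortion is high.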
We present a new incremental radial basis function network suitable for classification and regression problems. Center positions are continuously updated through soft competitive learning. The width of the radial basis functions is derived from the distance to topological neighbors. During training, the observed error is accumulated locally and used to …
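The abstract states that each basis function's width is derived from the distances to its topological neighbors. Below is a minimal sketch of that idea, assuming the learned topology is given as an edge list and using the mean neighbor distance as the width, which is one plausible choice rather than the paper's definition.

import numpy as np

def rbf_widths(centers, edges):
    # centers: (K, dim) array; edges: iterable of (i, j) index pairs
    # from the topology-learning step. Returns one sigma per center.
    K = len(centers)
    nbrs = [[] for _ in range(K)]
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    sigma = np.zeros(K)
    for i in range(K):
        ds = [np.linalg.norm(centers[i] - centers[j]) for j in nbrs[i]]
        sigma[i] = np.mean(ds) if ds else 1.0  # fallback for isolated units
    return sigma

def rbf_activations(x, centers, sigma):
    # Gaussian basis functions with per-unit widths
    d2 = ((centers - x) ** 2).sum(1)
    return np.exp(-d2 / (2 * sigma ** 2))

Tying each width to the local spacing of neighboring centers lets densely and sparsely covered regions of the input space receive appropriately narrow or broad basis functions without a global width parameter.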