
- Bernd Fritzke
- NIPS
- 1994

An incremental network model is introduced which is able to learn the important topological relations in a given set of input vectors by means of a simple Hebb-like learning rule. In contrast to previous approaches like the "neural gas" method of Martinetz and Schulten (1991, 1994), this model has no parameters which change over time and is able to continue…
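The Hebb-like rule mentioned above is commonly known as competitive Hebbian learning: for each input, the two nearest units are connected by an edge. The following is a minimal sketch of one input presentation, not the paper's full algorithm; the function name, the winner-adaptation step, and the learning rate `eps` are illustrative assumptions.

```python
import numpy as np

def hebb_like_step(units, edges, x, eps=0.05):
    """One input presentation (sketch): find the two units nearest
    to input x, connect them by an edge, and pull the winner slightly
    toward x. `units` is an (n, d) array of reference vectors;
    `edges` is a set of frozenset index pairs."""
    dist = np.linalg.norm(units - x, axis=1)
    s1, s2 = np.argsort(dist)[:2]             # winner and runner-up
    edges.add(frozenset((int(s1), int(s2))))  # competitive Hebbian edge
    units[s1] += eps * (x - units[s1])        # adapt the winner
    return int(s1), int(s2)

units = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
edges = set()
hebb_like_step(units, edges, np.array([0.1, 0.05]))
```

Because edges are induced directly by the input distribution, the resulting graph approximates the topology of the data without any neighborhood radius or learning rate that must be annealed over time.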

- Bernd Fritzke
- Neural Networks
- 1994

We present a new self-organizing neural network model that has two variants. The first variant performs unsupervised learning and can be used for data visualization, clustering, and vector quantization. The main advantage over existing approaches (e.g., the Kohonen feature map) is the ability of the model to automatically find a suitable network…

- Bernd Fritzke
- Neural Processing Letters
- 1995

We present a novel self-organizing network which is generated by a growth process. The application range of the model is the same as for Kohonen’s feature map: generation of topology-preserving and dimensionality-reducing mappings, e.g., for the purpose of data visualization. The network structure is a rectangular grid which, however, increases its size…

- Bernd Fritzke
- 1997

This report has the purpose of describing several algorithms from the literature all related to competitive learning. A uniform terminology is used for all methods. Moreover, identical examples are provided to allow qualitative comparisons of the methods. The on-line version of this document contains hyperlinks to Java implementations of several of the…

- Bernd Fritzke
- ICANN
- 1997

A new on-line criterion for identifying "useless" neurons of a self-organizing network is proposed. When this criterion is used in the context of the (formerly developed) growing neural gas model to guide deletions of units, the resulting method is able to closely track nonstationary distributions. Slow changes of the distribution are handled by adaptation…

- Bernd Fritzke
- Neural Processing Letters
- 1994

We present a new algorithm for the construction of radial basis function (RBF) networks. The method uses accumulated error information to determine where to insert new units. The diameter of the localized units is chosen based on the mutual distances of the units. To have the distance information always available, it is held up-to-date by a Hebbian learning…

- Bernd Fritzke
- ESANN
- 1996

The reasons to use growing self-organizing networks are investigated. First an overview of several models of this kind is given and they are related to other approaches. Then two examples are presented to illustrate the specific properties and advantages of incremental networks. In each case a non-incremental model is used for comparison purposes. The first…

- Bernd Fritzke
- 1995

A new incremental network model for supervised learning is proposed. The model builds up a structure of units each of which has an associated local linear mapping (LLM). Error information obtained during training is used to determine where to insert new units whose LLMs are interpolated from their neighbors. Simulation results for several classification…

- Bernd Fritzke
- Neural Processing Letters
- 1997

A new vector quantization method (LBG-U) closely related to a particular class of neural network models (growing self-organizing networks) is presented. LBG-U consists mainly of repeated runs of the well-known LBG algorithm. Each time LBG converges, however, a novel measure of utility is assigned to each codebook vector. Thereafter, the vector with minimum…
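The outer loop described above can be sketched as follows. This is a rough illustration under assumed details: LBG is written as batch k-means on a finite data set, and the utility below (extra distortion a vector's removal would cause) is a simplification, not the paper's exact formula.

```python
import numpy as np

def lbg(data, codebook, iters=20):
    """Plain LBG: repeatedly assign each point to its nearest codebook
    vector, then move each vector to the centroid of its cell."""
    for _ in range(iters):
        assign = np.argmin(((data[:, None] - codebook[None]) ** 2).sum(-1), axis=1)
        for i in range(len(codebook)):
            if np.any(assign == i):
                codebook[i] = data[assign == i].mean(axis=0)
    return codebook, assign

def lbg_u_step(data, codebook):
    """One outer LBG-U iteration (sketch): run LBG to convergence,
    estimate each vector's utility, then relocate the least useful
    vector next to the vector with the largest local error."""
    codebook, assign = lbg(data, codebook)
    sq = ((data[:, None] - codebook[None]) ** 2).sum(-1)
    err = np.array([sq[assign == i, i].sum() for i in range(len(codebook))])
    # utility of vector i: distortion increase if i were removed,
    # i.e. each of its points falls back to its second-nearest vector
    second = np.partition(sq, 1, axis=1)[:, 1]
    util = np.array([(second[assign == i] - sq[assign == i, i]).sum()
                     for i in range(len(codebook))])
    worst, best = int(np.argmin(util)), int(np.argmax(err))
    codebook[worst] = codebook[best] + 1e-3   # deterministic small offset
    return codebook
```

Repeating `lbg_u_step` until the total distortion stops improving gives the overall method: each relocation lets the next LBG run escape the local minimum the previous run converged to.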

- Bernd Fritzke
- NIPS
- 1993

We present a new incremental radial basis function network suitable for classification and regression problems. Center positions are continuously updated through soft competitive learning. The width of the radial basis functions is derived from the distance to topological neighbors. During the training the observed error is accumulated locally and used to…
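Several of the abstracts above share the same growth mechanism: error is accumulated at the winning unit, and a new unit is inserted where accumulated error is largest. A minimal sketch of such an insertion step, with illustrative names and a simplified neighbor choice (the nearest other unit stands in for the topological neighbor):

```python
import numpy as np

def insert_unit(units, error):
    """Error-driven insertion (sketch): place a new unit halfway
    between the unit with the largest accumulated error and its
    nearest other unit, and split the error between old and new.
    Note: `error` is modified in place."""
    q = int(np.argmax(error))                    # unit with max error
    d = np.linalg.norm(units - units[q], axis=1)
    d[q] = np.inf                                # exclude q itself
    f = int(np.argmin(d))                        # its nearest neighbor
    new = 0.5 * (units[q] + units[f])            # midpoint insertion
    error[q] *= 0.5                              # redistribute error
    return np.vstack([units, new]), np.append(error, error[q])
```

Because insertion is driven by accumulated error rather than a fixed schedule, the network places its resources where the current mapping is worst, which is the common thread of the incremental models listed on this page.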