
This paper describes the employment of an 'Adaptive Growing Three-Dimensional Self-Organizing Map' for the classification of images. First, a short description of growing SOMs is given and their fundamental advantages are mentioned. Then an extension of the original SOM from two to three dimensions with a growing feature is presented. By means of some selected… (More)

- UDO SEIFFERT
- 1999

Backpropagation is the standard training procedure for Multiple Layer Perceptron networks. It is based on gradient descent to minimize the network error. However, using the gradient descent algorithm leads to problems with the convergence of the training and to restrictions concerning the applicable transfer functions. This paper… (More)
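The gradient-descent update that backpropagation performs can be sketched for a one-hidden-layer perceptron as follows. Sigmoid transfer functions and a squared-error cost are assumed here, and biases are omitted for brevity; this is a generic textbook step, not the specific variant the paper investigates.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(W1, W2, x, t, lr=0.1):
    """One gradient-descent step on a one-hidden-layer perceptron,
    minimizing 0.5 * ||y - t||^2 for a single training pair (x, t)."""
    # Forward pass.
    h = sigmoid(W1 @ x)
    y = sigmoid(W2 @ h)

    # Backward pass: error deltas for the output and hidden layers.
    delta_out = (y - t) * y * (1 - y)
    delta_hid = (W2.T @ delta_out) * h * (1 - h)

    # Gradient-descent weight updates.
    W2_new = W2 - lr * np.outer(delta_out, h)
    W1_new = W1 - lr * np.outer(delta_hid, x)
    return W1_new, W2_new
```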

It seems to be an everlasting discussion. Spending a lot of additional time and extra money to implement a particular algorithm on parallel hardware is often considered the ultimate solution to all existing time problems by some, and the most pointless waste of time by others. In fact, there are many pros and cons, which should always be… (More)

Multiple Layer Perceptron networks trained with the backpropagation algorithm are very frequently used to solve a wide variety of real-world problems. Usually a gradient descent algorithm is used to adapt the weights based on a comparison between the desired and actual network response to a given input stimulus. All training pairs, each consisting of input… (More)

Multidimensional Scaling (MDS) is a powerful dimension reduction technique for embedding high-dimensional data into a low-dimensional target space. Thereby, the distance relationships in the source space are reconstructed in the target space as well as possible according to a given embedding criterion. Here, a new stress function with intuitive properties and a… (More)
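A common example of such an embedding criterion is Kruskal's stress-1, sketched below; note this is the classical criterion, not the new stress function the paper proposes, and the names `kruskal_stress`, `D_source`, `X_target` are this sketch's own.

```python
import numpy as np

def kruskal_stress(D_source, X_target):
    """Kruskal's stress-1 between source-space distances and the
    pairwise distances of a low-dimensional embedding.

    D_source: (n, n) distance matrix in the high-dimensional space.
    X_target: (n, k) point coordinates in the embedding space.
    """
    # Pairwise Euclidean distances in the target space.
    diff = X_target[:, None, :] - X_target[None, :, :]
    D_target = np.linalg.norm(diff, axis=-1)

    # Normalized sum of squared distance mismatches.
    num = np.sum((D_source - D_target) ** 2)
    den = np.sum(D_target ** 2)
    return np.sqrt(num / den)
```

A perfect embedding (target distances equal to source distances) yields a stress of zero; larger values indicate more distortion.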

A correlation-based similarity measure is derived for generalized relevance learning vector quantization (GRLVQ). The resulting GRLVQ-C classifier makes Pearson correlation available in a classification cost framework where data prototypes and global attribute weighting terms are adapted into directions of minimum cost function values. In contrast to the… (More)
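The core of such a correlation-based measure can be sketched as the dissimilarity d = 1 - r, with r the Pearson correlation between a data vector and a prototype. The full GRLVQ-C cost function and the adapted attribute weighting terms are omitted here, and the function name is an assumption of this sketch.

```python
import numpy as np

def pearson_dissimilarity(x, w):
    """Correlation-based dissimilarity d = 1 - r between a data
    vector x and a prototype w, where r is the Pearson correlation."""
    # Center both vectors, then take the normalized inner product.
    xc = x - x.mean()
    wc = w - w.mean()
    r = (xc @ wc) / (np.linalg.norm(xc) * np.linalg.norm(wc))
    return 1.0 - r
```

Unlike the Euclidean distance, this measure is invariant to shifting and positive scaling of the vectors, which is what makes correlation attractive for profile-shaped data.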

Neural processing of large-scale data sets containing both many input/output variables and a large number of training examples often leads to very large networks. Once these networks become large-scale in the truest sense of the word (several tens of thousands of weights), two major inconveniences (or possibly a little more than that) occur: (1) conventional… (More)

In this work we introduce a method for visualization of fuzzy label information obtained from the prototype-based fuzzy labeled self-organizing map (FLSOM) for fuzzy classification. FLSOM returns vectors of fuzzy class labels for the prototypes containing class similarity information. This information is used for appropriate visualization by an adequate,… (More)
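One simple way to turn such fuzzy label vectors into a visualization is to blend per-class colors by membership degree, so that ambiguous prototypes appear as intermediate hues. This mapping and its names (`fuzzy_labels_to_rgb`, `class_colors`) are an illustration of the idea, not the visualization scheme used in the paper.

```python
import numpy as np

def fuzzy_labels_to_rgb(labels, class_colors):
    """Blend per-class colors according to fuzzy membership degrees.

    labels: (n_prototypes, n_classes) fuzzy label vectors.
    class_colors: (n_classes, 3) RGB color assigned to each class.
    Returns an (n_prototypes, 3) array of blended RGB colors.
    """
    # Normalize each label vector so memberships sum to one.
    labels = labels / labels.sum(axis=1, keepdims=True)
    # Membership-weighted mixture of the class colors.
    return labels @ class_colors
```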