Asymptotic Level Density of the Elastic Net Self-Organizing Feature Map

Abstract

Whereas the Kohonen Self-Organizing Map shows an asymptotic level density following a power law with a magnification exponent 2/3, an exponent of 1 would be desirable, as it corresponds to an optimal map in the sense of information theory. In this paper we study, analytically and numerically, the magnification behaviour of the Elastic Net algorithm as a model for self-organizing feature maps. In contrast to the Kohonen map, the Elastic Net shows no power law; for one-dimensional maps, however, the density nevertheless follows a universal magnification law, i.e. it depends only on the local stimulus density, is independent of position, and decouples from the stimulus density at other positions.

Introduction

Self-Organizing Feature Maps map an input space, such as the retina or skin receptor fields, onto a neural layer by feedforward structures with lateral inhibition. Biological maps show as defining properties topology preservation, error tolerance, plasticity (the ability to adapt to changes in the input space), and self-organized formation by a local process, since the global structure cannot be coded genetically. The self-organizing feature map algorithm proposed by Kohonen [1] has become a successful model for topology-preserving primary sensory processing in the cortex [2], and a useful tool in technical applications [3].

The Kohonen algorithm for Self-Organizing Feature Maps is defined as follows: every stimulus $v$ of a Euclidean input space $V$ is mapped to the neuron with position $s$ in the neural layer $R$ that has the highest neural activity, given by the condition

$$ |w_s - v| = \min_{r \in R} |w_r - v| \qquad (1) $$

where $|\cdot|$ denotes the Euclidean distance in input space. In the Kohonen model the learning rule for each synaptic weight vector $w_r$ is given by

$$ w_r^{\mathrm{new}} = w_r + \eta \cdot g_{rs} \cdot (v - w_r) \qquad (2) $$

with $g_{rs}$ a Gaussian function of the Euclidean distance $|r - s|$ in the neural layer. The function $g_{rs}$ describes the topology in the neural layer. The parameter $\eta$ determines the speed of learning and can be adjusted during the learning process. Topology preservation is enforced by the common update of all weight vectors whose neuron $r$ is adjacent to the center of excitation $s$.
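To make Eqs. (1) and (2) concrete, the following is a minimal numerical sketch (not the authors' code) of a one-dimensional Kohonen map. All parameter values, the stimulus density $P(v) = 2v$ on $[0,1]$, and the exponent fit at the end are illustrative assumptions; trained long enough with a suitable neighborhood width, the fitted exponent $\alpha$ in $\rho(w) \propto P(w)^{\alpha}$ should come out near the value 2/3 quoted above.

```python
import numpy as np

# Minimal 1-D Kohonen map sketch (not the authors' code); N, eta, sigma and
# the stimulus density P(v) = 2v on [0, 1] are illustrative assumptions.
rng = np.random.default_rng(0)
N = 200
r = np.arange(N)               # neuron positions on the chain
w = np.sort(rng.random(N))     # weight vectors w_r, ordered initial state
eta, sigma = 0.1, 3.0          # learning rate and Gaussian neighborhood width

for t in range(200_000):
    v = np.sqrt(rng.random())                     # sample with density P(v) = 2v
    s = np.argmin(np.abs(w - v))                  # winner: |w_s - v| = min_r |w_r - v|  (Eq. 1)
    g = np.exp(-(r - s) ** 2 / (2 * sigma ** 2))  # g_{rs}: Gaussian in layer distance |r - s|
    w += eta * g * (v - w)                        # w_r^new = w_r + eta g_{rs} (v - w_r)  (Eq. 2)

# Crude estimate of the magnification exponent alpha in rho(w) ~ P(w)^alpha:
# the local neuron density rho is the inverse spacing of the ordered weights.
ws = np.sort(w)
mid = ws[1:-1]
rho = 2.0 / (ws[2:] - ws[:-2])
keep = (mid > 0.2) & (mid < 0.9)                  # avoid boundary effects
alpha = np.polyfit(np.log(2.0 * mid[keep]), np.log(rho[keep]), 1)[0]
print(f"fitted exponent: {alpha:.2f}   (Kohonen theory: 2/3)")
```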
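The extract does not spell out the Elastic Net update itself. For orientation only, here is a hedged sketch of an online, single-stimulus variant of the classical Durbin–Willshaw elastic net rule: it replaces the hard winner of Eq. (1) by a soft assignment that is Gaussian in input-space distance, and the output-space neighborhood $g_{rs}$ by an explicit elastic coupling between chain neighbours. Whether this matches the exact variant analysed in the paper is an assumption, and all parameter values are ours.

```python
import numpy as np

# Hedged sketch of an online Durbin-Willshaw-style elastic net on a 1-D chain.
# NOT necessarily the paper's exact formulation; parameters are illustrative.
rng = np.random.default_rng(1)
N = 200
w = np.sort(rng.random(N))       # chain of weights w_0 <= ... <= w_{N-1}
eta = 0.05                       # learning rate
kappa = 0.2                      # elasticity (tension) strength
K = 0.05                         # width of the soft assignment in input space

for t in range(200_000):
    v = np.sqrt(rng.random())                   # stimulus density P(v) = 2v, as above
    a = np.exp(-(v - w) ** 2 / (2 * K ** 2))    # soft assignment: Gaussian in |v - w_r|,
    a /= a.sum()                                # normalized over all neurons
    tension = np.zeros(N)                       # discrete Laplacian w_{r+1} - 2 w_r + w_{r-1}
    tension[1:-1] = w[2:] - 2.0 * w[1:-1] + w[:-2]
    w += eta * (a * (v - w) + kappa * tension)  # data term + elastic coupling
```

The original elastic net sums the data term over all stimuli per step and anneals $K$; the online form above is chosen only to parallel the Kohonen sketch.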

DOI: 10.1007/3-540-46084-5_152

Cite this paper

@inproceedings{Claussen2002AsymptoticLD,
  title     = {Asymptotic Level Density of the Elastic Net Self-Organizing Feature Map},
  author    = {Jens Christian Claussen and Heinz G. Schuster},
  booktitle = {ICANN},
  year      = {2002}
}