Haroon Atique Babri

It is well known that standard single-hidden layer feedforward networks (SLFNs) with at most N hidden neurons (including biases) can learn N distinct samples (x(i), t(i)) with zero error, and the weights connecting the input neurons and the hidden neurons can be chosen "almost" arbitrarily. However, these results have been obtained for the case when the …
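The claim above can be illustrated with a minimal numpy sketch (not code from the paper): choose the input-to-hidden weights randomly, then solve a linear system for the output weights. With N sigmoid hidden units and N distinct samples, the N×N hidden-output matrix is generically invertible, so the targets are matched exactly. The sizes and the sigmoid activation here are illustrative assumptions.

```python
import numpy as np

# Sketch: an SLFN with N hidden sigmoid neurons fitting N distinct
# samples with zero error. Input->hidden weights are chosen randomly
# ("almost arbitrarily"); only the output weights are solved for.
rng = np.random.default_rng(0)
N, d = 8, 3                       # N samples, d input dimensions (illustrative)
X = rng.standard_normal((N, d))   # N distinct input samples x(i)
T = rng.standard_normal((N, 1))   # arbitrary targets t(i)

W = rng.standard_normal((d, N))   # random input->hidden weights
b = rng.standard_normal(N)        # random hidden biases
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))  # N x N hidden-layer output matrix

beta = np.linalg.solve(H, T)      # output weights satisfying H @ beta = T
error = np.max(np.abs(H @ beta - T))
print(error)                      # zero up to floating-point round-off
```

Only the output layer requires training here; this is the same observation that later motivated random-feature schemes built on SLFNs.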
Multilayer perceptrons with hard-limiting (signum) activation functions can form complex decision regions. It is well known that a three-layer perceptron (two hidden layers) can form arbitrary disjoint decision regions and a two-layer perceptron (one hidden layer) can form single convex decision regions. This paper further proves that single hidden layer …
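A small sketch of the well-known convex case, under my own choice of weights: each hard-limit hidden unit defines a half-plane, and an output unit that fires only when all hidden units agree carves out their intersection, here the unit square.

```python
import numpy as np

def signum(z):
    # hard-limiting activation: +1 for z >= 0, -1 otherwise
    return np.where(z >= 0, 1.0, -1.0)

# One hidden layer of four hard-limit units, each defining a half-plane;
# their intersection is the convex region [0,1] x [0,1].
W1 = np.array([[ 1.0,  0.0],   # x >= 0
               [-1.0,  0.0],   # x <= 1
               [ 0.0,  1.0],   # y >= 0
               [ 0.0, -1.0]])  # y <= 1
b1 = np.array([0.0, 1.0, 0.0, 1.0])

def inside(p):
    h = signum(W1 @ p + b1)      # hidden layer: 4 half-plane tests
    return signum(h.sum() - 3.5) # output fires iff all 4 units output +1

print(inside(np.array([0.5, 0.5])))  # 1.0  (inside the square)
print(inside(np.array([2.0, 0.5])))  # -1.0 (outside)
```

Disjoint regions need the second hidden layer: a further hard-limit unit can OR together several such convex pieces.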
It has been proved that in one-dimensional cases, the weights of Kohonen's self-organizing maps (SOMs) will become ordered with probability 1; once the weights are ordered, they cannot become disordered in future training. It is difficult to analyze Kohonen's SOMs in multidimensional cases; however, it has been conjectured that similar results seem to be …
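The one-dimensional ordering behavior can be observed in a short simulation (a sketch, with my own choice of neuron count, neighborhood radius, and learning-rate schedule, not the paper's analysis): after training on uniform inputs, the weight vector is monotone.

```python
import numpy as np

# Sketch of 1-D Kohonen SOM training on uniform inputs: the weights
# typically become monotonically ordered and, once ordered, stay ordered.
rng = np.random.default_rng(1)
w = rng.uniform(0, 1, 10)            # 10 neurons, random initial weights

steps = 20000
for t in range(steps):
    x = rng.uniform(0, 1)            # input sample
    c = int(np.argmin(np.abs(w - x)))  # best-matching unit
    lr = 0.1 * (1 - t / steps)       # decaying learning rate
    for j in range(len(w)):
        if abs(j - c) <= 1:          # neighborhood of radius 1
            w[j] += lr * (x - w[j])

d = np.diff(w)
print(bool(np.all(d > 0) or np.all(d < 0)))  # monotone, i.e. ordered
```

Monotonicity of the weights is exactly the "ordered" state referred to in the result: neighboring neurons end up representing neighboring regions of the input line.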