Normalized Gaussian Radial Basis Function networks

Abstract

The performance of Normalized RBF (NRBF) nets and standard RBF nets is compared on simple classification and mapping problems. In Normalized RBF networks, the traditional roles of weights and activities in the hidden layer are switched. Hidden nodes perform a function similar to a Voronoi tessellation of the input space, and the output weights become the network's output over the partition defined by the hidden nodes. Consequently, NRBF nets lose the localized characteristics of standard RBF nets and exhibit excellent generalization properties, to the extent that hidden nodes need to be recruited only for training data at the boundaries of class domains. Reflecting this, a new learning rule is proposed that greatly reduces the number of hidden nodes needed in classification tasks. As for mapping applications, it is shown that NRBF nets may outperform standard RBF nets and exhibit more uniform errors. In both applications, the width of the basis functions is not critical, which makes NRBF nets easy to use.
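The normalization described above can be sketched in a few lines: a standard RBF net outputs a weighted sum of Gaussian activations, while the NRBF net divides that sum by the total activation, so the output becomes a weighted average dominated by the nearest centre. This is a minimal illustrative sketch, not the paper's implementation; centre placement and the proposed boundary-based recruitment rule are omitted, and all names and parameter values here are assumptions.

```python
import numpy as np

def rbf_activations(x, centers, width):
    # Gaussian activation of each hidden node for a single input x
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

def standard_rbf(x, centers, weights, width):
    # Weighted sum of activations: localized, decays to 0 far from all centres
    a = rbf_activations(x, centers, width)
    return weights @ a

def normalized_rbf(x, centers, weights, width):
    # Weighted average of activations: far from all centres, the nearest
    # centre's weight dominates, giving a Voronoi-like partition of the input
    a = rbf_activations(x, centers, width)
    return weights @ a / np.sum(a)
```

For example, with centres at 0 and 1 and output weights 1 and 2, an input far outside the training region (say x = 10) drives the standard RBF output toward 0, while the normalized net still returns roughly 2, the weight of the nearest centre. This illustrates the non-local generalization and the insensitivity to the width parameter noted in the abstract.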

DOI: 10.1016/S0925-2312(98)00027-7

10 Figures and Tables

Statistics

[Chart: Citations per Year, 2001-2017]

86 Citations

Semantic Scholar estimates that this publication has 86 citations based on the available data.


Cite this paper

@article{Bugmann1998NormalizedGR,
  title={Normalized Gaussian Radial Basis Function networks},
  author={Guido Bugmann},
  journal={Neurocomputing},
  year={1998},
  volume={20},
  pages={97-110}
}