A general regression neural network

  • D. F. Specht
  • IEEE Transactions on Neural Networks, vol. 2, no. 6
  • Published 1 November 1991
  • Computer Science
A memory-based network that provides estimates of continuous variables and converges to the underlying (linear or nonlinear) regression surface is described. The general regression neural network (GRNN) is a one-pass learning algorithm with a highly parallel structure. It is shown that, even with sparse data in a multidimensional measurement space, the algorithm provides smooth transitions from one observed value to another. The algorithmic form can be used for any regression problem in which… 
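
The estimator the abstract describes can be sketched as a kernel-weighted average over the stored training samples. This is a minimal illustration, not Specht's exact network layout; the bandwidth `sigma` and the toy data are assumptions for demonstration.

```python
import numpy as np

def grnn_predict(X_train, y_train, x, sigma=0.5):
    """GRNN estimate: a kernel-weighted average of the stored targets.

    Each training sample contributes a weight exp(-d^2 / (2*sigma^2)),
    where d is its Euclidean distance to the query x. "One-pass
    learning" here means training is just storing the samples.
    """
    d2 = np.sum((X_train - x) ** 2, axis=1)       # squared distances to x
    w = np.exp(-d2 / (2.0 * sigma ** 2))          # kernel weights
    return float(np.dot(w, y_train) / np.sum(w))  # weighted mean of targets

# Toy 1-D regression: sparse samples of y = x^2 on [0, 2]
X = np.linspace(0.0, 2.0, 9).reshape(-1, 1)
y = X[:, 0] ** 2
print(grnn_predict(X, y, np.array([1.0]), sigma=0.2))  # close to 1.0
```

Because the output is a convex combination of observed targets, the estimate transitions smoothly between observations even where the data are sparse, which is the behavior the abstract emphasizes.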

Function approximation using backpropagation and general regression neural networks

  • L. Marquez, T. Hill
  • Computer Science
    Proceedings of the Twenty-sixth Hawaii International Conference on System Sciences
  • 1993
The approximation capabilities of backpropagation (BP) neural networks and D. Specht's (1991) general regression neural network (GRNN) are compared using data generated from 14 functions under three…

An adjustable model for linear to nonlinear regression

  • T. Jan, A. Zaknich
  • Computer Science
    IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)
  • 1999
This paper examines the effectiveness and utility of combining a linear regression model with general regression neural network or modified probabilistic neural network for better linear extrapolation and function approximation.

Statistical and neural network techniques for nonparametric regression

This paper presents a common taxonomy of statistical and neural network methods for nonparametric regression, and a novel method for adaptive positioning of knots called Constrained Topological Mapping is discussed in some detail.

The general memory neural network and its relationship with basis function architectures

  • A. Kolcz, Nigel M. Allinson
  • Computer Science
  • 1999

An oracle based on the general regression neural network

The paper describes how the general regression neural network can be modified to serve as a powerful oracle for combining decisions from multiple arbitrary models.

Radial basis function neural network as predictive process control model

This experimental study compares the performance of the widespread backpropagation (BP) network with that of a radial basis function (RBF) network and a generalized regression neural network.

Adaptive learning schemes for the modified probabilistic neural network

  • A. Zaknich, C.J.S. de Silva
  • Computer Science
    Proceedings of 3rd International Conference on Algorithms and Architectures for Parallel Processing
  • 1997
This paper describes adaptive learning schemes for the modified probabilistic neural network for both stationary and nonstationary data statistics and compares the network's learning ability with that of the general regression and radial basis function neural networks.

Learning in a Non-stationary Environment Using the Recursive Least Squares Method and Orthogonal-Series Type Regression Neural Network

In the paper the recursive least squares method, in combination with a general regression neural network, is applied to learning in a non-stationary environment, and sufficient conditions for convergence in probability are given.



Extensions of a Theory of Networks for Approximation and Learning

A theoretical framework for approximation based on regularization techniques leads to a class of three-layer networks called Generalized Radial Basis Functions (GRBF), which are not only equivalent to generalized splines but also closely related to several pattern recognition methods and neural network algorithms.

Probabilistic neural networks

Probabilistic neural networks for classification, mapping, or associative memory

  • D. Specht
  • Computer Science
    IEEE 1988 International Conference on Neural Networks
  • 1988
It can be shown that, by replacing the sigmoid activation function often used in neural networks with an exponential function, a neural network can be formed which computes nonlinear decision boundaries…
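
The decision rule this snippet alludes to can be sketched as a Parzen density estimate per class followed by an argmax. This is an illustrative reading of the PNN idea, not Specht's exact architecture; `sigma` and the toy data are assumptions.

```python
import numpy as np

def pnn_classify(X, labels, x, sigma=0.3):
    """PNN-style decision: Parzen density per class, pick the largest.

    The "exponential activation" is the Gaussian kernel
    exp(-||x - x_i||^2 / (2*sigma^2)); summation units average the
    kernel responses within each class.
    """
    scores = {}
    for c in np.unique(labels):
        d2 = np.sum((X[labels == c] - x) ** 2, axis=1)
        scores[c] = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
    return max(scores, key=scores.get)  # class with highest density

# Two toy clusters in 2-D
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(pnn_classify(X, y, np.array([0.05, 0.1])))  # → 0
```

With equal priors, picking the class of maximum estimated density approximates the Bayes decision rule as the sample size grows.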

Fast Learning in Networks of Locally-Tuned Processing Units

We propose a network architecture which uses a single internal layer of locally-tuned processing units to learn both classification tasks and real-valued function approximations (Moody and Darken)…

CMAC: an associative neural network alternative to backpropagation

The CMAC (cerebellar model arithmetic computer) neural network, an alternative to backpropagated multilayer networks, is described and applications in robot control, pattern recognition, and signal processing are briefly described.

Self-Organizing Neural Network for Non-Parametric Regression Analysis

The idea of using Kohonen’s self-organizing maps is applied to the problem of non-parametric regression analysis, i.e. evaluation (approximation) of the unknown function of N-1 variables given a…

Integrating neural networks with influence diagrams for multiple sensor diagnostic systems

Since the proposed models do not employ the time-consuming backpropagation and simulated annealing algorithms, this integrated network appears more promising for real-time applications.

Variable Kernel Estimates of Multivariate Densities

A class of density estimates using a superposition of kernels, where the kernel parameter can depend on the nearest-neighbor distances, is studied using simulated data; its performance is superior to that of the usual Parzen estimators.
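
The variable-kernel idea can be sketched in one dimension: each sample gets its own Gaussian bandwidth proportional to its k-th nearest-neighbor distance, so kernels narrow where data are dense and widen in the tails. The choices of `k`, `alpha`, and the test data below are illustrative assumptions.

```python
import numpy as np

def variable_kernel_density(data, x, k=10, alpha=1.0):
    """Variable-kernel density estimate in 1-D.

    Sample x_i gets bandwidth h_i = alpha * d_ik, where d_ik is the
    distance from x_i to its k-th nearest neighbour, adapting each
    kernel to the local sample density.
    """
    dists = np.abs(data[:, None] - data[None, :])  # pairwise distances
    d_k = np.sort(dists, axis=1)[:, k]             # column 0 is the self-distance 0
    h = alpha * d_k                                # per-sample bandwidths
    z = (x - data) / h
    return float(np.mean(np.exp(-0.5 * z ** 2) / (h * np.sqrt(2.0 * np.pi))))

rng = np.random.default_rng(0)
sample = rng.standard_normal(500)
print(variable_kernel_density(sample, 0.0))  # near the N(0,1) mode, roughly 0.4
```

A fixed-bandwidth Parzen estimator must trade off oversmoothing the mode against undersmoothing the tails; tying each bandwidth to a nearest-neighbor distance sidesteps that trade-off, which is the advantage the simulations in the paper report.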

Series Estimation of a Probability Density Function

A class of nonparametric estimators of f(x) based on a set of n observations has been proved by Parzen [1] to be consistent and asymptotically normal subject to certain conditions. Although quite…

Connectionist learning control systems: submarine depth control

Control system design for vehicles with highly nonlinear, time-varying, or poorly modeled dynamics is considered. The use of connectionist systems as learning controllers is proposed. The ability of…