We propose a network architecture which uses a single internal layer of locally-tuned processing units to learn both classification tasks and real-valued function approximations (Moody and Darken…
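As a hedged illustration of the architecture this abstract names, the sketch below builds a one-hidden-layer network of locally-tuned (Gaussian) units with a linear output layer. The center grid, widths, and the least-squares fit of the output weights are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def rbf_forward(x, centers, widths, weights):
    """Evaluate the network at a single input x (1-D array)."""
    # Locally-tuned units: each responds strongly only near its own center.
    d2 = np.sum((centers - x) ** 2, axis=1)
    h = np.exp(-d2 / (2.0 * widths ** 2))
    return h @ weights  # linear combination of unit activations

# Toy usage (assumed setup): approximate f(x) = sin(x) on [0, pi]
# with 10 Gaussian units whose centers lie on a uniform grid.
centers = np.linspace(0.0, np.pi, 10).reshape(-1, 1)
widths = np.full(10, 0.5)          # illustrative common width
X = np.linspace(0.0, np.pi, 50).reshape(-1, 1)
H = np.exp(-((X - centers.T) ** 2) / (2.0 * widths ** 2))
weights, *_ = np.linalg.lstsq(H, np.sin(X).ravel(), rcond=None)
pred = rbf_forward(np.array([np.pi / 2]), centers, widths, weights)
```

Because each unit is active only in a local region, only the weights of nearby units matter for any given input, which is what makes such networks fast to train on both classification and real-valued targets.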

Stochastic gradient descent is a general algorithm which includes LMS, on-line backpropagation, and adaptive k-means clustering as special cases. The standard choices of the learning rate (both…

We present and compare learning rate schedules for stochastic gradient descent, a general algorithm which includes LMS, on-line backpropagation and k-means clustering as special cases. We introduce…
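A hedged sketch of what such a schedule looks like in practice: stochastic gradient descent on a noisy 1-D quadratic with a "search-then-converge"-style rate η(t) = η₀ / (1 + t/τ), which decays like 1/t asymptotically. The specific constants (η₀, τ, the target, the noise level) are illustrative choices, not values from the paper.

```python
import random

def sgd_quadratic(target=3.0, eta0=0.5, tau=100.0, steps=5000, seed=0):
    """Minimize (w - target)^2 from noisy gradients with a decaying rate."""
    rng = random.Random(seed)
    w = 0.0
    for t in range(steps):
        eta = eta0 / (1.0 + t / tau)   # roughly constant early, ~1/t late
        grad = 2.0 * (w - target) + rng.gauss(0.0, 1.0)  # noisy gradient
        w -= eta * grad
    return w

w_final = sgd_quadratic()
```

The large early rate lets the iterate search broadly; the 1/t tail satisfies the usual Robbins–Monro conditions (the rates sum to infinity while their squares do not), so the noise is averaged out and the iterate settles near the minimum.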

Abstract. This paper deals with sparse approximations by means of convex combinations of elements from a predetermined “basis” subset S of a function space. Specifically, the focus is on the rate at…
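For orientation, the classical Hilbert-space rate that work in this area typically starts from is the Maurey–Jones–Barron bound (the abstract is truncated, so the precise rates this paper studies may differ): if $f$ lies in the closed convex hull of $S$ and $\sup_{g \in S}\|g\| \le b$, then for every $n$ there is an $n$-term convex combination with

```latex
\[
  \Bigl\| f - \sum_{i=1}^{n} \lambda_i g_i \Bigr\|
  \;\le\; \sqrt{\frac{b^2 - \|f\|^2}{n}},
  \qquad \lambda_i \ge 0,\ \sum_{i=1}^{n} \lambda_i = 1,\ g_i \in S .
\]
```

The $O(1/\sqrt{n})$ decay is dimension-independent, which is what makes such convex-combination rates interesting for function spaces; outside Hilbert spaces the achievable exponent generally degrades.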

This paper describes an approach for obtaining very realistic movement paths through a terrain set by applying the properties of a fluid simulation to produce intuitively human-like results. Similar…

Many military simulations and computer entertainment products share a need to model the ability of individual entities (men, tanks, planes, etc.) to see one another and to hide from one another in a…

Many tasks require “reasoning”—i.e., deriving conclusions from a corpus of explicitly stored information—to solve their range of problems. An ideal reasoning system would…

Learning to predict events in the near future is fundamental to human and artificial agents. Many prediction techniques are unable to learn and predict a stream of relational data online when the…