# Gain Adaptation Beats Least Squares

Appeared in *Proceedings of the Seventh Yale Workshop on Adaptive and Learning Systems*.

@inproceedings{Sutton2004AppearedIP, title={Gain Adaptation Beats Least Squares}, booktitle={Proceedings of the Seventh Yale Workshop on Adaptive and Learning Systems}, author={Richard S. Sutton}, year={2004} }

I present computational results suggesting that gain adaptation algorithms based in part on connectionist learning methods may improve over least squares and other classical parameter-estimation methods for stochastic time-varying linear systems. The new algorithms are evaluated with respect to classical methods along three dimensions: asymptotic error, computational complexity, and required prior knowledge about the system. The new algorithms are all of the same order of complexity as LMS methods…
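As a point of reference for the complexity claim, the baseline LMS (least-mean-square) update that the new algorithms are compared against can be sketched as follows. This is a minimal illustration, not code from the paper; the function name and step-size value are my own:

```python
import numpy as np

def lms_step(w, x, y, alpha=0.05):
    """One LMS update: stochastic gradient descent on squared prediction error.

    w     -- current weight vector
    x     -- input (feature) vector
    y     -- observed target
    alpha -- fixed scalar step size (the "gain" that gain-adaptation
             methods instead tune per parameter)
    """
    delta = y - w @ x          # prediction error
    return w + alpha * delta * x
```

Each step is O(n) in the number of parameters, which is the order of complexity the abstract attributes to both LMS and the new gain-adaptation algorithms.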

## References


### Adapting Bias by Gradient Descent: An Incremental Version of Delta-Bar-Delta

- Computer Science, AAAI
- 1992

Presents a new algorithm, the Incremental Delta-Bar-Delta (IDBD) algorithm, for learning appropriate biases based on previous learning experience, and a novel interpretation of the IDBD algorithm as an incremental form of hold-one-out cross-validation.
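The IDBD update described above can be sketched roughly as follows, assuming the standard formulation with per-weight step-sizes maintained in log space (`beta`), a scalar meta-step-size (`theta`), and per-weight memory traces (`h`); variable names and the meta-step-size value are my own:

```python
import numpy as np

def idbd_step(w, beta, h, x, y, theta=0.01):
    """One IDBD update: LMS with per-weight step-sizes adapted by meta-gradient.

    w    -- weight vector
    beta -- log step-sizes, one per weight (alpha_i = exp(beta_i))
    h    -- memory traces of recent weight changes
    x, y -- input vector and target
    theta -- meta-step-size controlling how fast the step-sizes adapt
    """
    delta = y - w @ x                      # prediction error
    beta = beta + theta * delta * x * h    # meta-gradient step on log step-sizes
    alpha = np.exp(beta)                   # current per-weight step-sizes
    w = w + alpha * delta * x              # LMS-style weight update
    # decay the trace where the input was recently active, then add the new change
    h = h * np.clip(1.0 - alpha * x * x, 0.0, None) + alpha * delta * x
    return w, beta, h
```

Like LMS, each step costs O(n) in the number of weights, which is why gain-adaptation methods of this family can be compared head-to-head with LMS on complexity.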

### Increased rates of convergence through learning rate adaptation

- Computer Science, Neural Networks
- 1988

### Practical Characteristics of Neural Network and Conventional Pattern Classifiers on Artificial and Speech Problems

- Computer Science, NIPS
- 1989

The results suggest that classifier selection should often depend more heavily on practical considerations concerning memory and computation resources, and restrictions on training and classification times than on error rate.

### Experimental Analysis of the Real-time Recurrent Learning Algorithm

- Computer Science
- 1989

A series of simulation experiments are used to investigate the power and properties of the real-time recurrent learning algorithm, a gradient-following learning algorithm for completely recurrent networks running in continually sampled time.

### Iterative Construction of Sparse Polynomial Approximations

- Computer Science, NIPS
- 1991

The tree-growing heuristic in LMS Trees is extended to approximation of arbitrary polynomials of the input features; the resulting algorithm is shown to discover a known polynomial from samples and to make accurate estimates of pixel values in an image-processing task.

### Basis-Function Trees as a Generalization of Local Variable Selection Methods

- Computer Science, NIPS
- 1990

A tree-structured network is presented which is a generalization of local variable selection and other techniques used in several statistical methods, including CART, ID3, C4, MARS, and others.

### Accelerated Stochastic Approximation

- Computer Science, Mathematics, SIAM J. Optim.
- 1993

Convergence with probability 1 is proved for the multidimensional analog of the Kesten accelerated stochastic approximation algorithm.

### Adaptation of cue-specific learning rates in adaptive networks: Computational and psychological perspectives

- Proceedings of the Fourteenth Annual Conference of the Cognitive Science Society
- 1992