
Gain Adaptation Beats Least Squares

Appeared in Proceedings of the Seventh Yale Workshop on Adaptive and Learning Systems.

@inproceedings{Sutton1992GainAdaptation,
  title={Gain Adaptation Beats Least Squares},
  author={Richard S. Sutton},
  booktitle={Proceedings of the Seventh Yale Workshop on Adaptive and Learning Systems},
  year={1992}
}
I present computational results suggesting that gain adaptation algorithms based in part on connectionist learning methods may improve over least squares and other classical parameter-estimation methods for stochastic time-varying linear systems. The new algorithms are evaluated with respect to classical methods along three dimensions: asymptotic error, computational complexity, and required prior knowledge about the system. The new algorithms are all of the same order of complexity as LMS methods…
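As a point of reference for the classical baseline the abstract mentions, here is a minimal sketch of the LMS (least mean squares) update for online linear estimation. This is a generic illustration, not code from the paper; the step size `alpha`, the problem dimensions, and the synthetic data are illustrative choices.

```python
import numpy as np

def lms_step(w, x, target, alpha=0.05):
    """One LMS update: move w along the error gradient, w += alpha * error * x."""
    error = target - w @ x
    return w + alpha * error * x

# Illustrative use: recover a fixed linear target from noisy samples.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
w = np.zeros(3)
for _ in range(2000):
    x = rng.normal(size=3)
    y = w_true @ x + 0.01 * rng.normal()
    w = lms_step(w, x, y)
```

LMS uses a single fixed gain `alpha` for every weight; the gain adaptation algorithms the paper evaluates instead learn per-weight gains online, at the same O(n) per-step cost.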

References


Adapting Bias by Gradient Descent: An Incremental Version of Delta-Bar-Delta

A new algorithm, the Incremental Delta-Bar-Delta (IDBD) algorithm, for the learning of appropriate biases based on previous learning experience, and a novel interpretation of the IDBD algorithm as an incremental form of hold-one-out cross validation.
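The IDBD update rules from that paper are compact enough to sketch directly. The following follows the published update order (error, then the meta-step update of the per-weight log step sizes, then the weight update, then the trace `h`); the meta step size `theta`, the initial step sizes, and the synthetic driver are illustrative assumptions.

```python
import numpy as np

def idbd_step(w, h, beta, x, target, theta=0.01):
    """One IDBD update: each weight i has its own log step size beta[i],
    adapted by correlating the current gradient with a trace h of past updates."""
    delta = target - w @ x                       # prediction error
    beta = beta + theta * delta * x * h          # meta-gradient step on log step sizes
    alpha = np.exp(beta)                         # per-weight step sizes (always positive)
    w = w + alpha * delta * x                    # LMS-like update with per-weight gains
    h = h * np.clip(1.0 - alpha * x * x, 0.0, None) + alpha * delta * x  # decaying trace
    return w, h, beta

# Illustrative use on a fixed linear target.
rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0])
w, h = np.zeros(2), np.zeros(2)
beta = np.full(2, np.log(0.05))                  # start near a typical LMS gain
for _ in range(3000):
    x = rng.normal(size=2)
    w, h, beta = idbd_step(w, h, beta, x, w_true @ x, theta=0.01)
```

Because the step sizes live in log space, they stay positive, and relevant inputs accumulate larger gains than irrelevant ones over repeated learning experience.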

Practical Characteristics of Neural Network and Conventional Pattern Classifiers on Artificial and Speech Problems

The results suggest that classifier selection should often depend more heavily on practical considerations concerning memory and computation resources, and restrictions on training and classification times than on error rate.

Experimental Analysis of the Real-time Recurrent Learning Algorithm

A series of simulation experiments are used to investigate the power and properties of the real-time recurrent learning algorithm, a gradient-following learning algorithm for completely recurrent networks running in continually sampled time.

Iterative Construction of Sparse Polynomial Approximations

The algorithm is shown to discover a known polynomial from samples, and to make accurate estimates of pixel values in an image-processing task, based on the tree-growing heuristic in LMS Trees extended to approximation of arbitrary polynomials of the input features.

Basis-Function Trees as a Generalization of Local Variable Selection Methods

A tree-structured network is presented which is a generalization of local variable selection and other techniques used in several statistical methods, including CART, ID3, C4, MARS, and others.

Accelerated Stochastic Approximation

Convergence with probability 1 is proved for the multidimensional analog of the Kesten accelerated stochastic approximation algorithm.
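Kesten's acceleration idea, in the scalar case, is simple enough to sketch: shrink the step size only when successive errors change sign, since persistent same-sign errors suggest the estimate is still far from the target rather than oscillating around it. This is a generic illustration of the rule, not code from the reference; the initial gain `a` and the mean-estimation setup are illustrative assumptions.

```python
import numpy as np

def kesten_estimate(samples, a=1.0):
    """Estimate a scalar mean with Kesten's rule: step size a / k, where k
    counts sign changes of successive errors rather than total steps."""
    theta = 0.0
    sign_changes = 1
    prev_err = None
    for y in samples:
        err = y - theta
        if prev_err is not None and err * prev_err < 0:
            sign_changes += 1               # oscillation observed: decay the gain
        theta += (a / sign_changes) * err   # same-sign errors keep the gain large
        prev_err = err
    return theta

rng = np.random.default_rng(2)
est = kesten_estimate(3.0 + rng.normal(size=5000))
```

The multidimensional analog referenced above applies the same sign-change criterion with a single shared gain, and the reference proves its convergence with probability 1.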

Adaptation of cue-specific learning rates in adaptive networks: Computational and psychological perspectives

  • Proceedings of the Fourteenth Annual Conference of the Cognitive Science Society
  • 1992