An analysis of linear models, linear value-function approximation, and feature selection for reinforcement learning

Abstract

We show that linear value-function approximation is equivalent to a form of linear model approximation. We then derive a relationship between the model-approximation error and the Bellman error, and show how this relationship can guide feature selection for model improvement and/or value-function improvement. We also show how these results give insight into the behavior of existing feature-selection algorithms.
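The equivalence the abstract states can be checked numerically: the fixed point of linear value-function approximation (LSTD) coincides with the exact value of the linear model obtained by projecting the reward and the next-state features onto the feature span. Below is a minimal NumPy sketch under illustrative assumptions (a random fixed-policy MDP, uniform-weighted least-squares projection); all variable names are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, gamma = 6, 2, 0.9  # states, features, discount

# Random fixed-policy MDP: row-stochastic transitions P, rewards R
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)
R = rng.random(n)
Phi = rng.random((n, k))  # feature matrix, one row per state

# Linear model: least-squares projection of next features and rewards
proj = np.linalg.solve(Phi.T @ Phi, Phi.T)  # (Phi^T Phi)^{-1} Phi^T
F = proj @ (P @ Phi)                        # approximate feature dynamics
r = proj @ R                                # approximate feature reward

# Exact value of the approximate linear model
w_model = np.linalg.solve(np.eye(k) - gamma * F, r)

# LSTD fixed-point weights for the same features
w_lstd = np.linalg.solve(Phi.T @ (Phi - gamma * P @ Phi), Phi.T @ R)

print(np.allclose(w_model, w_lstd))  # True: the two solutions coincide
```

The agreement follows algebraically: multiplying the LSTD system (Phi^T Phi - gamma Phi^T P Phi) w = Phi^T R on the left by (Phi^T Phi)^{-1} yields (I - gamma F) w = r, which is exactly the linear-model value equation.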

DOI: 10.1145/1390156.1390251


Cite this paper

@inproceedings{Parr2008AnAO,
  title     = {An analysis of linear models, linear value-function approximation, and feature selection for reinforcement learning},
  author    = {Ronald E. Parr and Lihong Li and Gavin Taylor and Christopher Painter-Wakefield and Michael L. Littman},
  booktitle = {ICML},
  year      = {2008}
}