State Abstraction in MAXQ Hierarchical Reinforcement Learning

Abstract

Many researchers have explored methods for hierarchical reinforcement learning (RL) with temporal abstractions, in which abstract actions are defined that can perform many primitive actions before terminating. However, little is known about learning with state abstractions, in which aspects of the state space are ignored. In previous work, we developed the MAXQ method for hierarchical RL. …
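For context (a brief sketch drawn from the MAXQ framework itself, not from the truncated abstract above), the MAXQ value function decomposition splits the value of invoking a subtask into the reward earned while that subtask runs plus a completion term for its parent task; state abstraction then lets each component be stored over only the state variables relevant to its subtask. A standard statement of the decomposition is

\[
  Q^{\pi}(i, s, a) \;=\; V^{\pi}(a, s) + C^{\pi}(i, s, a),
\]

where \(V^{\pi}(a, s)\) is the expected cumulative reward received while child subtask \(a\) executes from state \(s\), and \(C^{\pi}(i, s, a)\) is the expected cumulative reward for completing the parent task \(i\) after \(a\) terminates.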



Cite this paper

@inproceedings{Dietterich1999StateAI,
  title     = {State Abstraction in MAXQ Hierarchical Reinforcement Learning},
  author    = {Thomas G. Dietterich},
  booktitle = {NIPS},
  year      = {1999}
}