Discovering Hierarchy in Reinforcement Learning with HEXQ
@inproceedings{Hengst2002DiscoveringHI,
  title     = {Discovering Hierarchy in Reinforcement Learning with HEXQ},
  author    = {B. Hengst},
  booktitle = {ICML},
  year      = {2002}
}
An open problem in reinforcement learning is the discovery of hierarchical structure. HEXQ, an algorithm that automatically attempts to decompose and solve a model-free factored MDP hierarchically, is described. By searching for aliased Markov sub-space regions based on the state variables, the algorithm uses temporal and state abstraction to construct a hierarchy of interlinked smaller MDPs.
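The decomposition described above starts from the factored state representation: HEXQ orders the state variables by how often they change under exploration, with the fastest-changing variable forming the lowest level of the hierarchy. A minimal sketch of that ordering step, assuming states are tuples of variable values and using a hypothetical trajectory from a taxi-like domain (position changes often, passenger status rarely):

```python
from collections import defaultdict

def order_variables_by_change_frequency(trajectory):
    """Rank state-variable indices by how often each variable changes
    between consecutive states; most frequently changing first.
    HEXQ assigns the fastest-changing variable to the lowest level."""
    change_counts = defaultdict(int)
    for state, next_state in zip(trajectory, trajectory[1:]):
        for i, (a, b) in enumerate(zip(state, next_state)):
            if a != b:
                change_counts[i] += 1
    return sorted(change_counts, key=change_counts.get, reverse=True)

# Hypothetical trajectory: (taxi_position, passenger_in_taxi).
# Position changes on most steps; passenger status changes once.
traj = [(0, 0), (1, 0), (2, 0), (2, 1), (3, 1)]
print(order_variables_by_change_frequency(traj))  # → [0, 1]
```

Variable 0 (position) changes three times versus one for variable 1, so it is placed at the bottom of the hierarchy; the later stages of HEXQ then partition each variable's values into Markov regions connected by exit states.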