Finding Hidden Hierarchy in Reinforcement Learning

@inproceedings{Poulton2005FindingHH,
  title={Finding Hidden Hierarchy in Reinforcement Learning},
  author={G. Poulton and Y. Guo and W. Lu},
  booktitle={KES},
  year={2005}
}
HEXQ is a reinforcement learning algorithm that decomposes a problem into subtasks and constructs a hierarchy over the state variables. The maximum number of hierarchy levels is bounded by the number of variables representing a state. In HEXQ, values learned for a subtask can be reused in different contexts when the subtasks are identical; non-identical subtasks must be trained separately. This paper introduces a method that addresses these two restrictions. Experimental results show…
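HEXQ assigns state variables to hierarchy levels using a frequency-of-change heuristic: the variable that changes most often along an exploratory trajectory forms the lowest level. A minimal sketch of that ordering step, assuming states are given as tuples from a sampled trajectory (the function name and input format here are illustrative, not from the paper):

```python
def order_variables_by_change_frequency(trajectory):
    """Return state-variable indices ordered from most- to
    least-frequently changing, as in HEXQ's level-assignment
    heuristic (fastest-changing variable -> lowest level)."""
    n = len(trajectory[0])
    changes = [0] * n
    # Count how often each variable's value changes between
    # consecutive states in the trajectory.
    for prev, curr in zip(trajectory, trajectory[1:]):
        for i in range(n):
            if prev[i] != curr[i]:
                changes[i] += 1
    return sorted(range(n), key=lambda i: -changes[i])


# Example: a taxi-like domain where variable 0 (position) changes
# more often than variable 1 (passenger status).
traj = [(0, 0), (1, 0), (2, 0), (2, 1), (3, 1)]
print(order_variables_by_change_frequency(traj))  # -> [0, 1]
```

With this ordering fixed, the number of levels is at most the number of variables, which is the first restriction the paper targets.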
