Recent Advances in Hierarchical Reinforcement Learning

Abstract

Reinforcement learning is bedeviled by the curse of dimensionality: the number of parameters to be learned grows exponentially with the size of any compact encoding of a state. Recent attempts to combat the curse of dimensionality have turned to principled ways of exploiting temporal abstraction, where decisions are not required at each step, but rather invoke the execution of temporally-extended activities which follow their own policies until termination. This leads naturally to hierarchical control architectures and associated learning algorithms. We review several approaches to temporal abstraction and hierarchical organization that machine learning researchers have recently developed. Common to these approaches is a reliance on the theory of semi-Markov decision processes, which we emphasize in our review. We then discuss extensions of these ideas to concurrent activities, multiagent coordination, and hierarchical memory for addressing partial observability. Concluding remarks address open challenges facing the further development of reinforcement learning in a hierarchical setting.
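The reliance on semi-Markov decision process theory mentioned above can be made concrete with the SMDP Q-learning backup: values are learned over state-activity pairs rather than state-action pairs, and the successor state's value is discounted by gamma raised to the number of primitive steps the invoked activity consumed before terminating. The sketch below is a minimal illustration of that update, not an implementation from the paper; the Option class, the env.step interface, and the constants GAMMA and ALPHA are assumptions made for this example.

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import Callable, Hashable

GAMMA = 0.99   # per-step discount factor (illustrative value)
ALPHA = 0.1    # learning rate (illustrative value)

@dataclass(frozen=True)
class Option:
    """A temporally-extended activity: an internal policy plus a
    termination condition. Both fields are placeholders here."""
    name: str
    policy: Callable[[Hashable], int]        # state -> primitive action
    terminates: Callable[[Hashable], bool]   # state -> stop executing?

# Q-values are kept over (state, option) pairs, not (state, action) pairs.
Q = defaultdict(float)

def execute_option(env, state, option):
    """Follow the option's own policy until its termination condition
    fires, accumulating discounted reward and counting elapsed steps."""
    total, discount, steps, done = 0.0, 1.0, 0, False
    while not done and not option.terminates(state):
        state, reward, done = env.step(option.policy(state))
        total += discount * reward
        discount *= GAMMA
        steps += 1
    return state, total, steps, done

def smdp_q_update(state, option, reward, duration, next_state, options):
    """One SMDP Q-learning backup. Because the option ran for `duration`
    primitive steps, future value is discounted by GAMMA ** duration."""
    best_next = max(Q[(next_state, o)] for o in options)
    target = reward + (GAMMA ** duration) * best_next
    Q[(state, option)] += ALPHA * (target - Q[(state, option)])
```

The only change from one-step Q-learning is the GAMMA ** duration factor in the target, which accounts for the variable, random amount of time that elapses between decisions at the higher level.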

DOI: 10.1023/A:1022140919877
