Recent Advances in Hierarchical Reinforcement Learning

Abstract

Reinforcement learning is bedeviled by the curse of dimensionality: the number of parameters to be learned grows exponentially with the size of any compact encoding of a state. Recent attempts to combat the curse of dimensionality have turned to principled ways of exploiting temporal abstraction, where decisions are not required at each step, but rather invoke the execution of temporally-extended activities which follow their own policies until termination. This leads naturally to hierarchical control architectures and associated learning algorithms. We review several approaches to temporal abstraction and hierarchical organization that machine learning researchers have recently developed. Common to these approaches is a reliance on the theory of semi-Markov decision processes, which we emphasize in our review. We then discuss extensions of these ideas to concurrent activities, multiagent coordination, and hierarchical memory for addressing partial observability. Concluding remarks address open challenges facing the further development of reinforcement learning in a hierarchical setting.
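The semi-Markov decision process (SMDP) view the abstract refers to can be made concrete with a small sketch. The Python fragment below is a minimal illustration, not the paper's algorithm: it runs SMDP Q-learning over two hand-coded temporally-extended activities ("options") in a toy corridor, where each option follows its own policy until its termination condition fires. The environment and all names (Option, corridor_step, run_option) are illustrative assumptions.

import random
from dataclasses import dataclass
from typing import Callable

# Toy corridor: states 0..N-1, reward on reaching state N-1 (illustrative setup).
N = 10
GAMMA = 0.9
ALPHA = 0.1

def corridor_step(s: int, a: int) -> tuple[int, float]:
    """Primitive transition: a = -1 (left) or +1 (right)."""
    s2 = max(0, min(N - 1, s + a))
    return s2, 1.0 if s2 == N - 1 else 0.0

@dataclass
class Option:
    policy: Callable[[int], int]   # maps state -> primitive action
    beta: Callable[[int], bool]    # termination condition

# Two hand-coded options: run left/right until hitting a wall.
options = [
    Option(policy=lambda s: -1, beta=lambda s: s == 0),
    Option(policy=lambda s: +1, beta=lambda s: s == N - 1),
]

def run_option(s: int, opt: Option) -> tuple[int, float, int]:
    """Execute an option to termination; return (s', discounted reward, duration k)."""
    total, k = 0.0, 0
    while True:
        s, r = corridor_step(s, opt.policy(s))
        total += (GAMMA ** k) * r
        k += 1
        if opt.beta(s):
            return s, total, k

Q = [[0.0] * len(options) for _ in range(N)]

for episode in range(200):
    s = random.randrange(N - 1)
    while s != N - 1:
        # Epsilon-greedy choice among options, not primitive actions.
        o = (random.randrange(len(options)) if random.random() < 0.1
             else max(range(len(options)), key=lambda i: Q[s][i]))
        s2, r, k = run_option(s, options[o])
        # SMDP Q-learning backup: a k-step option is discounted by gamma**k.
        Q[s][o] += ALPHA * (r + (GAMMA ** k) * max(Q[s2]) - Q[s][o])
        s = s2

print(Q)  # learned values over (state, option) pairs

The only differences from one-step Q-learning are the accumulated discounted reward r and the gamma**k discount on the bootstrap term, which together account for the variable duration of a decision and make the update semi-Markov.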

DOI: 10.1023/A:1022140919877




836 Citations

Semantic Scholar estimates that this publication has 836 citations based on the available data.


Cite this paper

@article{Barto2003RecentAI,
  title   = {Recent Advances in Hierarchical Reinforcement Learning},
  author  = {Andrew G. Barto and Sridhar Mahadevan},
  journal = {Discrete Event Dynamic Systems},
  year    = {2003},
  volume  = {13},
  pages   = {41--77}
}