The Size of MDP Factored Policies

@inproceedings{Liberatore2002TheSO,
  title={The Size of MDP Factored Policies},
  author={Paolo Liberatore},
  booktitle={AAAI/IAAI},
  year={2002}
}
Policies of Markov Decision Processes (MDPs) tell the next action to execute, given the current state and (possibly) the history of actions executed so far. Factorization is used when the number of states is exponentially large: both the MDP and the policy can then be represented in a compact form, for example by employing circuits. We prove that there are MDPs whose optimal policies require exponential space even in factored form.
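
As a rough illustration of the distinction the abstract draws (not taken from the paper), the following Python sketch contrasts an explicit tabular policy over 2^n boolean-factored states with a factored policy given as a small boolean circuit over the state variables. The toy rule, action names, and state encoding are all hypothetical; the paper's result is that for some MDPs no small circuit of this kind can represent an optimal policy.

```python
# Toy contrast (assumed example, not from the paper): a flat policy table over
# 2^n states versus a factored policy described by a small circuit over n bits.
from itertools import product

n = 4  # number of boolean state variables; the flat state space has 2**n states
ACTIONS = ("left", "right")  # hypothetical action set

def build_tabular_policy():
    """Explicit policy: one table entry per state, so 2**n entries in total."""
    table = {}
    for state in product((0, 1), repeat=n):
        # Arbitrary toy rule, used only to fill the table.
        table[state] = ACTIONS[sum(state) % 2]
    return table

def factored_policy(state):
    """Factored policy: a tiny circuit (XOR chain) over the state variables.
    Its description size depends on the circuit, not on 2**n."""
    parity = 0
    for bit in state:
        parity ^= bit
    return ACTIONS[parity]

if __name__ == "__main__":
    table = build_tabular_policy()
    print(f"tabular policy entries: {len(table)} (= 2^{n})")
    some_state = (1, 0, 1, 1)
    print("tabular action: ", table[some_state])
    print("factored action:", factored_policy(some_state))
```

Here the two representations agree on every state, but the tabular one grows exponentially with n while the circuit stays constant-size; the paper shows that this gap cannot always be closed for optimal policies.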
