Finite Horizon Markov Decision Problems and a Central Limit Theorem for Total Reward

Abstract

We prove a central limit theorem for a class of additive processes that arise naturally in the theory of finite horizon Markov decision problems. The main theorem generalizes a classic result of Dobrushin (1956) for temporally non-homogeneous Markov chains, and the principal innovation is that here the summands are permitted to depend on both the current state and a bounded number of future states of the chain. We show through several examples that this added flexibility gives one a direct path to asymptotic normality of the optimal total reward of finite horizon Markov decision problems. The same examples also explain why such results are not easily obtained by alternative Markovian techniques such as enlargement of the state space.

Mathematics Subject Classification (2010): Primary: 60J05, 90C40; Secondary: 60C05, 60F05, 60G42, 90B05, 90C27, 90C39.
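The paper's central claim can be illustrated numerically. The sketch below builds a small hypothetical two-state, two-action finite-horizon MDP (the states, rewards, and transition probabilities are invented for illustration and do not come from the paper), computes the optimal policy by backward induction, and simulates the total reward under that policy. By the kind of central limit theorem proved here, the suitably centered and scaled total reward should be approximately normal for a long horizon, so the sample mean should track the optimal value from state 0.

```python
import random
import statistics

# Hypothetical toy MDP (illustrative only): states {0, 1}, actions {0, 1}.
R = [[0.3, 0.6], [0.8, 0.1]]    # R[s][a]: one-step reward in state s under action a
P1 = [[0.2, 0.7], [0.5, 0.4]]   # P1[s][a]: probability the next state is 1

def backward_induction(n):
    """Optimal values V[t][s] and a greedy policy pi[t][s] for horizon n."""
    V = [[0.0, 0.0] for _ in range(n + 1)]   # terminal value V[n][s] = 0
    pi = [[0, 0] for _ in range(n)]
    for t in range(n - 1, -1, -1):
        for s in (0, 1):
            q = [R[s][a]
                 + (1 - P1[s][a]) * V[t + 1][0]
                 + P1[s][a] * V[t + 1][1] for a in (0, 1)]
            pi[t][s] = 0 if q[0] >= q[1] else 1
            V[t][s] = max(q)
    return V, pi

def simulate_total_reward(n, pi, rng):
    """One realization of the total reward under the policy pi, starting at state 0."""
    s, total = 0, 0.0
    for t in range(n):
        a = pi[t][s]
        total += R[s][a]
        s = 1 if rng.random() < P1[s][a] else 0
    return total

n, reps = 200, 2000
V, pi = backward_induction(n)
rng = random.Random(12345)
samples = [simulate_total_reward(n, pi, rng) for _ in range(reps)]
mean, sd = statistics.fmean(samples), statistics.stdev(samples)
# The sample mean of the total reward estimates the optimal value V[0][0];
# a histogram of `samples` should look approximately Gaussian for large n.
```

Note that the summands here depend on the current state through the time-dependent optimal action, which is exactly the kind of additive functional of a non-homogeneous chain covered by the theorem.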

Cite this paper

@inproceedings{Arlotto2015FiniteHM,
  title  = {Finite Horizon Markov Decision Problems and a Central Limit Theorem for Total Reward},
  author = {Alessandro Arlotto},
  year   = {2015}
}