Value Function Approximation in Zero-Sum Markov Games

Michail G. Lagoudakis and Ronald Parr
This paper investigates value function approximation in the context of zero-sum Markov games, which can be viewed as a generalization of the Markov decision process (MDP) framework to the two-agent case. We generalize error bounds from MDPs to Markov games and describe generalizations of reinforcement learning algorithms to Markov games. We present a generalization of the optimal stopping problem to a two-player simultaneous move Markov game. For this special problem, we provide stronger bounds…
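The core operation that distinguishes these generalized algorithms from their MDP counterparts is the backup step: the `max` of value iteration is replaced by the minimax value of a matrix game at each state, which is computed with a linear program. The sketch below is illustrative only, not the authors' implementation; the game matrices, state layout, and function names are assumptions chosen for the example.

```python
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(A):
    """Value of the zero-sum matrix game A for the row (maximizing) player.

    Solves the standard LP: maximize v subject to the mixed strategy x
    guaranteeing at least v against every pure column strategy.
    """
    n_rows, n_cols = A.shape
    # Decision variables: [x_1 .. x_n, v]; linprog minimizes, so use -v.
    c = np.zeros(n_rows + 1)
    c[-1] = -1.0
    # Constraint per column j: v - sum_i A[i, j] * x_i <= 0.
    A_ub = np.hstack([-A.T, np.ones((n_cols, 1))])
    b_ub = np.zeros(n_cols)
    # Probabilities sum to one; v is unconstrained in sign.
    A_eq = np.append(np.ones(n_rows), 0.0).reshape(1, -1)
    b_eq = np.array([1.0])
    bounds = [(0.0, None)] * n_rows + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.x[-1]

def minimax_value_iteration(R, P, gamma=0.9, iters=200):
    """Value iteration for a zero-sum Markov game (tabular sketch).

    R[s] is the |A| x |O| payoff matrix at state s; P[s, a, o] is a
    distribution over next states.  Each backup replaces the max of
    MDP value iteration with a matrix-game minimax value.
    """
    n_states = len(R)
    V = np.zeros(n_states)
    for _ in range(iters):
        for s in range(n_states):
            Q = R[s] + gamma * np.einsum("aon,n->ao", P[s], V)
            V[s] = matrix_game_value(Q)
    return V
```

For matching pennies, `matrix_game_value` returns the well-known game value of 0 with the uniform mixed strategy; the paper's function-approximation setting replaces the tabular `V` above with a parameterized approximator, which this sketch does not attempt.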