# Value Function Approximation in Zero-Sum Markov Games

```bibtex
@article{Lagoudakis2002ValueFA,
  title   = {Value Function Approximation in Zero-Sum Markov Games},
  author  = {Michail G. Lagoudakis and Ronald Parr},
  journal = {CoRR},
  year    = {2002},
  volume  = {abs/1301.0580}
}
```

- Published 2002 on arXiv

This paper investigates value function approximation in the context of zero-sum Markov games, which can be viewed as a generalization of the Markov decision process (MDP) framework to the two-agent case. We generalize error bounds from MDPs to Markov games and describe generalizations of reinforcement learning algorithms to Markov games. We present a generalization of the optimal stopping problem to a two-player simultaneous-move Markov game. For this special problem, we provide stronger bounds…
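The core backup underlying the generalization from MDPs to zero-sum Markov games replaces the `max` of the Bellman operator with the minimax value of a stage matrix game (as in Shapley's value iteration and Littman's minimax-Q). As an illustrative sketch only, not the paper's approximation method, the following restricts both players to two actions so the matrix-game value has a closed form; the state/reward layout is invented for the example:

```python
def matrix_game_value_2x2(A):
    """Minimax value of a 2x2 zero-sum matrix game (row player maximizes)."""
    (a, b), (c, d) = A
    # Pure-strategy saddle point: row player's maximin equals column player's minimax.
    maximin = max(min(a, b), min(c, d))
    minimax = min(max(a, c), max(b, d))
    if maximin == minimax:
        return maximin
    # Otherwise the unique mixed-strategy value has a closed form for 2x2 games.
    return (a * d - b * c) / (a + d - b - c)


def minimax_value_iteration(R, P, gamma=0.9, iters=200):
    """Shapley-style value iteration for a zero-sum Markov game.

    R[s][i][j] -- reward to the maximizing player in state s under actions (i, j)
    P[s][i][j] -- successor state (deterministic transitions, for simplicity)
    """
    V = [0.0 for _ in R]
    for _ in range(iters):
        # Each backup solves the stage game whose entries are reward plus
        # discounted value of the successor state.
        V = [matrix_game_value_2x2(
                [[R[s][i][j] + gamma * V[P[s][i][j]] for j in range(2)]
                 for i in range(2)])
             for s in range(len(R))]
    return V


# Example: repeated matching pennies as a single-state Markov game; its value is 0.
R = [[[1, -1], [-1, 1]]]
P = [[[0, 0], [0, 0]]]
print(minimax_value_iteration(R, P)[0])
```

In the approximate setting studied by the paper, the exact value `V` above would be replaced by a parameterized approximation, which is where the generalized error bounds come in.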
