Reinforcement Learning in Large Population Models: A Continuity Equation Approach

Abstract

We study an evolutionary model in which strategy revision protocols are based on agent-specific characteristics rather than wider social characteristics. We assume that agents are primed to play mixed strategies. At any time, the distribution of mixed strategies over agents in a population is described by a probability measure. In each round, a pair of randomly chosen agents play a game, after which they update their mixed strategies using reinforcement-driven rules based on payoff information. The distribution over mixed strategies thus changes. In a continuous-time limit, this change is described by non-linear continuity equations. We provide a general solution to these equations, which we use to analyze some simple evolutionary scenarios: negative definite symmetric games, doubly symmetric games, generic 2×2 symmetric games, and 2×2 asymmetric games. A key finding is that, when agents carry mixed strategies, distributional considerations cannot be subsumed under a classical approach such as the deterministic replicator dynamics.
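The dynamics described in the abstract can be illustrated numerically. The sketch below simulates random pairwise matching in a population of agents who each carry a mixed strategy and update it by a Cross-type reinforcement rule after observing their realized payoff. The payoff matrix, population size, step size, and the specific choice of Cross's rule are illustrative assumptions, not details taken from the paper; with a small step size, the evolution of the empirical distribution of mixed strategies approximates the continuous-time limit that the continuity equations describe.

```python
import numpy as np

# Hypothetical illustration: Cross-type reinforcement learning under random
# matching in a generic 2x2 symmetric game.  All parameter values below are
# arbitrary choices for demonstration, not taken from the paper.

rng = np.random.default_rng(0)

A = np.array([[0.6, 0.2],      # row player's payoffs, normalized to [0, 1]
              [0.8, 0.4]])     # (a generic 2x2 symmetric game)

N = 5000                       # population size
T = 200_000                    # number of pairwise matches
step = 0.05                    # small step size approximates the
                               # continuous-time (continuity-equation) limit

# Each agent carries a mixed strategy: the probability of playing action 0.
p = rng.uniform(0.0, 1.0, size=N)

def cross_update(prob, action, payoff):
    """Cross-type reinforcement rule for two actions (payoff assumed in [0, 1])."""
    if action == 0:
        return prob + step * payoff * (1.0 - prob)   # reinforce action 0
    return prob * (1.0 - step * payoff)              # reinforce action 1

for t in range(T):
    i, j = rng.choice(N, size=2, replace=False)      # draw a random pair
    a_i = 0 if rng.random() < p[i] else 1            # each samples a pure action
    a_j = 0 if rng.random() < p[j] else 1
    u_i, u_j = A[a_i, a_j], A[a_j, a_i]              # realized payoffs
    p[i] = cross_update(p[i], a_i, u_i)              # update chosen actions only
    p[j] = cross_update(p[j], a_j, u_j)

# The population state is the empirical distribution of mixed strategies.
hist, _ = np.histogram(p, bins=10, range=(0.0, 1.0), density=True)
print("mean prob. of action 0:", p.mean().round(3))
print("density over [0, 1] in 10 bins:", hist.round(2))
```

Tracking the whole histogram, rather than only its mean, is the point of the distributional approach: two populations with the same mean mixed strategy can evolve differently, which is why the dynamics cannot in general be reduced to the deterministic replicator equation on the mean.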

Cite this paper

@inproceedings{Lahkar2009ReinforcementLI,
  title  = {Reinforcement Learning in Large Population Models: A Continuity Equation Approach},
  author = {Ratul Lahkar and Robert M. Seymour},
  year   = {2009}
}