Extending Q-Learning to General Adaptive Multi-Agent Systems

@inproceedings{Tesauro2003ExtendingQT,
  title={Extending Q-Learning to General Adaptive Multi-Agent Systems},
  author={Gerald Tesauro},
  booktitle={NIPS},
  year={2003}
}
Recent multi-agent extensions of Q-Learning require knowledge of other agents’ payoffs and Q-functions, and assume game-theoretic play at all times by all other agents. This paper proposes a fundamentally different approach, dubbed “Hyper-Q” Learning, in which values of mixed strategies rather than base actions are learned, and in which other agents’ strategies are estimated from observed actions via Bayesian inference. Hyper-Q may be effective against many different types of adaptive agents…
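The two ingredients the abstract names — learning values of mixed strategies rather than base actions, and estimating the opponent's strategy from observed actions via Bayesian inference — can be illustrated with a minimal sketch. This is not the paper's exact algorithm: the strategy grid, the matching-pennies payoff, the Dirichlet prior, and all names below are illustrative assumptions.

```python
import random

# Illustrative Hyper-Q-style sketch (assumptions, not the paper's method):
# - own MIXED strategies, discretized on a grid, are what gets valued
# - the opponent's mixed strategy is a Bayesian (Dirichlet) posterior
#   mean over observed actions, also snapped to the grid
GRID = [i / 10 for i in range(11)]  # candidate values of P(action = 0)

def nearest(p):
    """Snap a probability to the discretized strategy grid."""
    return min(GRID, key=lambda g: abs(g - p))

class HyperQSketch:
    def __init__(self, alpha=0.1, gamma=0.9):
        self.alpha, self.gamma = alpha, gamma
        self.q = {}               # (opp_estimate, own_strategy) -> value
        self.counts = [1.0, 1.0]  # Dirichlet(1, 1) prior over opponent actions

    def opp_estimate(self):
        """Posterior mean of the opponent's P(action = 0), on the grid."""
        return nearest(self.counts[0] / sum(self.counts))

    def choose(self, eps=0.2):
        """Epsilon-greedy choice over mixed strategies, not base actions."""
        s = self.opp_estimate()
        if random.random() < eps:
            return random.choice(GRID)
        return max(GRID, key=lambda x: self.q.get((s, x), 0.0))

    def update(self, own_strategy, opp_action, reward):
        s = self.opp_estimate()
        self.counts[opp_action] += 1.0  # Bayesian update from the observed action
        s_next = self.opp_estimate()
        best_next = max(self.q.get((s_next, x), 0.0) for x in GRID)
        key = (s, own_strategy)
        old = self.q.get(key, 0.0)
        self.q[key] = old + self.alpha * (reward + self.gamma * best_next - old)

# Train against a fixed (non-adaptive) opponent playing action 0 w.p. 0.7
# in matching pennies, where the learner is rewarded for matching:
random.seed(0)
agent = HyperQSketch()
for _ in range(2000):
    x = agent.choose()
    own_a = 0 if random.random() < x else 1
    opp_a = 0 if random.random() < 0.7 else 1
    agent.update(x, opp_a, 1.0 if own_a == opp_a else -1.0)

print(agent.opp_estimate())  # posterior estimate should settle near 0.7
```

Against a stationary opponent the Bayesian estimate converges to the true mixing probability, and the Q-table then ranks the learner's own mixed strategies conditioned on that estimate; the paper's point is that this same machinery can track many different kinds of adaptive opponents.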

Citations


[Chart omitted: citations per year, 2006–2018]
Semantic Scholar estimates that this publication has 313 citations based on the available data.

