Multi-armed Bandit Algorithms and Empirical Evaluation

@inproceedings{Vermorel2005MultiarmedBA,
  title={Multi-armed Bandit Algorithms and Empirical Evaluation},
  author={Joann{\`e}s Vermorel and Mehryar Mohri},
  booktitle={ECML},
  year={2005}
}
The multi-armed bandit problem for a gambler is to decide which arm of a K-slot machine to pull to maximize his total reward in a series of trials. Many real-world learning and optimization problems can be modeled in this way. Several strategies or algorithms have been proposed as a solution to this problem in the last two decades, but, to our knowledge, there has been no common evaluation of these algorithms. This paper provides a preliminary empirical evaluation of several multi-armed bandit…
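The paper compares several bandit strategies empirically. As an illustration of the setting only, the sketch below shows a minimal ε-greedy strategy, one of the simplest approaches commonly included in such comparisons; it is not taken from the paper, and all names and parameters here are assumptions.

```python
import random

def epsilon_greedy(reward_fns, n_trials=1000, epsilon=0.1):
    """Minimal epsilon-greedy bandit sketch (illustrative; not the authors' implementation).

    reward_fns: list of K callables, each returning a stochastic reward when its arm is pulled.
    """
    k = len(reward_fns)
    counts = [0] * k      # number of pulls per arm
    means = [0.0] * k     # running mean reward per arm
    total = 0.0
    for _ in range(n_trials):
        if random.random() < epsilon:
            arm = random.randrange(k)                        # explore: pick a random arm
        else:
            arm = max(range(k), key=lambda i: means[i])      # exploit: pick best estimated arm
        r = reward_fns[arm]()
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]         # incremental mean update
        total += r
    return total, means

# Usage example: two Bernoulli arms with success probabilities 0.3 and 0.7.
if __name__ == "__main__":
    arms = [lambda: float(random.random() < 0.3),
            lambda: float(random.random() < 0.7)]
    total_reward, estimates = epsilon_greedy(arms, n_trials=10_000, epsilon=0.1)
    print(total_reward, estimates)
```

A fixed ε trades off exploration and exploitation crudely; the strategies evaluated in the paper differ mainly in how they manage this trade-off.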
Highly Influential: this paper has highly influenced 26 other papers.
Highly Cited: this paper has 327 citations.

1 Figure or Table


Statistics

[Chart: Citations per Year, 2005–2018; 327 citations total]

Semantic Scholar estimates that this publication has 327 citations based on the available data.
