Regional Multi-Armed Bandits

@inproceedings{Wang2018RegionalMB,
  title={Regional Multi-Armed Bandits},
  author={Zhiyang Wang and Ruida Zhou and Cong Shen},
  booktitle={AISTATS},
  year={2018}
}
We consider a variant of the classic multi-armed bandit problem where the expected reward of each arm is a function of an unknown parameter. The arms are divided into different groups, each of which has a common parameter. Therefore, when the player selects an arm at each time slot, information about the other arms in the same group is also revealed. This regional bandit model naturally bridges the non-informative bandit setting, where the player can only learn about the chosen arm, and the global bandit…
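The model above can be illustrated with a minimal sketch. Everything here is an assumption for illustration, not the paper's algorithm: we take Bernoulli rewards, suppose arm k in a group has mean reward mu(theta, k) = theta ** k (a known function that is invertible in theta on (0, 1)), and estimate a group's shared parameter by pooling the observations of all its arms, which is exactly how pulling one arm reveals information about the others in its group.

```python
import random

# Illustrative assumption (not from the paper): arm k in a group has mean
# reward mu(theta, k) = theta ** k, which is invertible in theta on (0, 1).
def mean_reward(theta, k):
    return theta ** k

class RegionalBandit:
    """Arms are grouped; each group g shares one unknown parameter theta_g."""
    def __init__(self, thetas, arms_per_group, seed=0):
        self.thetas = list(thetas)
        self.arms_per_group = arms_per_group
        self.rng = random.Random(seed)

    def pull(self, g, k):
        # Bernoulli reward with mean mu(theta_g, k); arms are k = 1..arms_per_group.
        return 1.0 if self.rng.random() < mean_reward(self.thetas[g], k) else 0.0

def estimate_theta(counts, sums, arms_per_group):
    """Estimate one group's theta by inverting each arm's empirical mean
    (m ** (1/k)) and averaging the inverses, weighted by pull counts.
    A pull of any arm in the group thus sharpens the estimate for all arms."""
    total, acc = 0, 0.0
    for k in range(1, arms_per_group + 1):
        n = counts[k]
        if n == 0:
            continue
        m = min(max(sums[k] / n, 1e-6), 1.0)  # clip so the inverse is defined
        acc += n * (m ** (1.0 / k))
        total += n
    return acc / total if total else 0.5
```

A player built on this sketch would, at each step, compute `mean_reward(theta_hat_g, k)` for every arm from the current group estimates and select the maximizer, optionally with a UCB-style exploration bonus on each group's parameter estimate.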

