Playing Tetris Using Bandit-Based Monte-Carlo Planning

@inproceedings{Cai2011PlayingTU,
  title={Playing Tetris Using Bandit-Based Monte-Carlo Planning},
  author={Zhongjie Cai and Dapeng Zhang and Bernhard Nebel},
  year={2011}
}
Tetris is a stochastic, open-ended board game. Existing artificial Tetris players often use hand-tuned evaluation functions and plan only one or two pieces in advance. In this paper, we develop an artificial player for Tetris using the bandit-based Monte-Carlo planning method (UCT). In Tetris, game states are often revisited, but UCT does not retain information about states explored in previous planning episodes. We created a method to store such information for our player…
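The abstract names two ideas: UCT's bandit-based action selection (the UCB1 rule) and caching per-state statistics so they survive across planning episodes. A minimal sketch of both, assuming a hashable game-state representation; all function and variable names here are illustrative, not taken from the paper:

```python
import math

# Statistics kept per game state, shared across planning episodes
# (a transposition-table-style cache, in the spirit of the abstract).
state_stats = {}  # state (hashable) -> {"visits": int, "value": float}

def get_stats(state):
    """Fetch cached statistics for a state, creating them on first visit."""
    return state_stats.setdefault(state, {"visits": 0, "value": 0.0})

def ucb1(state, parent_visits, c=1.41):
    """UCB1 score: mean reward plus an exploration bonus.
    Unvisited states get infinite priority so each child is tried once."""
    s = get_stats(state)
    if s["visits"] == 0:
        return math.inf
    return s["value"] / s["visits"] + c * math.sqrt(
        math.log(parent_visits) / s["visits"]
    )

def select_child(child_states, parent_visits):
    """UCT selection step: pick the successor state with the best UCB1 score."""
    return max(child_states, key=lambda st: ucb1(st, parent_visits))

def backup(path, reward):
    """Propagate a simulation result up the path of visited states."""
    for state in path:
        s = get_stats(state)
        s["visits"] += 1
        s["value"] += reward
```

Because the cache is keyed by state rather than by tree node, a state revisited in a later planning episode starts with its accumulated statistics instead of from scratch.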

Citations

Publications citing this paper.

Monster Carlo: An MCTS-based Framework for Machine Playtesting Unity Games

2018 IEEE Conference on Computational Intelligence and Games (CIG) • 2018

Solving the Physical Traveling Salesman Problem: Tree Search and Macro Actions

IEEE Transactions on Computational Intelligence and AI in Games • 2014

Monte Carlo Tree Search: Long-term versus short-term planning

2012 IEEE Conference on Computational Intelligence and Games (CIG) • 2012

References

Publications referenced by this paper.

Monte Carlo Go Using Previous Simulation Results

2010 International Conference on Technologies and Applications of Artificial Intelligence • 2010

Monte-Carlo tree search in Backgammon

François Van Lishout, Guillaume Chaslot, Jos W.H.M. Uiterwijk
2007

Learning Tetris using the noisy cross-entropy method

István Szita, András Lőrincz
Neural Computation • 2006

Some studies in machine learning using the game of checkers

IBM Journal of Research and Development • 2000
