Corpus ID: 233715128

Evolving Evaluation Functions for Collectible Card Game AI

@article{Miernik2021EvolvingEF,
  title={Evolving Evaluation Functions for Collectible Card Game AI},
  author={Radoslaw Miernik and J. Kowalski},
  journal={ArXiv},
  year={2021},
  volume={abs/2105.01115}
}
In this work, we presented a study of two important aspects of evolving feature-based game evaluation functions: the choice of genome representation and the choice of opponent used to test the model. We compared three representations: one simpler and more limited, based on a vector of weights used in a linear combination of predefined game features, and two more complex, based on binary and n-ary trees. On top of this, we also investigated the influence of fitness defined…
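The simpler, weight-vector representation described in the abstract can be illustrated with a minimal sketch: a genome is just one weight per predefined game feature, and a state's score is their linear combination. The feature names and values below are hypothetical illustrations, not taken from the paper.

```python
import random

# Hypothetical feature names for a collectible card game state.
FEATURES = ["own_health", "enemy_health", "own_board_power", "cards_in_hand"]

def random_genome(n_features=len(FEATURES)):
    """A genome is a flat weight vector, one weight per feature."""
    return [random.uniform(-1.0, 1.0) for _ in range(n_features)]

def evaluate(genome, feature_values):
    """Score a game state as a linear combination of its features."""
    return sum(w * f for w, f in zip(genome, feature_values))

# Scoring a hypothetical state with a hand-picked genome:
genome = [0.5, -0.5, 1.0, 0.25]
state = [30.0, 20.0, 7.0, 4.0]
score = evaluate(genome, state)  # 0.5*30 - 0.5*20 + 1.0*7 + 0.25*4 = 13.0
```

In an evolutionary setup, a population of such weight vectors would be mutated and recombined, with fitness estimated by playing games against a fixed or co-evolving opponent, as studied in the paper.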


References

Showing 1-10 of 39 references
Evolutionary Approach to Collectible Card Game Arena Deckbuilding using Active Genes
A variant of an evolutionary algorithm is proposed that uses the concept of an active gene to restrict operators to generation-specific subsequences of the genotype; some of the introduced active-gene algorithms learn faster and produce statistically better draft policies than the compared methods.
A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play
This paper generalizes the AlphaZero approach into a single algorithm that achieves superhuman performance in many challenging games, convincingly defeating world-champion programs in chess, shogi (Japanese chess), and Go.
The Many AI Challenges of Hearthstone
This article analyzes the popular collectible card game Hearthstone, published by Blizzard in 2014, and describes the varied set of interesting AI challenges it poses, uncovering a few new variations on existing research topics.
Optimizing Hearthstone agents using an evolutionary algorithm
Evolutionary algorithms are used to develop agents that play the card game Hearthstone by optimizing a data-driven decision-making mechanism that accounts for all elements currently in play, showing that evolutionary computation can be a considerable advantage in developing AI for collectible card games.
Efficient Heuristic Policy Optimisation for a Challenging Strategic Card Game
Results indicate that the N-Tuple Bandit Evolutionary Algorithm can effectively tune the heuristic function's parameters to improve the agent's performance.
Exploring the hearthstone deck space
Focusing on deckbuilding, four experiments computationally explore the design space of Hearthstone and suggest that an Evolution Strategy can find decks that convincingly beat other decks available in the game while also exhibiting some generality.
Drafting in Collectible Card Games via Reinforcement Learning
A deep reinforcement learning approach to deck building in arena mode, an understudied game mode present in many collectible card games.
Rolling horizon evolution versus tree search for navigation in single-player real-time games
This paper introduces a rolling-horizon version of a simple evolutionary algorithm that handles macro-actions and compares it against Monte Carlo Tree Search (MCTS), an approach known to perform well in practice, as well as random search.
Evolving both search and strategy for Reversi players using genetic programming
It is shown that applying genetic programming to the zero-sum, deterministic, full-knowledge board game of Reversi regularly produces highly competent players, and the results prove easy to scale.
Evolving board-game players with genetic programming
This work extends previous results in evolving board-state evaluation functions for Lose Checkers to a 10x10 variant of Checkers, as well as Reversi, implementing strongly typed GP trees, explicitly defined introns, and a selective directional crossover method.