Portfolio Search and Optimization for General Strategy Game-Playing

Alexander Dockhorn, Jorge Hurtado Grueso, Dominik Jeurissen, Linjie Xu, Diego Perez Liebana
2021 IEEE Congress on Evolutionary Computation (CEC)
Portfolio methods represent a simple but efficient type of action abstraction which has been shown to improve the performance of search-based agents in a range of strategy games. We first review existing portfolio techniques and propose a new algorithm for optimization and action selection based on the Rolling Horizon Evolutionary Algorithm. Moreover, we develop a series of variants that address different aspects of the problem. We further analyze the performance of the discussed agents in a general…
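The action-selection loop the abstract describes can be sketched in miniature. Everything below is a toy stand-in rather than the paper's implementation: the integer "state", the three scripts, and the rollout score are assumptions chosen only to make the rolling-horizon loop over a script portfolio concrete; a real agent would apply game scripts through an actual forward model.

```python
import random

# Toy stand-ins: the "state" is a distance-to-goal integer, and each portfolio
# script is a simple rule mapping state -> next state.
SCRIPTS = {
    "advance": lambda s: s - 1,
    "hold":    lambda s: s,
    "retreat": lambda s: s + 1,
}
NAMES = list(SCRIPTS)

def rollout(state, plan):
    """Simulate a plan (sequence of script names) and score the trajectory."""
    score = 0
    for name in plan:
        state = SCRIPTS[name](state)
        score -= abs(state)  # reward staying close to the goal at every step
    return score

def rhea_portfolio_action(state, horizon=5, pop=8, gens=20, rng=None):
    """Rolling-horizon EA over sequences of portfolio scripts."""
    rng = rng or random.Random(0)
    population = [[rng.choice(NAMES) for _ in range(horizon)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda p: rollout(state, p), reverse=True)
        elite = population[: pop // 2]                         # keep the best half
        children = []
        for parent in elite:
            child = parent[:]
            child[rng.randrange(horizon)] = rng.choice(NAMES)  # point mutation
            children.append(child)
        population = elite + children
    best = max(population, key=lambda p: rollout(state, p))
    return best[0]  # execute only the first script, then replan next turn

chosen = rhea_portfolio_action(state=3)
```

Evolving over script indices rather than raw actions is the action abstraction: the search space per time step shrinks from the full action set to the handful of portfolio scripts.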


Generating Diverse and Competitive Play-Styles for Strategy Games

This paper proposes Portfolio Monte Carlo Tree Search with Progressive Unpruning for playing a turn-based strategy game (Tribes) and shows how it can be parameterized so a quality-diversity algorithm (MAP-Elites) is used to achieve different play-styles while keeping a competitive level of play.
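The quality-diversity step mentioned here can be illustrated with a minimal MAP-Elites loop. The genome, behaviour descriptor, and fitness below are hypothetical stand-ins for the parameterized agent, its play-style features, and its level of play; only the archive-per-behaviour-cell mechanism is the actual MAP-Elites idea.

```python
import random

# Minimal MAP-Elites sketch: keep the fittest genome found in each behaviour
# cell, so the archive holds diverse play-styles at a competitive level.
def map_elites(evals=500, rng=None):
    rng = rng or random.Random(0)
    archive = {}  # behaviour cell -> (fitness, genome)

    def fitness(g):        # stand-in for agent strength (e.g. win rate)
        return -abs(sum(g) - 5)

    def descriptor(g):     # stand-in for binned play-style features
        return (g[0] > 0, g[1] > 0)

    for _ in range(evals):
        if archive and rng.random() < 0.9:
            _, parent = archive[rng.choice(list(archive))]
            genome = [x + rng.gauss(0, 1) for x in parent]    # mutate an elite
        else:
            genome = [rng.uniform(-5, 5) for _ in range(3)]   # random restart
        cell = descriptor(genome)
        f = fitness(genome)
        if cell not in archive or f > archive[cell][0]:
            archive[cell] = (f, genome)   # keep the best genome per cell
    return archive

archive = map_elites()
```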

Towards Applicable State Abstractions: a Preview in Strategy Games

An overview of related studies of state abstraction is given, and strategy games are proposed as a suitable platform for addressing open problems and studying the application of domain-independent state abstraction.

Elastic Monte Carlo Tree Search with State Abstraction for Strategy Game Playing

Elastic MCTS is proposed, an algorithm that uses state abstraction to play strategy games, where the nodes of the tree are clustered dynamically, first grouped together progressively by state abstraction, and then separated when an iteration threshold is reached.
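The clustering mechanism can be sketched as follows. `ElasticStats`, `phi`, and the bucketing below are assumed names chosen for illustration, and the sketch simplifies one point: the real algorithm redistributes the aggregated statistics when clusters are dissolved, whereas this version simply starts fresh per-state entries after the threshold.

```python
# States mapping to the same abstract key share visit counts and value
# estimates until `split_at` iterations have elapsed; after that, statistics
# are kept per concrete state.
class ElasticStats:
    def __init__(self, phi, split_at=100):
        self.phi = phi               # state-abstraction function
        self.split_at = split_at     # iteration threshold for splitting
        self.visits = {}             # key -> (count, total reward)
        self.iteration = 0

    def _key(self, state):
        # Before the threshold, cluster by abstraction; afterwards, be exact.
        return self.phi(state) if self.iteration < self.split_at else state

    def update(self, state, reward):
        self.iteration += 1
        count, total = self.visits.get(self._key(state), (0, 0.0))
        self.visits[self._key(state)] = (count + 1, total + reward)

    def value(self, state):
        count, total = self.visits.get(self._key(state), (0, 0.0))
        return total / count if count else 0.0

# All states in the same bucket of 10 share one node before the split.
stats = ElasticStats(phi=lambda s: s // 10, split_at=100)
stats.update(12, 1.0)
stats.update(15, 0.0)
```

After these two updates, any state in the 10–19 bucket reports the shared mean value of 0.5, which is exactly the statistic-sharing that lets early search iterations generalize across similar states.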

Game State and Action Abstracting Monte Carlo Tree Search for General Strategy Game-Playing

A new variant of Monte Carlo Tree Search which can incorporate action and game state abstractions is proposed and a game state encoding for turn-based strategy games that allows for a flexible abstraction is developed.



Portfolio Greedy Search and Simulation for Large-Scale Combat in StarCraft

This paper presents an efficient system for modelling abstract RTS combat called SparCraft, which can perform millions of unit actions per second and visualize them, and presents a modification of the UCT algorithm capable of performing search in games with simultaneous and durative actions.
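The greedy improvement loop at the heart of Portfolio Greedy Search can be sketched as follows. The two script names are SparCraft-style placeholders and `evaluate()` is a toy stand-in payoff; the actual algorithm scores each candidate assignment with a playout of the combat model, and alternates improving the opponent's assignment as well.

```python
# Toy sketch of Portfolio Greedy Search over per-unit script assignments.
SCRIPTS = ["NOKAV", "Kiter"]

def evaluate(assignment):
    # Stand-in for a combat playout: fixed per-script payoffs plus a small
    # bonus for mixed assignments, just to give the search something to do.
    payoff = {"NOKAV": 1.0, "Kiter": 2.0}
    bonus = 0.5 if len(set(assignment)) > 1 else 0.0
    return sum(payoff[s] for s in assignment) + bonus

def portfolio_greedy_search(n_units, iterations=3):
    assignment = ["NOKAV"] * n_units    # seed: every unit runs a default script
    for _ in range(iterations):
        for u in range(n_units):        # greedily improve one unit at a time
            assignment[u] = max(
                SCRIPTS,
                key=lambda s: evaluate(assignment[:u] + [s] + assignment[u + 1:]),
            )
    return assignment
```

Because each unit chooses only among a few scripts while the others are held fixed, the cost per improvement pass is linear in the number of units, which is what makes the approach viable for large-scale combat.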

STRATEGA: A General Strategy Games Framework

This paper motivates and presents STRATEGA, a general strategy games framework for playing n-player turn-based and real-time strategy games, and presents some sample rule-based agents as well as search-based agents and quantitatively analyses their performance to demonstrate the use of the framework.

The Design Of "Stratega": A General Strategy Games Framework

The framework has been built with a focus on statistical forward planning agents, which have shown great flexibility in general game-playing, but whose performance is limited in the case of complex state and action spaces.

Planning Algorithms for Zero-Sum Games with Exponential Action Spaces: A Unifying Perspective

This paper reviews several planning algorithms developed for zero-sum games with exponential action spaces, and presents a unifying perspective in which several existing algorithms can be described as instantiations of a variant of NaïveMCTS.

Grandmaster level in StarCraft II using multi-agent reinforcement learning

The agent, AlphaStar, is evaluated, which uses a multi-agent reinforcement learning algorithm and has reached Grandmaster level, ranking among the top 0.2% of human players for the real-time strategy game StarCraft II.

General Video Game Artificial Intelligence

Research on general video game playing aims at designing agents or content generators that can perform well in multiple video games, possibly without knowing the game in advance and with little or no prior knowledge of the game.

Nested-Greedy Search for Adversarial Real-Time Games

An idealized algorithm that is guaranteed to return the best action, and an approximation of such an algorithm called Nested-Greedy Search (NGS), are described; empirical results show that NGS is able to outperform PGS as well as state-of-the-art methods in matches played on small to medium-sized maps.

The N-Tuple Bandit Evolutionary Algorithm for Game Agent Optimisation

The N-Tuple Bandit Evolutionary Algorithm is described, an optimisation algorithm developed for noisy and expensive discrete (combinatorial) optimisation problems, which significantly outperforms grid search and an estimation of distribution algorithm.
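A stripped-down sketch of the bandit model, assuming 1-tuple statistics only (the published algorithm also maintains 2-tuple and full n-tuple tables, and differs in other details): candidate parameter settings are scored from per-dimension fitness means plus a UCB exploration bonus, so each expensive evaluation informs many future candidates.

```python
import math
import random

# Minimal NTBEA-style loop. `search_space` lists the allowed values per
# parameter; `fitness` is the (possibly noisy, expensive) objective.
def ntbea(search_space, fitness, evals=200, neighbours=10, k=2.0, rng=None):
    rng = rng or random.Random(1)
    stats = [{v: [0, 0.0] for v in vals} for vals in search_space]  # [count, sum]

    def ucb(point, total):
        # Score a point by summed per-dimension mean plus exploration bonus.
        score = 0.0
        for d, v in enumerate(point):
            count, reward = stats[d][v]
            mean = reward / count if count else 0.0
            score += mean + k * math.sqrt(math.log(total + 1) / (count + 1))
        return score

    current = [rng.choice(vals) for vals in search_space]
    for t in range(evals):
        f = fitness(current)                  # one real evaluation per iteration
        for d, v in enumerate(current):
            stats[d][v][0] += 1
            stats[d][v][1] += f
        # Sample a neighbourhood of single-gene mutants; move to the best by UCB.
        mutants = []
        for _ in range(neighbours):
            m = current[:]
            d = rng.randrange(len(search_space))
            m[d] = rng.choice(search_space[d])
            mutants.append(m)
        current = max(mutants, key=lambda p: ucb(p, t + 1))
    # Recommend, per dimension, the value with the best observed mean.
    return [
        max(vals, key=lambda v: stats[d][v][1] / stats[d][v][0]
            if stats[d][v][0] else float("-inf"))
        for d, vals in enumerate(search_space)
    ]
```

The key property the sketch preserves is that the surrogate (the tuple statistics) is cheap to query, so the neighbourhood can be screened without spending real fitness evaluations.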