Co-evolving Real-Time Strategy Game Micro

@article{adhikari2018coevolving,
  title={Co-evolving Real-Time Strategy Game Micro},
  author={Navin K. Adhikari and Sushil J. Louis and Siming Liu and Walker Spurgeon},
  journal={2018 IEEE Symposium Series on Computational Intelligence (SSCI)},
  year={2018}
}
We investigate competitive co-evolution of unit micromanagement in real-time strategy games. Although good long-term macro-strategy and good short-term unit micromanagement both affect performance in real-time strategy games, this paper focuses on generating quality micro. Better micro can, for example, help players win skirmishes and battles even when outnumbered. Prior work has shown that we can evolve micro to beat a given opponent. We remove the need for a good opponent to evolve against by…
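As a rough illustration of the competitive co-evolution loop described above, the sketch below co-evolves two populations on a toy zero-sum game, each individual's fitness coming from play against the rival population. The scalar genome, the `play` function, and all parameter values are invented stand-ins, not the paper's micro representation.

```python
import random

def play(a, b):
    """Toy zero-sum skirmish: the higher scalar genome wins."""
    return 1 if a > b else 0

def coevolve(generations=50, pop_size=10, seed=0):
    rng = random.Random(seed)
    red = [rng.uniform(0, 1) for _ in range(pop_size)]
    blue = [rng.uniform(0, 1) for _ in range(pop_size)]

    def step(pop, foes):
        # Fitness = number of wins against the rival population.
        ranked = sorted(pop, key=lambda g: sum(play(g, f) for f in foes), reverse=True)
        elite = ranked[: pop_size // 2]
        # Refill with mutated copies of the elite, clamped to [0, 1].
        mutants = [min(1.0, max(0.0, rng.choice(elite) + rng.gauss(0, 0.05)))
                   for _ in range(pop_size - len(elite))]
        return elite + mutants

    for _ in range(generations):
        # Both populations update simultaneously against each other.
        red, blue = step(red, blue), step(blue, red)
    return red, blue
```

Because each side's fitness is defined only relative to the other side, neither population needs a hand-coded opponent; the arms race itself supplies increasingly strong adversaries.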
Multi-objective cooperative co-evolution of micro for RTS games
Indicates the potential of a multi-objective, cooperative co-evolutionary algorithm to evolve control tactics for groups composed of multiple unit types in real-time strategy games, with applications in multi-agent control, robotics, and other heterogeneous-system control problems.
Comparing Three Approaches to Micro in RTS Games
Compares three promising approaches to micromanaging units in real-time strategy games using a two-objective Pareto-optimal fitness function that maximizes damage done to opponent units while minimizing damage received by friendly units.
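The Pareto comparison underlying such a two-objective fitness function can be sketched directly; here each candidate micro behavior is scored as a `(damage_done, damage_received)` pair (hypothetical values, not the papers' data), and the non-dominated set is extracted:

```python
def dominates(a, b):
    """True if behavior a Pareto-dominates b, where each is a
    (damage_done, damage_received) pair: maximize damage done,
    minimize damage received."""
    done_a, recv_a = a
    done_b, recv_b = b
    no_worse = done_a >= done_b and recv_a <= recv_b
    strictly_better = done_a > done_b or recv_a < recv_b
    return no_worse and strictly_better

def pareto_front(candidates):
    """Candidates not dominated by any other candidate."""
    return [c for c in candidates if not any(dominates(o, c) for o in candidates)]
```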
Coevolutionary Algorithm for Evolving Competitive Strategies in the Weapon Target Assignment Problem
This paper considers a non-cooperative real-time strategy game between two teams, each of which has multiple homogeneous players with identical capabilities. In particular, the first team consists of multiple…
Classical Formation Patterns and Flanking Strategies as a Result of Utility Maximization
In this letter, we show how classical tactical formation patterns and flanking strategies, such as the line formation and the enveloping maneuver, can be seen as the result of maximizing a natural…


Evolving Effective Microbehaviors in Real-Time Strategy Games
Uses influence maps and potential fields as a base representation to evolve short-term positioning and movement tactics, arguing that this representation and approach can yield effective micro performance against melee and ranged opponents and provide a viable path toward complete RTS bots.
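A minimal version of the influence-map half of such a representation, assuming a grid world where each unit's influence decays with distance; the decay shape and the `safest_cell` heuristic are illustrative choices, not the paper's evolved parameters:

```python
import math

def influence_map(width, height, units):
    """units: list of (x, y, strength); strength > 0 for friendly units,
    < 0 for enemies. Influence decays with distance from each unit."""
    grid = [[0.0] * width for _ in range(height)]
    for ux, uy, s in units:
        for y in range(height):
            for x in range(width):
                d = math.hypot(x - ux, y - uy)
                grid[y][x] += s / (1.0 + d)
    return grid

def safest_cell(grid):
    """Cell with the highest net (friendly-positive) influence."""
    best = max((v, x, y) for y, row in enumerate(grid) for x, v in enumerate(row))
    return best[1], best[2]
```

Positioning tactics can then be phrased as moves toward high-influence (safe) or contested (frontline) cells.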
Towards Intelligent Team Composition and Maneuvering in Real-Time Strategy Games
Demonstrates that well-known computational intelligence techniques, applied in an original way, work well separately and also combine naturally, leading to improved and flexible group behavior.
Co-Evolving Influence Map Tree Based Strategy Game Players
Players are encoded within the individuals of a genetic algorithm and co-evolved against each other, with results showing the production of strategies that are innovative, robust, and capable of defeating a suite of hand-coded opponents.
Intelligent moving of groups in real-time strategy games
This paper investigates the intelligent movement and path-finding of groups in real-time strategy (RTS) games, exemplified by the open-source game Glest, and combines flocking with influence maps (IMs) to find safe paths for the flock in real time.
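A toy version of the flocking-plus-influence-map idea: each group member greedily trades off cohesion (distance to the flock centroid) against the influence-map value of nearby cells. The 4-neighbour move set and the additive scoring rule are illustrative assumptions, not the paper's method.

```python
def flock_step(positions, safety):
    """One flocking step on a grid. positions: list of (x, y) agent cells;
    safety: dict {(x, y): influence value}, higher meaning safer.
    Each agent picks the candidate cell that best balances safety
    against staying close to the flock centroid."""
    cx = sum(x for x, _ in positions) / len(positions)
    cy = sum(y for _, y in positions) / len(positions)
    moved = []
    for x, y in positions:
        # Candidates: stay put, or step one cell in a cardinal direction.
        cands = [(x, y), (x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        # Score = safety value minus Manhattan distance to centroid (cohesion).
        def score(c):
            return safety.get(c, 0.0) - abs(c[0] - cx) - abs(c[1] - cy)
        moved.append(max(cands, key=score))
    return moved
```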
Comparing coevolution, genetic algorithms, and hill-climbers for finding real-time strategy game plans
The results show that coevolved strategies win or tie against hill-climber and genetic algorithm strategies eighty percent of the time but routinely lose to the three hand-coded baselines, which provide a quantitative reference point for comparison with other strategy-search algorithms.
Coevolving influence maps for spatial team tactics in a RTS game
Presents a unique influence-map representation together with a coevolutionary technique that evolves the maps jointly for a group of entities, enabling autonomous entities that move in a coordinated manner.
Applying reinforcement learning to small scale combat in the real-time strategy game StarCraft:Broodwar
  • S. Wender, I. Watson
  • Computer Science
    2012 IEEE Conference on Computational Intelligence and Games (CIG)
  • 2012
Evaluates the suitability of reinforcement learning algorithms for micromanaging combat units in the commercial real-time strategy (RTS) game StarCraft: Broodwar, finding that one-step Q-learning and Sarsa learn the task best.
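One-step Q-learning, which that study found among the most effective, reduces to a single tabular update per decision. The toy two-state skirmish below (the states, actions, and rewards are invented for illustration, not taken from the StarCraft task) shows the mechanics:

```python
import random
from collections import defaultdict

def q_learn(episodes=500, alpha=0.5, eps=0.1, seed=0):
    """Tabular one-step Q-learning on a toy combat decision: when 'hurt',
    retreating pays off; when 'healthy', attacking does. Each episode is a
    single decision, so the update has no bootstrap term."""
    rng = random.Random(seed)
    actions = ["attack", "retreat"]
    reward = {("healthy", "attack"): 1.0, ("healthy", "retreat"): 0.0,
              ("hurt", "attack"): -1.0, ("hurt", "retreat"): 0.5}
    Q = defaultdict(float)
    for _ in range(episodes):
        s = rng.choice(["healthy", "hurt"])
        # Epsilon-greedy action selection.
        if rng.random() < eps:
            a = rng.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        # One-step temporal-difference update toward the observed reward.
        Q[(s, a)] += alpha * (reward[(s, a)] - Q[(s, a)])
    return Q
```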
Using multi-agent potential fields in real-time strategy games
This work presents a multi-agent potential-field-based bot architecture, evaluated in a real-time strategy game setting and compared with other state-of-the-art solutions both in terms of performance and in terms of softer attributes such as configurability.
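The core of a potential-field controller is a per-agent force summation. The sketch below assumes linear attraction to a goal and inverse-square repulsion from threats; the gain constants `k_att` and `k_rep` are arbitrary illustrative values rather than the architecture's tuned fields.

```python
import math

def potential_force(pos, goal, threats, k_att=1.0, k_rep=2.0):
    """Net 2-D force on a unit: linear attraction toward the goal plus
    inverse-square repulsion away from each threat. Returns (fx, fy)."""
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for tx, ty in threats:
        dx, dy = pos[0] - tx, pos[1] - ty
        d = math.hypot(dx, dy) or 1e-9  # avoid division by zero at the threat
        fx += k_rep * dx / d ** 3
        fy += k_rep * dy / d ** 3
    return fx, fy
```

Each tick, a unit simply moves along its net force vector, so group behaviors such as surrounding or kiting emerge without explicit coordination.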
Fast Heuristic Search for RTS Game Combat Scenarios
Presents a fast search method, Alpha-Beta search for durative moves, that can defeat commonly used AI scripts in RTS game combat scenarios of up to 8 vs. 8 units while running on a single core in under 5 ms per search episode.
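For reference, classic alpha-beta over an explicit game tree looks as follows; the paper's actual contribution, handling durative (multi-tick) moves where the two players may act at different times, is not shown in this minimal sketch.

```python
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Minimal alpha-beta over an explicit game tree: a node is either a dict
    of child nodes keyed by move, or a numeric leaf evaluation."""
    if not isinstance(node, dict):  # leaf: static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node.values():
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:  # beta cutoff: opponent will avoid this line
                break
        return value
    value = float("inf")
    for child in node.values():
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:  # alpha cutoff
            break
    return value
```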
A multi-objective genetic algorithm for simulating optimal fights in StarCraft II
  • J. Schmitt, H. Köstler
  • Computer Science
    2016 IEEE Conference on Computational Intelligence and Games (CIG)
  • 2016
Develops a multi-objective genetic algorithm for simulating optimal fights between arbitrary units in the real-time strategy game StarCraft II, together with a general behavior model that controls units in an optimal way based on a number of real-valued parameters.
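The real-valued behavior-parameter idea can be sketched with a simple truncation-selection evolutionary loop. Note this is a single-objective simplification of the paper's multi-objective GA, and the target vector and fitness function are invented purely for illustration.

```python
import random

def evolve_params(fitness, n_params=4, pop_size=20, generations=100, seed=1):
    """Evolve a real-valued behavior-parameter vector: keep the fitter half
    each generation and refill with Gaussian-mutated copies of survivors."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = [[g + rng.gauss(0, 0.1) for g in rng.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

# Invented target: the parameter vector an "optimal" fight would need.
target = [0.5, -0.2, 0.8, 0.1]
best = evolve_params(lambda p: -sum((a - b) ** 2 for a, b in zip(p, target)))
```

In the multi-objective setting, the scalar `fitness` ranking would be replaced by non-dominated sorting over several combat objectives.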