• Corpus ID: 227208988

# TStarBot-X: An Open-Sourced and Comprehensive Study for Efficient League Training in StarCraft II Full Game

```bibtex
@article{Han2020TStarBotXAO,
  title={TStarBot-X: An Open-Sourced and Comprehensive Study for Efficient League Training in StarCraft II Full Game},
  author={Lei Han and Jiechao Xiong and Peng Sun and Xinghai Sun and Meng Fang and Qingwei Guo and Qiaobo Chen and Tengfei Shi and Hongsheng Yu and Zhengyou Zhang},
  journal={ArXiv},
  year={2020},
  volume={abs/2011.13729}
}
```
• Published 27 November 2020
• Computer Science
• ArXiv
StarCraft, one of the most difficult esport games with a long-standing history of professional tournaments, has attracted generations of players and fans, and also intense attention in artificial intelligence research. Recently, Google's DeepMind announced AlphaStar, a grandmaster level AI in StarCraft II. In this paper, we introduce a new AI agent, named TStarBot-X, that is trained under limited computation resources and can play competitively with expert human players. TStarBot-X takes…

## Citations

SCC: an efficient deep reinforcement learning agent mastering the game of StarCraft II
• Computer Science
ICML
• 2021
A deep reinforcement learning agent, StarCraft Commander (SCC), is proposed with an order of magnitude less computation, demonstrating top human performance by defeating GrandMaster players in test matches and top professional players in a live event.
Applying supervised and reinforcement learning methods to create neural-network-based agents for playing StarCraft II
A neural network architecture for playing the full two-player match of StarCraft II, trained with general-purpose supervised and reinforcement learning, that can be trained on a single consumer-grade PC with a single GPU and achieves non-trivial performance compared to the in-game scripted bots.
Gym-µRTS: Toward Affordable Full Game Real-time Strategy Games Research with Deep Reinforcement Learning
• Computer Science
2021 IEEE Conference on Games (CoG)
• 2021
Gym-µRTS (pronounced “gym-micro-RTS”) is introduced as a fast-to-run RL environment for full-game RTS research, together with a collection of techniques to scale DRL to play full-game µRTS and ablation studies demonstrating their empirical importance.
Learning Macromanagement in Starcraft by Deep Reinforcement Learning
• Computer Science
Sensors
• 2021
A novel deep RL method, Mean Asynchronous Advantage Actor-Critic (MA3C), which computes the approximate expected policy gradient instead of the gradient of the sampled action to reduce the variance of the gradient, and encodes the history queue with a recurrent neural network to tackle the problem of imperfect information.
Exploration in Deep Reinforcement Learning: A Comprehensive Survey
• Computer Science
ArXiv
• 2021
A comprehensive and unified empirical comparison of different exploration methods for DRL on a set of commonly used benchmarks, which summarizes the open problems of exploration in DRL and deep MARL and points out a few future directions.
Diverse Auto-Curriculum is Critical for Successful Real-World Multiagent Learning Systems
• Computer Science
AAMAS
• 2021
It is argued that behavioural diversity is a pivotal, yet under-explored, component for real-world multiagent learning systems, and that significant work remains in understanding how to design a diversity-aware auto-curriculum.
On games and simulators as a platform for development of artificial intelligence for command and control
• Computer Science
The Journal of Defense Modeling and Simulation: Applications, Methodology, Technology
• 2022
Past and current efforts on how games and simulators have been adapted to simulate certain aspects of military missions and how they might impact the future battlefield are discussed.
Game State and Action Abstracting Monte Carlo Tree Search for General Strategy Game-Playing
• Computer Science
2021 IEEE Conference on Games (CoG)
• 2021
A new variant of Monte Carlo Tree Search which can incorporate action and game state abstractions is proposed and a game state encoding for turn-based strategy games that allows for a flexible abstraction is developed.
Rethinking of AlphaStar
A different view is presented of AlphaStar (AS), the program achieving Grandmaster level in StarCraft II, based on a reproduction of the AS code, highlighting defects of the system and important details that were neglected in its article.

## References

Showing 1-10 of 29 references
TStarBots: Defeating the Cheating Level Builtin AI in StarCraft II in the Full Game
• Computer Science
ArXiv
• 2018
This is the first public work to investigate AI agents that can defeat the built-in AI in the StarCraft II full game; the AI agent TStarBot1 is based on deep reinforcement learning over a flat action structure, and the AI agent TStarBot2 is based on hard-coded rules over a hierarchical action structure.
Grandmaster level in StarCraft II using multi-agent reinforcement learning
• Computer Science
Nature
• 2019
The agent, AlphaStar, is evaluated, which uses a multi-agent reinforcement learning algorithm and has reached Grandmaster level, ranking among the top 0.2% of human players for the real-time strategy game StarCraft II.
StarCraft II: A New Challenge for Reinforcement Learning
• Computer Science
ArXiv
• 2017
This paper introduces SC2LE (StarCraft II Learning Environment), a reinforcement learning environment based on the StarCraft II game that offers a new and challenging environment for exploring deep reinforcement learning algorithms and architectures and gives initial baseline results for neural networks trained from this data to predict game outcomes and player actions.
Mastering the game of Go without human knowledge
• Computer Science
Nature
• 2017
An algorithm based solely on reinforcement learning is introduced, without human data, guidance or domain knowledge beyond game rules, that achieves superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.
Open-ended Learning in Symmetric Zero-sum Games
• Economics
ICML
• 2019
A geometric framework for formulating agent objectives in zero-sum games is introduced, and a new algorithm (rectified Nash response, PSRO_rN) is developed that uses game-theoretic niching to construct diverse populations of effective agents, producing a stronger set of agents than existing algorithms.
Mastering the game of Go with deep neural networks and tree search
• Computer Science
Nature
• 2016
Using this search algorithm, the program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0, the first time that a computer program has defeated a human professional player in the full-sized game of Go.
Episodic Exploration for Deep Deterministic Policies: An Application to StarCraft Micromanagement Tasks
• Computer Science
ArXiv
• 2016
A heuristic reinforcement learning algorithm which combines direct exploration in the policy space and backpropagation, and allows for the collection of traces for learning using deterministic policies, which appears much more efficient than, for example, ε-greedy exploration.
TLeague: A Framework for Competitive Self-Play based Distributed Multi-Agent Reinforcement Learning
• Computer Science
ArXiv
• 2020
A framework, referred to as TLeague, that aims at large-scale training and implements several mainstream CSP-MARL algorithms, achieving high throughput and reasonable scale-up in distributed training.
StarCraft II Build Order Optimization using Deep Reinforcement Learning and Monte-Carlo Tree Search
• Computer Science
ArXiv
• 2020
The experimental results show that Monte-Carlo Tree Search achieves a score similar to that of a novice human player using only very limited time and computational resources, paving the way toward scores comparable to those of a human expert by combining it with deep reinforcement learning.
Hierarchical Reinforcement Learning in StarCraft II with Human Expertise in Subgoals Selection
• Computer Science
ArXiv
• 2020
Experimental results in two StarCraft II (SC2) minigames demonstrate that the proposed new method can achieve better sample efficiency than flat and end-to-end RL methods, and provides an effective method for explaining the agent's performance.