ViZDoom Competitions: Playing Doom From Pixels

@article{Wydmuch2019ViZDoomCP,
  title={ViZDoom Competitions: Playing Doom From Pixels},
  author={Marek Wydmuch and Michal Kempka and Wojciech Jaśkowski},
  journal={IEEE Transactions on Games},
  year={2019},
  volume={11},
  pages={248-259}
}
This paper presents the first two editions of the Visual Doom AI Competition, held in 2016 and 2017.

Key Result
The results of the competitions lead to the conclusion that, although reinforcement learning can produce capable Doom bots, they are not yet able to successfully compete against humans in this game.

Towards an AI playing Touhou from pixels: a dataset for real-time semantic segmentation

TLDR
This paper proposes building a semantic segmentation model as a bridge to AIs that read the game's internal state, and creates a dataset to train models for this task; the resulting models achieve high classification performance on the validation set.
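
As a purely illustrative sketch (not the paper's actual model or label set), a per-pixel classifier for game frames can be as small as a few convolutional layers; the class set below (background, player, enemy, bullet) is assumed for illustration:

    # Minimal sketch: a tiny fully-convolutional network that assigns one of a
    # few hypothetical classes to every pixel of an RGB game frame.
    import torch
    import torch.nn as nn

    class TinySegNet(nn.Module):
        def __init__(self, n_classes=4):   # e.g. background, player, enemy, bullet
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(16, n_classes, 4, stride=2, padding=1),
            )

        def forward(self, x):                      # x: (N, 3, H, W) frame
            return self.decoder(self.encoder(x))   # (N, n_classes, H, W) logits

    frame = torch.rand(1, 3, 224, 288)             # dummy RGB frame
    per_pixel_labels = TinySegNet()(frame).argmax(dim=1)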

Deep Reinforcement Learning for Navigation in AAA Video Games

TLDR
This work proposes to use Deep Reinforcement Learning (Deep RL) to learn how to navigate 3D maps using any navigation ability, and finds that this approach performs surprisingly well, achieving at least a 90% success rate on all tested scenarios.

Game AI Competitions: Motivation for the Imitation Game-Playing Competition

  • M. Swiechowski
  • Computer Science
    2020 15th Conference on Computer Science and Information Systems (FedCSIS)
  • 2020
TLDR
The goal is to create computer players that can learn and mimic the behavior of particular human players given access to their game records, and to motivate the usefulness of such an approach in various respects.

Rotation, Translation, and Cropping for Zero-Shot Generalization

TLDR
It is shown that cropped, translated, and rotated observations can yield better generalization on unseen levels of two-dimensional arcade games from the GVGAI framework, and the work explores how rotation, cropping, and translation could increase generality.
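
For intuition, a hedged sketch of the kinds of observation transforms studied (crop, translate, rotate) applied to a 2D frame with NumPy; all parameter values below are arbitrary examples, not the paper's settings:

    import numpy as np

    def transform_observation(obs, crop=8, shift=(2, -3), rotations=1):
        """obs: (H, W, C) frame; crop/shift/rotations are illustrative values."""
        h, w, _ = obs.shape
        obs = obs[crop:h - crop, crop:w - crop]        # crop a border
        obs = np.roll(obs, shift, axis=(0, 1))         # translate (with wrap-around)
        obs = np.rot90(obs, k=rotations, axes=(0, 1))  # rotate in 90-degree steps
        return obs

    augmented = transform_observation(np.zeros((84, 84, 3), dtype=np.uint8))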

Deep Q-Network for AI Soccer

TLDR
This work applies the well-known Deep Q-Network reinforcement learning algorithm to the AI Soccer game, successfully trains the agents, and preliminarily demonstrates their performance through a mini-competition against 10 teams wishing to take part in the AI Soccer international competition.
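
As background, the core Deep Q-Network update that such work builds on can be sketched as follows (a generic formulation, not the authors' code): the regression target for Q(s, a) is r + gamma * max over a' of Q_target(s', a') for non-terminal transitions.

    import torch
    import torch.nn.functional as F

    def dqn_loss(q_net, target_net, batch, gamma=0.99):
        s, a, r, s_next, done = batch                  # sampled from a replay buffer
        q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
        with torch.no_grad():                          # frozen target network
            target = r + gamma * target_net(s_next).max(dim=1).values * (1 - done)
        return F.smooth_l1_loss(q_sa, target)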

Modified PPO-RND Method for Solving Sparse Reward Problem in ViZDoom

TLDR
A time penalty method and a modified neural network construction method are proposed in this study; the experimental results demonstrate that adding the time penalty improved the learning rate by 40% compared to methods without it.
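
The general idea of a time penalty can be sketched in one line (the penalty value and the shape of the curiosity bonus below are placeholders, not the paper's exact formulation): subtracting a small constant every step pushes the agent to finish sparse-reward ViZDoom tasks quickly.

    def shaped_reward(extrinsic_reward, rnd_bonus, step_penalty=0.01):
        # total reward = environment reward + curiosity (RND) bonus - time penalty
        return extrinsic_reward + rnd_bonus - step_penalty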

Benchmarking End-to-End Behavioural Cloning on Video Games

TLDR
The results show that these agents cannot match humans in raw performance but do learn basic dynamics and rules; it is also demonstrated how the quality of the data matters, and how recording data from humans is subject to a state-action mismatch caused by human reflexes.
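
The state-action mismatch can be pictured with a small (assumed, not the paper's) alignment step: a human's recorded action at frame t is really a reaction to a frame seen a few frames earlier, so the action stream can be shifted by an estimated reaction delay before pairing frames with actions.

    def align_demonstrations(frames, actions, reaction_delay_frames=4):
        """Pair frame t with the action the human produced reaction_delay_frames later."""
        return list(zip(frames[:-reaction_delay_frames],
                        actions[reaction_delay_frames:]))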

AIBPO: Combine the Intrinsic Reward and Auxiliary Task for 3D Strategy Game

TLDR
The experimental results show that the proposed IBPO algorithm deals with the reward-sparsity problem effectively and may be applied to real-world scenarios, such as 3-dimensional navigation and automatic driving, improving sample utilization and reducing the cost of interaction samples collected by real equipment.

Obstacle Tower Without Human Demonstrations: How Far a Deep Feed-Forward Network Goes with Reinforcement Learning

TLDR
This work presents an approach that performed competitively in the Obstacle Tower Challenge while starting completely from scratch, using deep reinforcement learning with a relatively simple feed-forward deep network structure.

Flow-based Intrinsic Curiosity Module

TLDR
This paper presents a flow-based intrinsic curiosity module (FICM) to exploit the prediction errors from optical flow estimation as exploration bonuses, and proposes the concept of leveraging motion features captured between consecutive observations to evaluate the novelty of observations in an environment.
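
Conceptually (this is a sketch of the idea, not the FICM architecture; flow_net and warp are assumed helpers), the exploration bonus is the error made when warping the current observation into the next one with a learned optical-flow estimate:

    import torch

    def curiosity_bonus(flow_net, warp, o_t, o_next):
        flow = flow_net(o_t, o_next)                   # predicted optical flow
        o_next_hat = warp(o_t, flow)                   # warp o_t forward using the flow
        return torch.mean((o_next_hat - o_next) ** 2)  # prediction error as bonus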
...

References

Showing 1-10 of 40 references

ViZDoom: A Doom-based AI research platform for visual reinforcement learning

TLDR
A novel test-bed platform for reinforcement learning research from raw visual information that employs the first-person perspective in a semi-realistic 3D world; the experiments confirm the utility of ViZDoom as an AI research platform and imply that visual reinforcement learning in realistic 3D first-person-perspective environments is feasible.
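
A minimal sketch of driving ViZDoom from Python (the scenario path is one of the example configs shipped with the library, and the action list is a placeholder):

    import random
    import vizdoom as vzd

    game = vzd.DoomGame()
    game.load_config("scenarios/basic.cfg")        # example config bundled with ViZDoom
    game.init()

    actions = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]    # e.g. move left / move right / attack
    game.new_episode()
    while not game.is_episode_finished():
        state = game.get_state()
        pixels = state.screen_buffer               # the raw "from pixels" observation
        reward = game.make_action(random.choice(actions))
    game.close()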

Playing FPS Games with Deep Reinforcement Learning

TLDR
This paper presents the first architecture to tackle 3D environments in first-person shooter games, which involve partially observable states, and substantially outperforms the game's built-in AI agents as well as average humans in deathmatch scenarios.

Playing Doom with SLAM-Augmented Deep Reinforcement Learning

TLDR
Inspired by prior work in human cognition indicating how humans employ a variety of semantic concepts and abstractions to reason about the world, an agent model is built that incorporates such abstractions into its policy-learning framework.

Clyde: A Deep Reinforcement Learning DOOM Playing Agent

TLDR
The use of deep reinforcement learning techniques in the context of playing partially observable multi-agent 3D games is presented; Clyde performed very well considering its relative simplicity and the fact that it deliberately avoided a high level of customisation to keep the algorithm generic.

Training Agent for First-Person Shooter Game with Actor-Critic Curriculum Learning

TLDR
A new framework for training a vision-based agent for first-person shooter (FPS) games, in particular Doom, which combines a state-of-the-art reinforcement learning approach (the Asynchronous Advantage Actor-Critic (A3C) model) with curriculum learning.
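
The curriculum component can be sketched as a list of progressively harder scenario configurations with promotion thresholds (the file names and score thresholds below are made up for illustration, not taken from the paper):

    curriculum = [
        ("easy_map.cfg",   5.0),    # hypothetical config files and score thresholds
        ("medium_map.cfg", 10.0),
        ("hard_map.cfg",   20.0),
    ]

    def next_stage(stage_idx, avg_score):
        cfg, threshold = curriculum[stage_idx]
        if avg_score >= threshold and stage_idx + 1 < len(curriculum):
            return stage_idx + 1    # promote the agent to a harder scenario
        return stage_idx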

The Text-Based Adventure AI Competition

TLDR
This paper summarizes the three competitions run in 2016–2018 (including details of open-source implementations of both the competition framework and competitors) and presents the results of an improved evaluation of these competitors across 20 games.

UnrealCV: Connecting Computer Vision to Unreal Engine

TLDR
An open-source plugin, UnrealCV, is created for the popular game engine Unreal Engine 4 (UE4) to enable researchers to build on these resources to create virtual worlds, provided they can access and modify the internal data structures of the games.

Psychlab: A Psychology Laboratory for Deep Reinforcement Learning Agents

TLDR
This study leads to the surprising conclusion that UNREAL learns more quickly about larger target stimuli than it does about smaller stimuli, and motivates a specific improvement in the form of a simple model of foveal vision that turns out to significantly boost UNREAL's performance, both on Psychlab tasks and on standard DeepMind Lab tasks.

Deep Recurrent Q-Learning for Partially Observable MDPs

TLDR
The effects of adding recurrence to a Deep Q-Network are investigated by replacing the first post-convolutional fully-connected layer with a recurrent LSTM, which successfully integrates information through time and replicates DQN's performance on standard Atari games and on partially observed equivalents featuring flickering game screens.
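
A hedged PyTorch sketch of that idea (layer sizes follow the common Atari DQN setup for 84x84 single-channel screens, not necessarily the paper's exact values): keep the convolutional stack, but feed its features into an LSTM instead of the first fully-connected layer so that Q-values can depend on the history of observations.

    import torch
    import torch.nn as nn

    class DRQN(nn.Module):
        def __init__(self, n_actions, hidden=512):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(1, 32, 8, stride=4), nn.ReLU(),
                nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
                nn.Conv2d(64, 64, 3, stride=1), nn.ReLU(),
            )
            self.lstm = nn.LSTM(64 * 7 * 7, hidden, batch_first=True)
            self.q_head = nn.Linear(hidden, n_actions)

        def forward(self, frames, hx=None):
            # frames: (batch, time, 1, 84, 84) sequence of screens
            b, t = frames.shape[:2]
            feats = self.conv(frames.reshape(b * t, *frames.shape[2:]))
            out, hx = self.lstm(feats.reshape(b, t, -1), hx)
            return self.q_head(out), hx     # Q-values for every time step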

Mastering the game of Go with deep neural networks and tree search

TLDR
Using this search algorithm, the program AlphaGo achieved a 99.8% winning rate against other Go programs and defeated the human European Go champion by 5 games to 0, the first time that a computer program has defeated a human professional player in the full-sized game of Go.