Autoencoder-augmented neuroevolution for visual doom playing
@article{Alvernaz2017AutoencoderaugmentedNF,
  title   = {Autoencoder-augmented neuroevolution for visual doom playing},
  author  = {Samuel Alvernaz and Julian Togelius},
  journal = {2017 IEEE Conference on Computational Intelligence and Games (CIG)},
  year    = {2017},
  pages   = {1-8}
}
Neuroevolution has proven effective at many reinforcement learning tasks, including tasks with incomplete information and delayed rewards, but does not seem to scale well to high-dimensional controller representations, which are needed for tasks where the input is raw pixel data. We propose a novel method where we train an autoencoder to create a comparatively low-dimensional representation of the environment observation, and then use CMA-ES to train neural network controllers acting on this…
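The pipeline the abstract describes can be sketched roughly as follows. This is a toy illustration, not the paper's implementation: the "pretrained encoder" is a fixed random projection standing in for a trained autoencoder, the controller is linear, the fitness function is synthetic, and a basic (mu, lambda) evolution strategy stands in for full CMA-ES. All dimensions and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical): 64-dim "pixel" observations compressed to 8 dims.
OBS_DIM, CODE_DIM, ACT_DIM = 64, 8, 3

# Stand-in for a pretrained autoencoder's encoder: a fixed random projection.
W_enc = rng.normal(size=(CODE_DIM, OBS_DIM)) / np.sqrt(OBS_DIM)

def encode(obs):
    """Compress a raw observation into a low-dimensional code."""
    return np.tanh(W_enc @ obs)

def act(theta, code):
    """Linear controller acting on the compressed code."""
    return np.tanh(theta.reshape(ACT_DIM, CODE_DIM) @ code)

# Synthetic fitness: negative squared error against a hidden target controller,
# averaged over a fixed batch of observations (a stand-in for episode reward).
target = rng.normal(size=ACT_DIM * CODE_DIM)
eval_obs = rng.normal(size=(32, OBS_DIM))
eval_codes = np.array([encode(o) for o in eval_obs])

def fitness(theta):
    err = np.array([act(theta, c) - act(target, c) for c in eval_codes])
    return -np.mean(err ** 2)  # higher is better

# Basic (mu, lambda) evolution strategy: a simplified stand-in for CMA-ES,
# which would additionally adapt a full covariance matrix over the parameters.
theta = np.zeros(ACT_DIM * CODE_DIM)
sigma = 0.5
for generation in range(100):
    pop = theta + sigma * rng.normal(size=(20, theta.size))  # sample offspring
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[-5:]]                     # select best 5
    theta = elite.mean(axis=0)                               # recombine
    sigma *= 0.99                                            # anneal step size
```

In the actual method, `fitness` would be the reward of a full game episode (e.g. in ViZDoom) and `encode` would be the bottleneck of an autoencoder trained on game frames; the key design point is that evolution only searches the controller's low-dimensional parameter space, not the raw pixel space.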
56 Citations
MONCAE: Multi-Objective Neuroevolution of Convolutional Autoencoders
- Computer Science, ArXiv
- 2021
Results show that images were compressed by a factor of more than 10, while still retaining enough information to achieve image classification for the majority of the tasks, so this new approach can be used to speed up the AutoML pipeline for image compression.
Deep neuroevolution of recurrent and discrete world models
- Computer Science, GECCO
- 2019
This paper demonstrates the surprising finding that models with the same architecture can instead be trained efficiently end-to-end through a genetic algorithm (GA), reaching performance comparable to the original world model on a challenging car-racing task.
Playing Atari with few neurons
- Computer Science, Autonomous Agents and Multi-Agent Systems
- 2021
We propose a new method for learning compact state representations and policies separately but simultaneously for policy approximation in vision-based applications such as Atari games. Approaches…
Playing Atari with Six Neurons
- Computer Science, AAMAS
- 2019
This work proposes a new method for learning policies and compact state representations separately but simultaneously for policy approximation in reinforcement learning, using tiny neural networks of only 6 to 18 neurons.
Improving Deep Neuroevolution via Deep Innovation Protection
- Computer Science, ArXiv
- 2020
This paper presents a method called Deep Innovation Protection (DIP) that allows training complex world models end-to-end for 3D environments and investigates the emergent representations of these evolved networks, which learn a model of the world without the need for a specific forward-prediction loss.
Evolving inborn knowledge for fast adaptation in dynamic POMDP problems
- Computer Science, GECCO
- 2020
The analysis of the evolved networks reveals the ability of the proposed algorithm to acquire inborn knowledge in a variety of aspects such as the detection of cues that reveal implicit rewards, and the ability to evolve location neurons that help with navigation.
Playing DOOM using Deep Reinforcement Learning
- Computer Science
- 2017
The Deep Q-learning algorithm is implemented to teach an agent to perform well in various scenarios of the classic video game DOOM, achieving near-optimal performance.
Learn Effective Representation for Deep Reinforcement Learning
- Computer Science, 2022 IEEE International Conference on Multimedia and Expo (ICME)
- 2022
This work proposes an end-to-end Large Feature Extractor Network (LFENet) that uses large neural networks with dense connections to train a high-capacity encoder and combines LFENet with Proximal Policy Optimization (PPO) algorithm.
NaturalNets: Simplified Biological Neural Networks for Learning Complex Tasks
- Computer Science, Biology, 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
- 2021
A new neural network architecture, called NaturalNet, is presented, which uses a simplified biological neuron model and consists of a set of nonlinear ordinary differential equations, and provides a higher biological plausibility than commonly used neural networks for deep learning applications, while also offering low computational complexity to enable fast training.
Low Dimensional State Representation Learning with Reward-shaped Priors
- Computer Science, 2020 25th International Conference on Pattern Recognition (ICPR)
- 2021
This work proposes a method that aims at learning a mapping from the observations into a lower-dimensional state space using unsupervised learning, with loss functions shaped to incorporate prior knowledge of the environment and the task.
References
Showing 1-10 of 30 references
Intrinsically motivated neuroevolution for vision-based reinforcement learning
- Computer Science, 2011 IEEE International Conference on Development and Learning (ICDL)
- 2011
An unsupervised sensory pre-processor (compressor) is trained on images generated from the environment by the population of evolving recurrent neural network controllers; it not only reduces the input dimensionality of the controllers but also biases the search toward novel controllers by rewarding those that discover images it reconstructs poorly.
Backpropagation without human supervision for visual control in Quake II
- Computer Science, 2009 IEEE Symposium on Computational Intelligence and Games
- 2009
This work evolves a non-visual neural network as a supervisor to the visual controller in backpropagation, which creates controllers that learn much faster and reach a greater fitness than those trained by neuroevolution alone on the same problem in the same amount of time.
Evolving deep unsupervised convolutional networks for vision-based reinforcement learning
- Computer Science, GECCO
- 2014
Both the MPCNN preprocessor and the RNN controller are evolved successfully to control a car in the TORCS racing simulator using only visual input, the first use of deep learning in the context of evolutionary RL.
Evolving large-scale neural networks for vision-based reinforcement learning
- Computer Science, GECCO '13
- 2013
This paper scales up a compressed network encoding, in which network weight matrices are represented indirectly as a set of Fourier-type coefficients, to tasks that require very large networks due to the high dimensionality of their input space.
ViZDoom: A Doom-based AI research platform for visual reinforcement learning
- Computer Science, 2016 IEEE Conference on Computational Intelligence and Games (CIG)
- 2016
A novel test-bed platform for reinforcement learning research from raw visual information which employs the first-person perspective in a semi-realistic 3D world; the experiments confirm the utility of ViZDoom as an AI research platform and imply that visual reinforcement learning in realistic 3D first-person environments is feasible.
Autonomous reinforcement learning on raw visual input data in a real world application
- Computer Science, The 2012 International Joint Conference on Neural Networks (IJCNN)
- 2012
A learning architecture that is able to do reinforcement learning based on raw visual input data is presented; the resulting policy, learned only from success or failure, is hardly beaten by an experienced human player.
Playing Atari with Deep Reinforcement Learning
- Computer Science, ArXiv
- 2013
This work presents the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning, which outperforms all previous approaches on six of the games and surpasses a human expert on three of them.
Deep auto-encoder neural networks in reinforcement learning
- Computer Science, The 2010 International Joint Conference on Neural Networks (IJCNN)
- 2010
A framework for combining the training of deep auto-encoders (for learning compact feature spaces) with recently proposed batch-mode RL algorithms (for learning policies) is proposed, with an emphasis on data-efficiency and on studying the properties of the feature spaces automatically constructed by the deep auto-encoder neural networks.
Efficient Non-linear Control Through Neuroevolution
- Computer Science, ECML
- 2006
A novel neuroevolution method called CoSyNE, which evolves networks at the level of individual weights, is introduced and found to be significantly more efficient and powerful than the other methods on these tasks, forming a promising foundation for solving challenging real-world control tasks.
Evolving Deep Neural Networks
- Computer Science, Artificial Intelligence in the Age of Neural Networks and Brain Computing
- 2019