Evolving Neural Networks through Augmenting Topologies
Presents NeuroEvolution of Augmenting Topologies (NEAT), a method that outperforms the best fixed-topology method on a challenging reinforcement learning benchmark and shows how evolution can optimize and complexify solutions simultaneously.
Incremental Evolution of Complex General Behavior
Proposes an approach in which complex general behavior is learned incrementally: evolution starts with simpler behavior and the task is gradually made more challenging and general, yielding more effective and more general behavior.
Efficient Reinforcement Learning Through Evolving Neural Network Topologies
Shows that when network structure is evolved with a principled method of crossover, with protection for structural innovation, and through incremental growth from minimal structure, learning is significantly faster and stronger than with the best fixed-topology methods.
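The "incremental growth from minimal structure" described above can be illustrated with a minimal sketch (not the authors' code; the gene fields and function names here are illustrative assumptions): a genome starts as a fully connected input-to-output network, and an add-node mutation splits an existing connection, with each new gene tagged by a global innovation number so that crossover can later align matching genes.

```python
import random

INNOVATION = 0  # global innovation counter shared across the population

def next_innovation():
    """Assign a unique historical marking to each new connection gene."""
    global INNOVATION
    INNOVATION += 1
    return INNOVATION

def make_minimal_genome(n_in, n_out):
    """Start from a fully connected input->output net with no hidden nodes."""
    return [
        {"src": i, "dst": n_in + o, "weight": random.uniform(-1, 1),
         "enabled": True, "innov": next_innovation()}
        for i in range(n_in) for o in range(n_out)
    ]

def add_node_mutation(genome, new_node_id):
    """Split a random enabled connection a->b into a->new and new->b.

    The old gene is disabled rather than deleted, so existing
    behavior is preserved and structure only grows incrementally.
    """
    gene = random.choice([g for g in genome if g["enabled"]])
    gene["enabled"] = False
    genome.append({"src": gene["src"], "dst": new_node_id, "weight": 1.0,
                   "enabled": True, "innov": next_innovation()})
    genome.append({"src": new_node_id, "dst": gene["dst"],
                   "weight": gene["weight"], "enabled": True,
                   "innov": next_innovation()})
    return genome

genome = make_minimal_genome(2, 1)          # minimal structure: 2 genes
genome = add_node_mutation(genome, new_node_id=3)  # complexify: 4 genes, 1 disabled
```

Because every structural addition carries a fresh innovation number, two genomes can be crossed over by matching genes with equal markings, which is the mechanism that makes crossover of differing topologies principled.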
Efficient evolution of neural networks through complexification
This dissertation presents the NeuroEvolution of Augmenting Topologies (NEAT) method, which makes the search for complex solutions feasible; NEAT is first shown to be faster than traditional approaches on a challenging reinforcement learning benchmark, and is then used to discover complex behavior in three challenging domains.
Accelerated Neural Evolution through Cooperatively Coevolved Synapses
Compares Cooperative Synapse Neuroevolution (CoSyNE), a neuroevolution method that applies cooperative coevolution at the level of individual synaptic weights, to a broad range of reinforcement learning algorithms on very difficult versions of the pole-balancing problem involving large state spaces and hidden state.
Competitive Coevolution through Evolutionary Complexification
Argues that complexification, i.e. the incremental elaboration of solutions through the addition of new structure, achieves both of these goals, and demonstrates this through the NeuroEvolution of Augmenting Topologies (NEAT) method, which evolves increasingly complex neural network architectures.
Efficient Reinforcement Learning through Symbiotic Evolution
Presents SANE (Symbiotic, Adaptive Neuro-Evolution), a new reinforcement learning method that evolves a population of neurons through genetic algorithms to form a neural network capable of performing a task.
Evolving Deep Neural Networks
Proposes CoDeepNEAT, an automated method for optimizing deep learning architectures through evolution; it extends existing neuroevolution methods to topology, components, and hyperparameters, achieving results comparable to the best human designs on standard benchmarks in object recognition and language modeling.
Robust non-linear control through neuroevolution
This dissertation develops a methodology for solving real-world control tasks, consisting of an efficient neuroevolution algorithm that solves difficult non-linear control tasks by coevolving neurons, an incremental evolution method that scales the algorithm to the most challenging tasks, and a technique for making controllers robust so that they can transfer from simulation to the real world.
Computational Maps in the Visual Cortex
Describes the development of maps and connections in the visual cortex, the construction of LISSOM, a computational map model of V1, and the role of plasticity and hierarchical models in this development.