Efficient evolution of neural network topologies

@article{Stanley2002EfficientEO,
  title={Efficient evolution of neural network topologies},
  author={Kenneth O. Stanley and R. Miikkulainen},
  journal={Proceedings of the 2002 Congress on Evolutionary Computation. CEC'02 (Cat. No.02TH8600)},
  year={2002},
  volume={2},
  pages={1757-1762 vol.2}
}
Neuroevolution, i.e. evolving artificial neural networks with genetic algorithms, has been highly effective in reinforcement learning tasks, particularly those with hidden state information. [...] What results is significantly faster learning.
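The abstract describes neuroevolution only in general terms: network parameters are encoded as genomes, evaluated on a task, and improved by selection and mutation. The sketch below is a minimal illustration of that general idea, not the paper's NEAT method: it evolves the weights of a small fixed 2-4-1 feedforward network on the XOR task with a plain genetic algorithm. The topology, population size, mutation scale, and generation count are arbitrary assumptions chosen for the example.

# Minimal neuroevolution sketch (illustrative only, not NEAT):
# a simple genetic algorithm evolves the weights of a fixed-topology
# feedforward network on XOR. All hyperparameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HID, N_OUT = 2, 4, 1                     # assumed 2-4-1 topology
GENOME_LEN = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([0, 1, 1, 0], dtype=float)

def decode(genome):
    # Split a flat weight vector into layer matrices and biases.
    i = 0
    w1 = genome[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = genome[i:i + N_HID]; i += N_HID
    w2 = genome[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = genome[i:i + N_OUT]
    return w1, b1, w2, b2

def fitness(genome):
    # Negative squared error on XOR: higher is better.
    w1, b1, w2, b2 = decode(genome)
    hidden = np.tanh(X @ w1 + b1)
    out = np.tanh(hidden @ w2 + b2).ravel()
    return -np.sum((out - Y) ** 2)

POP, GENS, ELITE, SIGMA = 100, 200, 10, 0.3       # assumed GA settings
population = rng.normal(0.0, 1.0, size=(POP, GENOME_LEN))

for gen in range(GENS):
    scores = np.array([fitness(g) for g in population])
    elite = population[np.argsort(scores)[-ELITE:]]             # keep the best
    parents = elite[rng.integers(0, ELITE, size=POP - ELITE)]   # sample parents
    children = parents + rng.normal(0.0, SIGMA, parents.shape)  # Gaussian mutation
    population = np.vstack([elite, children])

best = max(population, key=fitness)
print("best fitness:", fitness(best))

NEAT itself goes further than this sketch by also mutating the topology (adding nodes and connections) and protecting new structure through speciation, which is the subject of the paper and of several of the works listed below.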
Evolving Programs to Build Artificial Neural Networks
This chapter evolves a pair of programs that build the network: one runs inside neurons and allows them to move, change, die, or replicate; the other runs inside dendrites and allows them to change length and weight, be removed, or replicate.
Evolving Artificial Neural Networks using Cartesian Genetic Programming
This thesis extends Cartesian Genetic Programming such that it can represent recurrent program structures, allowing for the creation of recurrent Artificial Neural Networks, and is demonstrated to be extremely competitive in the domain of series forecasting.
Reinforcement Learning Benchmarks and Bake-offs II: A workshop at the 2005 NIPS conference
Evolution of neural networks, through genetic algorithms or otherwise, has recently emerged as a possible way to solve challenging reinforcement learning problems; NEAT (NeuroEvolution of Augmenting Topologies) [...]
Competitive Coevolution through Evolutionary Complexification
It is argued that complexification, i.e. the incremental elaboration of solutions through adding new structure, achieves both these goals and is demonstrated through the NeuroEvolution of Augmenting Topologies (NEAT) method, which evolves increasingly complex neural network architectures.
Autonomous Evolution of Topographic Regularities in Artificial Neural Networks
This letter shows that when geometry is introduced to evolved ANNs through the hypercube-based neuroevolution of augmenting topologies algorithm, they begin to acquire characteristics that indeed are reminiscent of biological brains.
Neuro-evolution using game-driven cultural algorithms
A neural architecture search (NAS) method based on the graph evolution pioneered by Neuro-evolution of Augmenting Topologies (NEAT), but driven by the evolutionary mechanisms underlying Cultural Algorithms (CA), a population-based, stochastic optimization system inspired by problem solving in human cultures and well suited to problems such as NAS.
Coevolution of neural networks using a layered pareto archive
A technique is developed that interfaces the LAPCA algorithm with NeuroEvolution of Augmenting Topologies (NEAT), a method to evolve neural networks with demonstrated efficiency in game-playing domains, and combining NEAT and LAPCA is found to be an effective approach to coevolution.
τ-NEAT
Neuroevolution of Augmenting Topologies (NEAT) has been a very successful algorithm for evolving Artificial Neural Networks (ANNs) that adapt their structure and processing to the task that is [...]
Cartesian genetic programming encoded artificial neural networks: a comparison using three benchmarks
The effectiveness of CGPANNs is compared with a large number of previous methods on three benchmark problems, and the results show that CGPANNs perform as well as or better than many other approaches.
Introducing Synaptic Delays in the NEAT Algorithm to Improve Modelling in Cognitive Robotics
A new implementation of the NEAT algorithm that is able to generate artificial neural networks (ANNs) with trainable time-delayed synapses, in addition to its previous capabilities, is described and tested, showing that this approach improves the behavior of the resulting neural networks when dealing with complex time-related processes.

References

Showing 1-10 of 22 references
Evolving Neural Networks through Augmenting Topologies
A method is presented, NeuroEvolution of Augmenting Topologies (NEAT), which outperforms the best fixed-topology method on a challenging benchmark reinforcement learning task and shows how it is possible for evolution to both optimize and complexify solutions simultaneously.
Solving Non-Markovian Control Tasks with Neuro-Evolution
This article demonstrates a neuroevolution system, Enforced Sub-populations (ESP), used to evolve a controller for the standard double-pole task and a much harder, non-Markovian version; it also introduces an incremental method that evolves on a sequence of tasks and utilizes a local search technique (Delta-Coding) to sustain diversity.
Evolving Optimal Neural Networks Using Genetic Algorithms with Occam's Razor
This paper investigates an alternative evolutionary approach, breeder genetic programming (BGP), in which the architecture and the weights are optimized simultaneously: the genotype of each network is represented as a tree whose depth and width are dynamically adapted to the particular application by specifically defined genetic operators.
Efficient Reinforcement Learning through Symbiotic Evolution
A new reinforcement learning method called SANE (Symbiotic, Adaptive Neuro-Evolution), which evolves a population of neurons through genetic algorithms to form a neural network capable of performing a task, is presented.
An evolutionary algorithm that constructs recurrent neural networks
It is argued that genetic algorithms are inappropriate for network acquisition, and an evolutionary program called GNARL is described that simultaneously acquires both the structure and the weights of recurrent networks.
Incremental Evolution of Complex General Behavior
This article proposes an approach wherein complex general behavior is learned incrementally, by starting with simpler behavior and gradually making the task more challenging and general, which evolves more effective and more general behavior.
Evolving the Topology and the Weights of Neural Networks Using a Dual Representation
A new approach to the construction of neural networks based on evolutionary computation is presented, in which a linear chromosome combined with a graph representation of the network is used by the genetic operators, allowing the evolution of the architecture and the weights simultaneously without the need for local weight optimization.
Evolving artificial neural networks
It is shown, through an extensive literature review, that combinations of ANNs and EAs can lead to significantly better intelligent systems than relying on ANNs or EAs alone.
Symbiotic Evolution of Neural Networks in Sequential Decision Tasks
This research studies the combination of evolutionary algorithms and artificial neural networks to learn and perform difficult decision tasks, and develops an efficient system for learning decision strategies in complex problems.
A comparison between cellular encoding and direct encoding for genetic neural networks
This paper compares the efficiency of two encoding schemes for Artificial Neural Networks optimized by evolutionary algorithms, and solves a more difficult problem: balancing two poles when no information about the velocity is provided as input.