Evolving Neural Networks through Augmenting Topologies

@article{Stanley2002EvolvingNN,
  title={Evolving Neural Networks through Augmenting Topologies},
  author={Kenneth O. Stanley and Risto Miikkulainen},
  journal={Evolutionary Computation},
  year={2002},
  volume={10},
  pages={99--127}
}
An important question in neuroevolution is how to gain an advantage from evolving neural network topologies along with weights. […] NEAT is also an important contribution to GAs because it shows how it is possible for evolution to both optimize and complexify solutions simultaneously, offering the possibility of evolving increasingly complex solutions over generations, and strengthening the analogy with biological evolution.

Efficient evolution of neural network topologies

A method, NeuroEvolution of Augmenting Topologies (NEAT) that outperforms the best fixed-topology methods on a challenging benchmark reinforcement learning task and shows how it is possible for evolution to both optimize and complexify solutions simultaneously, making it possible to evolve increasingly complex solutions over time.
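The complexification NEAT relies on is driven by structural mutations that grow a genome from a minimal starting topology, with each new connection gene stamped by a global innovation number so genomes remain alignable for crossover. A minimal sketch of the add-node mutation, using an illustrative dict-based gene encoding (field names are assumptions, not taken from the paper):

```python
import itertools

# Global innovation counter: in NEAT, every new connection gene receives a
# unique historical marking so genomes can be aligned during crossover.
_innovation = itertools.count()

def add_node_mutation(genome, connection_index, new_node_id):
    """Split an existing connection by inserting a new hidden node.

    `genome` is a list of connection genes, each a dict with keys
    in/out/weight/enabled/innov (a simplified stand-in for NEAT's
    genome encoding).
    """
    conn = genome[connection_index]
    conn["enabled"] = False  # the old connection is disabled, not deleted
    # The incoming connection gets weight 1.0 and the outgoing connection
    # inherits the old weight, so the mutation initially preserves behaviour.
    genome.append({"in": conn["in"], "out": new_node_id, "weight": 1.0,
                   "enabled": True, "innov": next(_innovation)})
    genome.append({"in": new_node_id, "out": conn["out"], "weight": conn["weight"],
                   "enabled": True, "innov": next(_innovation)})
    return genome

# Start from a minimal genome (one input wired to one output), then complexify.
genome = [{"in": 0, "out": 1, "weight": 0.7, "enabled": True,
           "innov": next(_innovation)}]
add_node_mutation(genome, 0, new_node_id=2)
```

Because the split initially computes the same function as the disabled connection, new structure is introduced without an immediate fitness penalty, which is what allows complexification to proceed gradually.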

Advances in Neuroevolution through Augmenting Topologies – A Case Study

An analysis of the efficiency and performance of the various algorithms which have been proposed for Topology and Weight Evolving Artificial Neural Networks (TWEANNs) will provide learners with a better overview of the past and current research trends in the field of Neuroevolution.

A NEAT Visualisation of Neuroevolution Trajectories

A visual and statistical analysis contrasting the behaviour of NEAT, with and without using the crossover operator, when solving the two benchmark problems outlined in the original NEAT article: XOR and double-pole balancing.

Competitive Coevolution through Evolutionary Complexification

It is argued that complexification, i.e. the incremental elaboration of solutions through adding new structure, achieves both these goals and is demonstrated through the NeuroEvolution of Augmenting Topologies (NEAT) method, which evolves increasingly complex neural network architectures.

An empirical comparison of evolution and coevolution for designing artificial neural network game players

  • Min Shi
  • Computer Science
    GECCO '08
  • 2008
A novel neurocoevolutionary algorithm, EEC, is proposed in this work, in which the connection weights and the connection paths of networks are evolved separately; the work demonstrates that fully connected networks can generate noise that results in inefficient learning.

Neuroevolution through Augmenting Topologies Applied to Evolving Neural Networks to Play Othello

A powerful new algorithm for neuroevolution, Neuro-Evolution for Augmenting Topologies (NEAT), is adapted to the game-playing domain, illustrating the necessity of the mobility strategy in defeating a powerful positional player in Othello.

Blocky Net: A New NeuroEvolution Method

A new network called Blocky Net, with built-in feature selection and a limited maximum parameter space and complexity, is proposed; it performs better on 13 of the 20 datasets tested versus 2 for FS-NEAT, and outperforms NEAT in all cases.

Meta-NEAT, meta-analysis of neuroevolving topologies

Meta-NEAT offers a way to optimize the convergence rate of NEAT through an additional genetic algorithm built on top of NEAT, adding a layer that learns optimal hyper-parameter configurations in order to boost NEAT's convergence rate.

Automatic Task Decomposition for the NeuroEvolution of Augmenting Topologies (NEAT) Algorithm

An algorithm for evolving MFFN architectures based on the NeuroEvolution of Augmenting Topologies (NEAT) algorithm is presented, outlining an approach to automatically evolving, assigning fitness values to, and combining the task-specific networks in a principled manner.

Evolving Neural Networks through a Reverse Encoding Tree

This paper advances a method that incorporates a type of topological edge coding, named Reverse Encoding Tree (RET), for efficiently evolving scalable neural networks, and demonstrates that RET opens up potential future research directions in dynamic environments.
...

References

Showing 1–10 of 57 references

Efficient reinforcement learning through symbiotic evolution

A new reinforcement learning method called SANE (Symbiotic, Adaptive Neuro-Evolution), which evolves a population of neurons through genetic algorithms to form a neural network capable of performing a task, is presented.
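SANE's key idea is symbiotic credit assignment: a neuron is never evaluated alone, but by the average fitness of the complete networks it participates in. A minimal sketch under assumed names (`build_and_score` is a hypothetical task-specific fitness function, not from the paper):

```python
import random

def evaluate_neurons(neurons, build_and_score, net_size=3, trials=50, rng=None):
    """Symbiotic credit assignment in the spirit of SANE (simplified sketch).

    Each trial draws `net_size` neurons at random, scores the assembled
    network, and credits every participating neuron with that score.
    A neuron's fitness is the average score of the networks it joined,
    which rewards neurons that cooperate well with many partners.
    """
    rng = rng or random.Random(0)
    totals = {i: 0.0 for i in range(len(neurons))}
    counts = {i: 0 for i in range(len(neurons))}
    for _ in range(trials):
        team = rng.sample(range(len(neurons)), net_size)
        score = build_and_score([neurons[i] for i in team])
        for i in team:
            totals[i] += score
            counts[i] += 1
    return {i: totals[i] / counts[i] for i in totals if counts[i]}

# Toy stand-in task: a "network" scores higher when its neurons' values are
# larger, so neuron 1 (value 0.9) should earn the highest average fitness.
neurons = [0.1, 0.9, 0.5, 0.3]
fitness = evaluate_neurons(neurons, build_and_score=lambda team: sum(team))
```

The averaging step is what makes the evolution symbiotic: selection pressure acts on how well a neuron completes many different partial networks, not on any single network.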

Solving Non-Markovian Control Tasks with Neuro-Evolution

This article demonstrates a neuroevolution system, Enforced Sub-populations (ESP), that is used to evolve a controller for the standard double pole task and a much harder, non-Markovian version, and introduces an incremental method that evolves on a sequence of tasks, and utilizes a local search technique (Delta-Coding) to sustain diversity.

An evolutionary algorithm that constructs recurrent neural networks

It is argued that genetic algorithms are inappropriate for network acquisition and an evolutionary program is described, called GNARL, that simultaneously acquires both the structure and weights for recurrent networks.

Forming Neural Networks Through Efficient and Adaptive Coevolution

The symbiotic adaptive neuroevolution system coevolves a population of neurons that cooperate to form a functioning neural network, and is shown to be more efficient, more adaptive, and to maintain higher levels of diversity than the more common network-based population approaches.

Evolving Optimal Neural Networks Using Genetic Algorithms with Occam's Razor

This paper investigates an alternative evolutionary approach-breeder genetic programming (BGP)-in which the architecture and the weights are optimized simultaneously, in which the genotype of each network is represented as a tree whose depth and width are dynamically adapted to the particular application by specifically defined genetic operators.

Co-Evolutionary Learning by Automatic Modularisation with Speciation

It is demonstrated why co-evolution can sometimes fail (and fail spectacularly) to cause the desired escalation of expertise, and why a simple gating algorithm for combining these different strategies into a single high-level strategy improves the generalisation ability of co-evolutionary learning.

Incremental Evolution of Complex General Behavior

This article proposes an approach wherein complex general behavior is learned incrementally, by starting with simpler behavior and gradually making the task more challenging and general, which evolves more effective and more general behavior.

Connectionist Theory Refinement: Genetically Searching the Space of Network Topologies

The Regent algorithm is presented, which uses domain-specific knowledge to help create an initial population of knowledge-based neural networks and genetic operators of crossover and mutation to continually search for better network topologies.

Non-redundant genetic coding of neural networks

  • D. Thierens
  • Computer Science
    Proceedings of IEEE International Conference on Evolutionary Computation
  • 1996
A neural network genotype representation that completely eliminates the functional redundancies by transforming each neural network into its canonical form is discussed, which is computationally extremely simple.
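The redundancy at issue is the competing-conventions problem: permuting a network's hidden neurons changes the genotype without changing the computed function. Picking one ordering as canonical eliminates that redundancy. A minimal illustration for a single-hidden-layer network, sorting hidden units by their weight vectors (a simplified sketch; the paper's transformation handles more general redundancies):

```python
def canonical_form(hidden_units):
    """Canonicalise a single-hidden-layer network's genotype.

    Each hidden unit is represented as (input_weights_tuple, output_weight).
    Reordering hidden units leaves the network's function unchanged, so two
    genotypes that differ only in hidden-unit order are functionally
    identical. Sorting selects one representative per equivalence class.
    """
    return sorted(hidden_units)

# Two genotypes with the same hidden units in different orders map to the
# same canonical form, so the redundancy disappears from the search space.
a = canonical_form([((0.5, -1.0), 2.0), ((0.1, 0.3), -0.7)])
b = canonical_form([((0.1, 0.3), -0.7), ((0.5, -1.0), 2.0)])
```

Working in canonical form keeps the genetic search from wasting effort distinguishing, and recombining, genotypes that encode the same network.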

Designing application-specific neural networks using the structured genetic algorithm

  • D. Dasgupta, D. McGregor
  • Computer Science
    [Proceedings] COGANN-92: International Workshop on Combinations of Genetic Algorithms and Neural Networks
  • 1992
The empirical studies show that the SGA can efficiently determine the network size and topology along with the optimal set of connection weights appropriate for desired tasks, without using backpropagation or any other learning algorithm.
...