Evolving Neural Networks through Augmenting Topologies

@article{stanley2002evolving,
  title={Evolving Neural Networks through Augmenting Topologies},
  author={Kenneth O. Stanley and Risto Miikkulainen},
  journal={Evolutionary Computation},
  year={2002}
}
An important question in neuroevolution is how to gain an advantage from evolving neural network topologies along with weights. NEAT is also an important contribution to GAs because it shows how evolution can both optimize and complexify solutions simultaneously, offering the possibility of evolving increasingly complex solutions over generations and strengthening the analogy with biological evolution.
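The complexification the abstract describes can be illustrated with a minimal sketch (not the authors' implementation; all names here are hypothetical): a genome starts with minimal topology and gains structure through add-connection and add-node mutations, with each new gene stamped with a global innovation number so genomes remain alignable for crossover.

```python
import random

INNOVATION = 0  # global historical-marking counter

def next_innovation():
    global INNOVATION
    INNOVATION += 1
    return INNOVATION

def add_connection(genome, src, dst):
    """Mutation: connect two existing nodes with a new weighted gene."""
    genome["connections"].append({
        "src": src, "dst": dst,
        "weight": random.uniform(-1, 1),
        "enabled": True,
        "innovation": next_innovation(),
    })

def add_node(genome, conn_index):
    """Mutation: split an existing connection by inserting a new node."""
    conn = genome["connections"][conn_index]
    conn["enabled"] = False               # disable the old gene
    new_node = max(genome["nodes"]) + 1
    genome["nodes"].append(new_node)
    # src -> new node gets weight 1.0 and new node -> dst inherits the old
    # weight, so the complexified structure initially computes the same function
    add_connection(genome, conn["src"], new_node)
    genome["connections"][-1]["weight"] = 1.0
    add_connection(genome, new_node, conn["dst"])
    genome["connections"][-1]["weight"] = conn["weight"]

genome = {"nodes": [0, 1], "connections": []}
add_connection(genome, 0, 1)   # minimal topology: one input -> one output
add_node(genome, 0)            # complexify: insert a hidden node
```

The innovation numbers are the key device: they let NEAT-style crossover line up genes that share historical origin even after topologies diverge.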

Efficient evolution of neural network topologies

A method, NeuroEvolution of Augmenting Topologies (NEAT) that outperforms the best fixed-topology methods on a challenging benchmark reinforcement learning task and shows how it is possible for evolution to both optimize and complexify solutions simultaneously, making it possible to evolve increasingly complex solutions over time.

Advances in Neuroevolution through Augmenting Topologies – A Case Study

An analysis of the efficiency and performance of the various algorithms proposed for Topology and Weight Evolving Artificial Neural Networks (TWEANNs), providing readers with an overview of past and current research trends in the field of neuroevolution.

A NEAT Visualisation of Neuroevolution Trajectories

A visual and statistical analysis contrasting the behaviour of NEAT, with and without using the crossover operator, when solving the two benchmark problems outlined in the original NEAT article: XOR and double-pole balancing.

Competitive Coevolution through Evolutionary Complexification

It is argued that complexification, i.e. the incremental elaboration of solutions through adding new structure, achieves both these goals and is demonstrated through the NeuroEvolution of Augmenting Topologies (NEAT) method, which evolves increasingly complex neural network architectures.

Using Genetic Algorithms to Evolve Artificial Neural Networks

It is demonstrated that neuroevolution is an effective method to determine an optimal neural network topology and appropriate parameter selection is critical in order to efficiently converge to an optimal topology.

An empirical comparison of evolution and coevolution for designing artificial neural network game players

  • M. Shi
  • Computer Science
    GECCO '08
  • 2008
A novel neurocoevolutionary algorithm, EEC, is proposed in this work, in which the connection weights and the connection paths of networks are evolved separately; the work also demonstrates that fully connected networks can generate noise that results in inefficient learning.

Neuroevolution through Augmenting Topologies Applied to Evolving Neural Networks to Play Othello

A powerful new algorithm for neuroevolution, Neuro-Evolution for Augmenting Topologies (NEAT), is adapted to the game-playing domain, illustrating the necessity of a mobility strategy for defeating a powerful positional player in Othello.

Blocky Net: A New NeuroEvolution Method

A new network called Blocky Net, with built-in feature selection and a limited maximum parameter space and complexity, is proposed; it performs better on 13 of the 20 datasets tested versus 2 for FS-NEAT, and outperforms NEAT in all cases.

Evolving parsimonious networks by mixing activation functions

This work extends the neuroevolution algorithm NEAT to evolve the activation function of neurons in addition to the topology and weights of the network, and shows that the resulting heterogeneous networks are significantly smaller than homogeneous networks.

Meta-NEAT, meta-analysis of neuroevolving topologies

Meta-NEAT offers a way to optimize the convergence rate of NEAT through an additional genetic algorithm built on top of NEAT, which learns optimal hyper-parameter configurations in order to boost convergence.



Efficient reinforcement learning through symbiotic evolution

A new reinforcement learning method called SANE (Symbiotic, Adaptive Neuro-Evolution), which evolves a population of neurons through genetic algorithms to form a neural network capable of performing a task, is presented.

Solving Non-Markovian Control Tasks with Neuro-Evolution

This article demonstrates a neuroevolution system, Enforced Sub-populations (ESP), that is used to evolve a controller for the standard double pole task and a much harder, non-Markovian version, and introduces an incremental method that evolves on a sequence of tasks, and utilizes a local search technique (Delta-Coding) to sustain diversity.

An evolutionary algorithm that constructs recurrent neural networks

It is argued that genetic algorithms are inappropriate for network acquisition and an evolutionary program is described, called GNARL, that simultaneously acquires both the structure and weights for recurrent networks.

Forming Neural Networks Through Efficient and Adaptive Coevolution

The symbiotic adaptive neuroevolution system coevolves a population of neurons that cooperate to form a functioning neural network; it is shown to be more efficient, more adaptive, and to maintain higher levels of diversity than the more common network-based population approaches.

Evolving Optimal Neural Networks Using Genetic Algorithms with Occam's Razor

This paper investigates an alternative evolutionary approach, breeder genetic programming (BGP), in which the architecture and the weights are optimized simultaneously: the genotype of each network is represented as a tree whose depth and width are dynamically adapted to the particular application by specifically defined genetic operators.

Co-Evolutionary Learning by Automatic Modularisation with Speciation

It is demonstrated why co-evolution can sometimes fail (and fail spectacularly) to cause the desired escalation of expertise, and why a simple gating algorithm for combining these different strategies into a single high-level strategy improves the generalisation ability of co-evolutionary learning.

Incremental Evolution of Complex General Behavior

This article proposes an approach wherein complex general behavior is learned incrementally, by starting with simpler behavior and gradually making the task more challenging and general, which evolves more effective and more general behavior.

Connectionist Theory Refinement: Genetically Searching the Space of Network Topologies

The Regent algorithm is presented, which uses domain-specific knowledge to help create an initial population of knowledge-based neural networks and genetic operators of crossover and mutation to continually search for better network topologies.

Robust non-linear control through neuroevolution

This dissertation develops a methodology for solving real world control tasks consisting of an efficient neuroevolution algorithm that solves difficult non-linear control tasks by coevolving neurons, an incremental evolution method to scale the algorithm to the most challenging tasks, and a technique for making controllers robust so that they can transfer from simulation to the real world.

Genetic evolution of the topology and weight distribution of neural networks

  • V. Maniezzo
  • Computer Science
    IEEE Trans. Neural Networks
  • 1994
This paper proposes a system based on a parallel genetic algorithm with enhanced encoding and operational abilities that has been applied to two widely different problem areas: Boolean function learning and robot control.