Evolving Neural Networks through Augmenting Topologies

@article{Stanley2002EvolvingNN,
  title={Evolving Neural Networks through Augmenting Topologies},
  author={Kenneth O. Stanley and Risto Miikkulainen},
  journal={Evolutionary Computation},
  year={2002},
  volume={10},
  pages={99-127}
}
An important question in neuroevolution is how to gain an advantage from evolving neural network topologies along with weights. [...]
Key Result
NEAT is also an important contribution to GAs because it shows how it is possible for evolution to both optimize and complexify solutions simultaneously, offering the possibility of evolving increasingly complex solutions over generations, and strengthening the analogy with biological evolution.
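To make the complexification idea concrete, the sketch below shows, in Python, two ingredients commonly associated with NEAT: connection genes tagged with historical innovation numbers, and an add-node mutation that grows a network by splitting an existing connection. This is a minimal illustrative sketch under assumed conventions, not the authors' implementation; all class and function names are hypothetical.

# Illustrative sketch only (hypothetical names, not the paper's implementation):
# a NEAT-style genome whose connection genes carry historical innovation numbers,
# plus an add-node mutation that complexifies the network by splitting a connection.
import random
from dataclasses import dataclass, field
from typing import List

_innovation_counter = 0

def next_innovation() -> int:
    """Global historical marking; NEAT uses such markings to align genomes during crossover."""
    global _innovation_counter
    _innovation_counter += 1
    return _innovation_counter

@dataclass
class ConnectionGene:
    in_node: int
    out_node: int
    weight: float
    enabled: bool = True
    innovation: int = 0

@dataclass
class Genome:
    num_nodes: int
    connections: List[ConnectionGene] = field(default_factory=list)

    def add_connection(self, in_node: int, out_node: int, weight: float) -> None:
        self.connections.append(
            ConnectionGene(in_node, out_node, weight, True, next_innovation()))

    def mutate_add_node(self) -> None:
        """Split a random enabled connection A->B into A->C->B with a new node C."""
        enabled = [c for c in self.connections if c.enabled]
        if not enabled:
            return
        old = random.choice(enabled)
        old.enabled = False                   # the split connection is disabled
        new_node = self.num_nodes             # index of the freshly added node
        self.num_nodes += 1
        self.add_connection(old.in_node, new_node, 1.0)          # A->C gets weight 1
        self.add_connection(new_node, old.out_node, old.weight)  # C->B keeps the old weight

# Usage: start from a minimal genome (two inputs, one output) and complexify it by one node.
g = Genome(num_nodes=3)
g.add_connection(0, 2, 0.5)
g.add_connection(1, 2, -0.3)
g.mutate_add_node()
print(len(g.connections), g.num_nodes)  # prints: 4 4 (four connection genes, one disabled; four nodes)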
Efficient evolution of neural network topologies
TLDR
A method, NeuroEvolution of Augmenting Topologies (NEAT), is presented that outperforms the best fixed-topology methods on a challenging benchmark reinforcement learning task and shows how it is possible for evolution to both optimize and complexify solutions simultaneously, making it possible to evolve increasingly complex solutions over time.
Advances in Neuroevolution through Augmenting Topologies – A Case Study
TLDR
An analysis of the efficiency and performance of the various algorithms which have been proposed for Topology and Weight Evolving Artificial Neural Networks (TWEANNs) will provide learners with a better overview of the past and current research trends in the field of Neuroevolution.
Competitive Coevolution through Evolutionary Complexification
TLDR
It is argued that complexification, i.e. the incremental elaboration of solutions through adding new structure, achieves both these goals and is demonstrated through the NeuroEvolution of Augmenting Topologies (NEAT) method, which evolves increasingly complex neural network architectures.
Using Genetic Algorithms to Evolve Artificial Neural Networks
TLDR
It is demonstrated that neuroevolution is an effective method to determine an optimal neural network topology and that appropriate parameter selection is critical in order to efficiently converge to an optimal topology.
An empirical comparison of evolution and coevolution for designing artificial neural network game players
  • M. Shi
  • Computer Science
  • GECCO '08
  • 2008
TLDR
A novel neurocoevolutionary algorithm, EEC, is proposed in this work, in which the connection weights and the connection paths of networks are evolved separately; it is also demonstrated that fully connected networks can generate noise that results in inefficient learning.
Neuroevolution through Augmenting Topologies Applied to Evolving Neural Networks to Play Othello
TLDR
A powerful new algorithm for neuroevolution, Neuro-Evolution for Augmenting Topologies (NEAT), is adapted to the game-playing domain, illustrating the necessity of the mobility strategy in defeating a powerful positional player in Othello.
Blocky Net: A New NeuroEvolution Method
TLDR
A new network called Blocky Net, with built-in feature selection and a limited maximum parameter space and complexity, is proposed; it performs better on 13 of the 20 datasets tested versus 2 for FS-NEAT, and is better than NEAT in all cases.
Evolving parsimonious networks by mixing activation functions
TLDR
This work extends the neuroevolution algorithm NEAT to evolve the activation function of neurons in addition to the topology and weights of the network, and shows that the heterogeneous networks produced are significantly smaller than homogeneous networks.
Meta-NEAT, meta-analysis of neuroevolving topologies
TLDR
Meta-NEAT offers a way to optimize the convergence rate of NEAT through an additional genetic algorithm built on top of NEAT, adding a layer that learns optimal hyper-parameter configurations in order to boost convergence.
Automatic Task Decomposition for the NeuroEvolution of Augmenting Topologies (NEAT) Algorithm
TLDR
An algorithm for evolving MFFN architectures based on the NeuroEvolution of Augmenting Topologies (NEAT) algorithm is presented, outlining an approach to automatically evolving, attributing fitness values to, and combining the task-specific networks in a principled manner.

References

Showing 1-10 of 88 references
Efficient Reinforcement Learning through Symbiotic Evolution
TLDR
A new reinforcement learning method called SANE (Symbiotic, Adaptive Neuro-Evolution), which evolves a population of neurons through genetic algorithms to form a neural network capable of performing a task, is presented.
Solving Non-Markovian Control Tasks with Neuro-Evolution
TLDR
This article demonstrates a neuroevolution system, Enforced Sub-populations (ESP), that is used to evolve a controller for the standard double pole task and a much harder, non-Markovian version, and introduces an incremental method that evolves on a sequence of tasks, and utilizes a local search technique (Delta-Coding) to sustain diversity.
An evolutionary algorithm that constructs recurrent neural networks
TLDR
It is argued that genetic algorithms are inappropriate for network acquisition and an evolutionary program is described, called GNARL, that simultaneously acquires both the structure and weights for recurrent networks.
Forming Neural Networks Through Efficient and Adaptive Coevolution
TLDR
The symbiotic adaptive neuroevolution system coevolves a population of neurons that cooperate to form a functioning neural network; it is shown to be more efficient, more adaptive, and to maintain higher levels of diversity than the more common network-based population approaches.
Evolving Optimal Neural Networks Using Genetic Algorithms with Occam's Razor
TLDR
This paper investigates an alternative evolutionary approach, breeder genetic programming (BGP), in which the architecture and the weights are optimized simultaneously, and in which the genotype of each network is represented as a tree whose depth and width are dynamically adapted to the particular application by specifically defined genetic operators.
Co-Evolutionary Learning by Automatic Modularisation with Speciation
TLDR
It is demonstrated why co-evolution can sometimes fail (and fail spectacularly) to cause the desired escalation of expertise, and how a simple gating algorithm for combining these different strategies into a single high-level strategy improves the generalisation ability of co-evolutionary learning.
Incremental Evolution of Complex General Behavior
TLDR
This article proposes an approach wherein complex general behavior is learned incrementally, by starting with simpler behavior and gradually making the task more challenging and general; this evolves more effective and more general behavior.
Connectionist Theory Refinement: Genetically Searching the Space of Network Topologies
TLDR
The Regent algorithm is presented, which uses domain-specific knowledge to help create an initial population of knowledge-based neural networks and genetic operators of crossover and mutation to continually search for better network topologies.
Robust non-linear control through neuroevolution
TLDR
This dissertation develops a methodology for solving real world control tasks consisting of an efficient neuroevolution algorithm that solves difficult non-linear control tasks by coevolving neurons, an incremental evolution method to scale the algorithm to the most challenging tasks, and a technique for making controllers robust so that they can transfer from simulation to the real world.
Genetic evolution of the topology and weight distribution of neural networks
  • V. Maniezzo
  • Computer Science, Medicine
  • IEEE Trans. Neural Networks
  • 1994
TLDR
This paper proposes a system based on a parallel genetic algorithm with enhanced encoding and operational abilities that has been applied to two widely different problem areas: Boolean function learning and robot control.