Autonomous Evolution of Topographic Regularities in Artificial Neural Networks

Jason Gauci and Kenneth O. Stanley, Neural Computation
Looking to nature as inspiration, for at least the past 25 years, researchers in the field of neuroevolution (NE) have developed evolutionary algorithms designed specifically to evolve artificial neural networks (ANNs). Yet the ANNs evolved through NE algorithms lack the distinctive characteristics of biological brains, perhaps explaining why NE is not yet a mainstream subject of neural computation. Motivated by this gap, this letter shows that when geometry is introduced to evolved ANNs… 

Towards Evolving More Brain-Like Artificial Neural Networks

The combined approach, adaptive ES-HyperNEAT, unifies for the first time in neuroevolution the abilities to indirectly encode connectivity through geometry, generate patterns of heterogeneous plasticity, and simultaneously encode the density and placement of nodes in space.

A unified approach to evolving plasticity and neural geometry

The most interesting aspect of this investigation is that the emergent neural structures are beginning to acquire more natural properties, which means that neuroevolution can begin to pose new problems and answer deeper questions about how brains evolved that are ultimately relevant to the field of AI as a whole.

Indirectly Encoding Neural Plasticity as a Pattern of Local Rules

This paper aims to show that learning rules can be effectively indirectly encoded by extending the Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) method to evolve large-scale adaptive ANNs, which is a major goal for neuroevolution.


This research develops a biologically inspired methodology for the automatic design of ANNs, using an artificial development system based on a parametric Lindenmayer system with memory, integrated with a Genetic Algorithm (GA) that simulates artificial evolution, allowing the generation of direct and recurrent ANN architectures with an optimal number of neurons and an appropriate topology.

Evolving Artificial Neural Networks through L-system and evolutionary computation

A biologically inspired neuroevolutionary algorithm is presented that can generate modular, hierarchical, and recurrent neural structures like those often found in the nervous systems of living beings, structures that enable them to solve intricate survival problems.

An Enhanced Hypercube-Based Encoding for Evolving the Placement, Density, and Connectivity of Neurons

ES-HyperNEAT significantly expands the scope of neural structures that evolution can discover by automatically deducing the node geometry from implicit information in the pattern of weights encoded by HyperNEAT, thereby avoiding the need to evolve explicit placement.

Designing neural networks through neuroevolution

This Review looks at several key aspects of modern neuroevolution, including large-scale computing, the benefits of novelty and diversity, the power of indirect encoding, and the field’s contributions to meta-learning and architecture search.

Safe mutations for deep and recurrent neural networks through output gradients

A family of safe mutation (SM) operators is proposed that facilitates exploration without dramatically altering network behavior or requiring additional interaction with the environment, greatly increasing the ability of a simple genetic-algorithm-based neuroevolution method to find solutions in high-dimensional domains that require deep and/or recurrent neural networks.
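As a rough illustration of the safe-mutation idea (not the paper's actual SM-G implementation), the sketch below scales each weight's perturbation by the inverse of its output sensitivity, which for a one-layer linear network can be computed analytically; the toy network, batch, and parameter names are all invented for this example.

```python
import random

def forward(w, x):
    # Single-layer linear net: one output per weight row.
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def safe_mutate(w, batch, sigma=0.1, eps=1e-8):
    # For a linear layer, d(output_i)/d(w[i][j]) = x[j], so each
    # weight's sensitivity is the mean |x[j]| over the batch.
    # Perturbations are scaled down where outputs are most sensitive,
    # keeping behavior on the batch roughly stable.
    n_in = len(w[0])
    sens = [sum(abs(x[j]) for x in batch) / len(batch) for j in range(n_in)]
    return [[wij + sigma * random.gauss(0.0, 1.0) / (sens[j] + eps)
             for j, wij in enumerate(row)] for row in w]

random.seed(0)
w = [[0.5, -0.3], [0.1, 0.8]]
batch = [[1.0, 0.01], [0.9, 0.02]]  # second input dimension rarely active
w2 = safe_mutate(w, batch)
y_before = forward(w, batch[0])
y_after = forward(w2, batch[0])
```

Weights attached to the rarely active input receive larger perturbations, since changing them barely affects behavior on the batch; the paper's gradient-based variants generalize this to deep and recurrent networks via backpropagated output gradients.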


A biologically inspired neuroevolutionary algorithm (NEA) is presented that evolves ANNs using these biological ideas as computational design techniques; the result is an optimized neural network architecture for solving classification problems.

Guided self-organization in indirectly encoded and evolving topographic maps

It is shown for the first time that the Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) method can be seeded to begin evolution with such lateral connectivity, enabling genuine self-organizing dynamics.



A Hypercube-Based Encoding for Evolving Large-Scale Neural Networks

The main conclusion is that the ability to explore the space of regular connectivity patterns opens up a new class of complex high-dimensional tasks to neuroevolution.

Efficient evolution of neural network topologies

A method, NeuroEvolution of Augmenting Topologies (NEAT), is presented that outperforms the best fixed-topology methods on a challenging benchmark reinforcement learning task and shows how evolution can both optimize and complexify solutions simultaneously, making it possible to evolve increasingly complex solutions over time.

Neuroevolution: from architectures to learning

This paper gives an overview of the most prominent methods for evolving ANNs with a special focus on recent advances in the synthesis of learning architectures.

Efficient Reinforcement Learning Through Evolving Neural Network Topologies

NEAT shows that when structure is evolved with a principled method of crossover, by protecting structural innovation, and through incremental growth from minimal structure, learning is significantly faster and stronger than with the best fixed-topology methods.

Generating large-scale neural networks through discovering geometric regularities

A method called Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) evolves a novel generative encoding, connective Compositional Pattern Producing Networks (connective CPPNs), to discover geometric regularities in the task domain, allowing the solution to both generalize and scale without loss of function to an ANN of over eight million connections.
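To make the encoding concrete, here is a minimal sketch of the substrate-query idea: each connection weight is read off a function of the two endpoint coordinates, so regularities in that function become regularities in connectivity. The hand-written `cppn` below stands in for an evolved CPPN and is purely illustrative; in HyperNEAT its topology and weights are themselves evolved.

```python
import math

def cppn(x1, y1, x2, y2):
    # Stand-in for an evolved CPPN: a fixed composition of smooth
    # functions of the connection's endpoint coordinates.
    h = math.sin(2.0 * (x1 - x2)) + math.cos(2.0 * (y1 - y2))
    return math.tanh(h)

def build_substrate(n, threshold=0.2):
    # Lay out n x n nodes on [-1, 1]^2 and query the CPPN once per
    # node pair; weak outputs are pruned (connection not expressed).
    coords = [(-1.0 + 2.0 * i / (n - 1), -1.0 + 2.0 * j / (n - 1))
              for i in range(n) for j in range(n)]
    connections = {}
    for (x1, y1) in coords:
        for (x2, y2) in coords:
            w = cppn(x1, y1, x2, y2)
            if abs(w) > threshold:
                connections[((x1, y1), (x2, y2))] = w
    return connections

net = build_substrate(5)
print(len(net))  # number of expressed connections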

Accelerated Neural Evolution through Cooperatively Coevolved Synapses

This paper compares a neuroevolution method called Cooperative Synapse Neuroevolution (CoSyNE), that uses cooperative coevolution at the level of individual synaptic weights, to a broad range of reinforcement learning algorithms on very difficult versions of the pole balancing problem that involve large state spaces and hidden state.

Evolving Neural Networks through Augmenting Topologies

A method is presented, NeuroEvolution of Augmenting Topologies (NEAT), which outperforms the best fixed-topology method on a challenging benchmark reinforcement learning task and shows how it is possible for evolution to both optimize and complexify solutions simultaneously.

A Developmental Model for the Evolution of Artificial Neural Networks

This work presents a model of decentralized growth and development for artificial neural networks (ANNs), inspired by developmental biology and the physiology of nervous systems, and demonstrates the power of the artificial chemistry by analyzing engineered genomes that lead to the growth of simple networks with behaviors known from physiology.

How novelty search escapes the deceptive trap of learning to learn

A way to escape the deceptive trap of static policies based on the novelty search algorithm is proposed, which opens up a new avenue in the evolution of adaptive systems because it can exploit the behavioral difference between learning and non-learning individuals.

Evolving Dynamical Neural Networks for Adaptive Behavior

It is demonstrated that continuous-time recurrent neural networks are a viable mechanism for adaptive agent control and that the genetic algorithm can be used to evolve effective neural controllers.