Constraining connectivity to encourage modularity in HyperNEAT

Phillip Verbancsics and Kenneth O. Stanley. In Proceedings of the Annual Conference on Genetic and Evolutionary Computation (GECCO).
A challenging goal of generative and developmental systems (GDS) is to effectively evolve neural networks as complex and capable as those found in nature. Two key properties of neural structures in nature are regularity and modularity. While HyperNEAT has proven capable of generating neural network connectivity patterns with regularities, its ability to evolve modularity remains in question. This paper investigates how altering the traditional approach to determining whether connections are… 
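The abstract is truncated, but the traditional expression rule it refers to is well documented: HyperNEAT queries a CPPN with the coordinates of each pair of substrate neurons and expresses a connection only when the CPPN output magnitude exceeds a threshold. The sketch below illustrates that baseline rule; the `toy_cppn`, threshold, and weight-scaling constants are illustrative stand-ins, not the paper's implementation.

```python
import math

def toy_cppn(x1, y1, x2, y2):
    """Stand-in for an evolved Compositional Pattern Producing Network.

    A real CPPN is an evolved composition of functions; this fixed formula
    merely produces a smooth pattern over neuron coordinates.
    """
    return math.sin(x1 * x2) + 0.5 * math.cos(y1 - y2)

def query_connection(cppn, src, dst, threshold=0.2, max_weight=3.0):
    """Return a connection weight, or None if the connection is not expressed.

    Follows HyperNEAT's traditional rule: express a connection only when the
    CPPN output magnitude exceeds `threshold`, then map the surplus magnitude
    into a weight capped at `max_weight`.
    """
    out = cppn(src[0], src[1], dst[0], dst[1])
    if abs(out) <= threshold:
        return None  # below threshold: connection is not expressed
    sign = 1.0 if out > 0 else -1.0
    weight = (abs(out) - threshold) / (1.0 - threshold) * max_weight
    return sign * min(weight, max_weight)

# Query every source/target pair on a small 5x5 two-dimensional substrate.
substrate = [(x / 2.0, y / 2.0) for x in range(-2, 3) for y in range(-2, 3)]
expressed = [
    (src, dst, w)
    for src in substrate
    for dst in substrate
    if (w := query_connection(toy_cppn, src, dst)) is not None
]
print(f"{len(expressed)} of {len(substrate) ** 2} candidate connections expressed")
```

The paper's contribution concerns altering this expression step rather than the weight computation, so the sketch shows only the conventional rule being modified.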


Guided self-organization in indirectly encoded and evolving topographic maps

It is shown for the first time that the Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) method can be seeded to begin evolution with such lateral connectivity, enabling genuine self-organizing dynamics.

Does Aligning Phenotypic and Genotypic Modularity Improve the Evolution of Neural Networks?

Results suggest that encouraging modularity in both the genotype and phenotype is an important step towards solving large-scale multi-modal problems, but also indicate that more research is required before structurally organized networks can be evolved to solve tasks that require multiple, different neural modules.

Evolving neural networks that are both modular and regular: HyperNEAT plus the connection cost technique

It is shown that adding the connection cost technique to HyperNEAT produces neural networks that are significantly more modular, regular, and higher performing than HyperNEAT without a connection cost, even when compared to a variant of HyperNEAT that was specifically designed to encourage modularity.

Guiding Neuroevolution with Structural Objectives

This work proposes two new structural objectives and tests their ability to guide evolving neural networks on two problems which can benefit from decomposition into subtasks and finds that both methods perform well on a problem with a very clear and decomposable structure.

An Enhanced Hypercube-Based Encoding for Evolving the Placement, Density, and Connectivity of Neurons

ES-HyperNEAT significantly expands the scope of neural structures that evolution can discover by automatically deducing the node geometry from implicit information in the pattern of weights encoded by HyperNEAT, thereby avoiding the need to evolve explicit placement.

The modularity in freeform evolving neural networks

This paper validates whether network modularity can emerge and whether evolutionary performance can be improved by varying the environment or the evolutionary process under a more freeform artificial evolution; an artificial tracer method is employed to quantify modularity.

Encouraging networks modularity by seeding motifs

The results indicate that modularity can be encouraged under certain conditions and that networks meeting a desired Z-score could be built.

The Evolutionary Origins of Hierarchy

The results suggest that the same force, the cost of connections, promotes the evolution of both hierarchy and modularity, and that these properties are important drivers of network performance and adaptability.

Spontaneous Evolution of Modularity in Neural Networks for Robot Locomotion

This work shows the evolution of modularity in neural networks, modelled as dynamically autonomous random Boolean networks, that control robots to exhibit phototaxis in a simulated environment; modularity emerged when selecting for a cyclic attractor in one part of the network and fixed-point attractors in the other part.

On the Relationships between Generative Encodings, Regularity, and Learning Abilities when Evolving Plastic Artificial Neural Networks

The results suggest that using a developmental encoding could improve the learning abilities of evolved, plastic neural networks, and reveal the consequence of the bias of developmental encodings towards regular structures.

Generating large-scale neural networks through discovering geometric regularities

A method, called Hypercube-based Neuroevolution of Augmenting Topologies (HyperNEAT), which evolves a novel generative encoding called connective Compositional Pattern Producing Networks (connective CPPNs) to discover geometric regularities in the task domain, allowing the solution to both generalize and scale without loss of function to an ANN of over eight million connections.

Investigating whether hyperNEAT produces modular neural networks

The first documented case of HyperNEAT producing a modular phenotype is presented, but the inability to encourage modularity on harder problems where modularity would have been beneficial suggests that more work is needed to increase the likelihood that HyperNEAT and similar algorithms produce modular ANNs in response to challenging, decomposable problems.

A Hypercube-Based Encoding for Evolving Large-Scale Neural Networks

The main conclusion is that the ability to explore the space of regular connectivity patterns opens up a new class of complex high-dimensional tasks to neuroevolution.

Spontaneous evolution of modularity and network motifs.

N. Kashtan and U. Alon. Proceedings of the National Academy of Sciences of the United States of America, 2005.

Light is shed on the evolutionary forces that promote structural simplicity in biological networks, and ways to improve the evolutionary design of engineered systems are offered.

Autonomous Evolution of Topographic Regularities in Artificial Neural Networks

This letter shows that when geometry is introduced to evolved ANNs through the hypercube-based neuroevolution of augmenting topologies algorithm, they begin to acquire characteristics that indeed are reminiscent of biological brains.

Evolving Neural Networks through Augmenting Topologies

A method is presented, NeuroEvolution of Augmenting Topologies (NEAT), which outperforms the best fixed-topology method on a challenging benchmark reinforcement learning task and shows how it is possible for evolution to both optimize and complexify solutions simultaneously.

On the Performance of Indirect Encoding Across the Continuum of Regularity

This paper presents the first comprehensive study showing that phenotypic regularity enables an indirect encoding to outperform direct encoding controls as problem regularity increases, and suggests a path forward that combines indirect encodings with a separate process of refinement.

Evolving coordinated quadruped gaits with the HyperNEAT generative encoding

It is demonstrated that HyperNEAT, a new and promising generative encoding for evolving neural networks, can evolve quadruped gaits without an engineer manually decomposing the problem.

A Taxonomy for Artificial Embryogeny

This taxonomy provides a unified context for long-term research in AE, so that implementation decisions can be compared and contrasted along known dimensions in the design space of embryogenic systems, and allows predicting how the settings of various AE parameters affect the capacity to efficiently evolve complex phenotypes.