Does Aligning Phenotypic and Genotypic Modularity Improve the Evolution of Neural Networks?

Joost Huizinga, Jean-Baptiste Mouret, and Jeff Clune. Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), 2016.
Many argue that to evolve artificial intelligence that rivals that of natural animals, we need to evolve neural networks that are structurally organized, in that they exhibit modularity, regularity, and hierarchy. It was recently shown that a cost for network connections, which encourages the evolution of modularity, can be combined with an indirect encoding, which encourages the evolution of regularity, to evolve networks that are both modular and regular. However, the bias towards regularity…


Guiding Neuroevolution with Structural Objectives

This work proposes two new structural objectives and tests their ability to guide evolving neural networks on two problems which can benefit from decomposition into subtasks and finds that both methods perform well on a problem with a very clear and decomposable structure.

Modularity in NEAT Reinforcement Learning Networks

It was shown that the ideal level of network modularity in the explored parameter space is highly dependent on other network variables, dispelling theories that modularity has a straightforward relationship to network performance and demonstrating that rewarding modularity directly did not improve fitness.

Evolving Neural Networks through a Reverse Encoding Tree

This paper advances a method that incorporates a type of topological edge coding, named Reverse Encoding Tree (RET), for efficiently evolving scalable neural networks, and demonstrates that RET opens up potential future research directions in dynamic environments.


  • Computer Science
  • 2018
The weights of a DNN are evolved with a simple, gradient-free, population-based genetic algorithm (GA), which performs well on hard deep RL problems, including Atari and humanoid locomotion, demonstrating the scale at which GAs can operate.
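The core loop of such a gradient-free, population-based GA is simple. The following is a minimal sketch, assuming a toy fitness function (distance to a fixed target vector standing in for RL reward) and illustrative hyperparameters; it is not the paper's implementation.

```python
import random

# Hypothetical toy setup: evolve a flat weight vector toward a fixed target,
# standing in for "fitness" on an RL task. All names here are illustrative.
TARGET = [0.5, -1.2, 3.0, 0.0]

def fitness(weights):
    # Higher is better: negative squared distance to the target vector.
    return -sum((w - t) ** 2 for w, t in zip(weights, TARGET))

def mutate(weights, rng, sigma=0.1):
    # Gaussian perturbation of every weight; no gradients involved.
    return [w + rng.gauss(0.0, sigma) for w in weights]

def evolve(pop_size=50, elite=10, generations=200, seed=0):
    rng = random.Random(seed)
    population = [[rng.uniform(-1, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        elites = population[:elite]
        # Truncation selection: offspring are mutated copies of the elites.
        population = elites + [mutate(rng.choice(elites), rng)
                               for _ in range(pop_size - elite)]
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # approaches 0 as the population converges on TARGET
```

The same loop scales to DNN weight vectors by replacing the toy fitness with episode return and storing genomes compactly (e.g., as seeds of the mutations applied).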

Designing neural networks through neuroevolution

This Review looks at several key aspects of modern neuroevolution, including large-scale computing, the benefits of novelty and diversity, the power of indirect encoding, and the field’s contributions to meta-learning and architecture search.

Balancing selection pressures, multiple objectives, and neural modularity to coevolve cooperative agent behavior

Results demonstrate that fitness functions rewarding individual behavior are superior to those rewarding team behavior, even on a cooperative task, and that networks with multiple modules can discover intelligent behavior regardless of which type of objective is used.

Population network structure impacts genetic algorithm optimisation performance

  • A. Vié
  • Computer Science
  • GECCO Companion
  • 2021
The Networked Genetic Algorithm (NGA) is introduced to evaluate how various random and scale-free population networks influence the optimisation performance of GAs on benchmark functions, showing significant variation in NGA performance as the network varies.

Deep Neuroevolution: Genetic Algorithms Are a Competitive Alternative for Training Deep Neural Networks for Reinforcement Learning

It is shown that combining DNNs with novelty search, which was designed to encourage exploration on tasks with deceptive or sparse reward functions, can solve a high-dimensional problem on which reward-maximizing algorithms fail, and expands the sense of the scale at which GAs can operate.

Acetyl-modulated architecture for evolutionary robotics

A model for controlling evolutionary robots, inspired by the effects of the acetylcholine neurotransmitter, chemical synapses, and Renshaw cells, and based on artificial neural networks, is presented.

Evolutionary architecture search for deep multitask networks

A synergetic approach of evolving custom routings with evolved, shared modules for each task is found to be very powerful, significantly improving the state of the art in the Omniglot multitask, multialphabet character recognition domain.



Evolving neural networks that are both modular and regular: HyperNEAT plus the connection cost technique

It is shown that adding the connection cost technique to HyperNEAT produces neural networks that are significantly more modular, regular, and higher performing than HyperNEAT without a connection cost, even when compared to a variant of HyperNEAT that was specifically designed to encourage modularity.

The Evolutionary Origins of Hierarchy

The results suggest that the same force, the cost of connections, promotes the evolution of both hierarchy and modularity, and that these properties are important drivers of network performance and adaptability.

Constraining connectivity to encourage modularity in HyperNEAT

This paper investigates how altering the traditional approach to determining whether connections are expressed in HyperNEAT influences modularity, and provides an important clue to how an indirect encoding of network structure can be encouraged to evolve modularity.

The evolutionary origins of modularity

It is demonstrated that the ubiquitous, direct selection pressure to reduce the cost of connections between network nodes causes the emergence of modular networks.
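In practice, the connection-cost pressure is usually applied as a second objective alongside performance, and selection keeps Pareto-nondominated networks. A minimal sketch of that dominance test follows; the network representation and function names are assumptions, not the paper's code.

```python
# Two-objective view of the connection-cost technique:
# maximize task performance, minimize the cost of connections.

def connection_cost(connections):
    # Simplest possible cost: the number of expressed connections.
    return len(connections)

def dominates(a, b):
    """True if candidate `a` = (performance, cost) Pareto-dominates `b`:
    at least as good on both objectives and strictly better on one."""
    perf_a, cost_a = a
    perf_b, cost_b = b
    return (perf_a >= perf_b and cost_a <= cost_b) and \
           (perf_a > perf_b or cost_a < cost_b)

# Example: equal performance, fewer connections -> the sparser network wins.
dense  = (0.9, connection_cost([(0, 2), (0, 3), (1, 2), (1, 3)]))
sparse = (0.9, connection_cost([(0, 2), (1, 3)]))
print(dominates(sparse, dense))  # True
```

Under this selection scheme, sparser (and, empirically, more modular) networks survive whenever sparsity costs nothing in performance.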

Modularity in Evolution: Some Low-Level Questions

With this framework, the evolutionary advantages that have been attributed to modularity do not derive from modularity per se; rather, they require that there be an "alignment" between the spaces of phenotypic variation and the selection gradients that are available to the organism.

On the Performance of Indirect Encoding Across the Continuum of Regularity

This paper presents the first comprehensive study showing that phenotypic regularity enables an indirect encoding to outperform direct encoding controls as problem regularity increases, and suggests a path forward that combines indirect encodings with a separate process of refinement.

Evolving modular neural-networks through exaptation

The results show that the proposed method efficiently evolves neural networks to solve this task, and the prominent role of multiple selection pressures contradicts the basic assumption underlying most published modular methods for the evolution of neural networks, in which only the modularity of the genotype is considered.

Compositional pattern producing networks: A novel abstraction of development

Results produced with CPPNs through interactive evolution of two-dimensional images show that such an encoding can nevertheless produce structural motifs often attributed to more conventional developmental abstractions, suggesting that local interaction may not be essential to the desirable properties of natural encoding in the way that is usually assumed.
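The key idea of a CPPN is that it is a composition of simple functions queried at every coordinate, so symmetry and repetition fall out of the functions chosen. The sketch below hand-wires one such composition; the particular functions and coefficients are assumptions for illustration, whereas a real CPPN's composition is evolved.

```python
import math

# A minimal, hand-wired CPPN sketch: a composition of simple functions
# (absolute value, Gaussian, sine) queried at every (x, y) coordinate.
def gaussian(z):
    return math.exp(-z * z)

def cppn(x, y):
    # Symmetry comes from |x|, repetition from sin, locality from the Gaussian.
    return gaussian(abs(x)) * math.sin(4.0 * y)

# Query the CPPN over a small grid, as an indirect encoding would do to
# generate an image (or, in HyperNEAT, a connectivity pattern).
grid = [[cppn(x / 4.0, y / 4.0) for x in range(-4, 5)] for y in range(-4, 5)]

# The pattern is symmetric about x = 0 because the CPPN only sees |x|.
print(grid[0][0] == grid[0][8])  # True
```

No developmental simulation of local interactions is involved; the motifs emerge purely from the functional composition, which is the paper's point.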

Neural Modularity Helps Organisms Evolve to Learn New Skills without Forgetting Old Skills

It is suggested that encouraging modularity in neural networks may help to overcome the long-standing barrier of networks that cannot learn new skills without forgetting old ones, and that one benefit of the modularity ubiquitous in the brains of natural animals might be to alleviate the problem of catastrophic forgetting.

Evolving coordinated quadruped gaits with the HyperNEAT generative encoding

It is demonstrated that HyperNEAT, a new and promising generative encoding for evolving neural networks, can evolve quadruped gaits without an engineer manually decomposing the problem.