A Hypercube-Based Encoding for Evolving Large-Scale Neural Networks

@article{Stanley2009AHE,
  title={A Hypercube-Based Encoding for Evolving Large-Scale Neural Networks},
  author={Kenneth O. Stanley and David B. D'Ambrosio and Jason Gauci},
  journal={Artificial Life},
  year={2009},
  volume={15},
  pages={185--212}
}
Research in neuroevolution, that is, the evolution of artificial neural networks (ANNs) through evolutionary algorithms, is inspired by the evolution of biological brains, which can contain trillions of connections. Yet while neuroevolution has produced successful results, the scale of natural brains remains far beyond reach. This article presents a method called hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) that aims to narrow this gap. HyperNEAT employs an indirect encoding called… 
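The core idea of the indirect encoding can be illustrated with a minimal sketch: a small function standing in for an evolved CPPN is queried with the coordinates of every pair of substrate neurons, and the returned value becomes that connection's weight. The `cppn` function and the threshold value below are illustrative assumptions, not the paper's actual evolved networks.

```python
import math

# Hypothetical stand-in for an evolved CPPN: in HyperNEAT this network
# is itself evolved with NEAT; here it is a fixed function of the
# coordinates of a source neuron (x1, y1) and a target neuron (x2, y2).
def cppn(x1, y1, x2, y2):
    return math.sin(x1 * x2) * math.exp(-((y1 - y2) ** 2))

# Substrate: neurons laid out in geometric space. Every potential
# connection weight is read off the CPPN as a function of its
# endpoints' coordinates, i.e. a point in the 4-D hypercube.
coords = [(-1.0, 0.0), (0.0, 0.0), (1.0, 0.0)]
threshold = 0.2  # connections with small |weight| are not expressed

weights = {}
for (x1, y1) in coords:
    for (x2, y2) in coords:
        w = cppn(x1, y1, x2, y2)
        if abs(w) > threshold:
            weights[((x1, y1), (x2, y2))] = w
```

Because the weight pattern is a function of geometry rather than a list of individual genes, the same compact encoding can paint connectivity across a substrate of any resolution.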

An Enhanced Hypercube-Based Encoding for Evolving the Placement, Density, and Connectivity of Neurons

ES-HyperNEAT significantly expands the scope of neural structures that evolution can discover by automatically deducing the node geometry from implicit information in the pattern of weights encoded by HyperNEAT, thereby avoiding the need to evolve explicit placement.

Autonomous Evolution of Topographic Regularities in Artificial Neural Networks

This letter shows that when geometry is introduced to evolved ANNs through the hypercube-based neuroevolution of augmenting topologies algorithm, they begin to acquire characteristics that indeed are reminiscent of biological brains.

Indirectly Encoding Neural Plasticity as a Pattern of Local Rules

This paper aims to show that learning rules can be effectively indirectly encoded by extending the Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) method to evolve large-scale adaptive ANNs, which is a major goal for neuroevolution.

Enhancing ES-HyperNEAT to evolve more complex regular neural networks

Iterated ES-HyperNEAT not only matches but outperforms the original HyperNEAT in more complex domains, because ES-HyperNEAT can evolve networks with limited connectivity, elaborate on existing network structure, and compensate for movement of information within the hypercube.

Evolving the placement and density of neurons in the HyperNEAT substrate

An extension called evolvable-substrate HyperNEAT (ES-HyperNEAT) is introduced that determines the placement and density of the hidden nodes based on a quadtree-like decomposition of the hypercube of weights and a novel insight about the relationship between connectivity and node placement.
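The quadtree-style decomposition can be sketched as a recursive subdivision driven by variance in the weight pattern: regions where the pattern varies are split further, and candidate hidden nodes are placed at the centres of the resulting leaves. The `weight` function, thresholds, and depth limit below are illustrative assumptions, not ES-HyperNEAT's exact parameters.

```python
import math
import statistics

def weight(x, y):
    # Stand-in for querying the CPPN-encoded weight pattern at (x, y).
    return math.sin(3 * x) * math.cos(3 * y)

def divide(cx, cy, size, depth, max_depth=4, var_threshold=0.001):
    """Return centre points of leaf squares after variance-driven splitting."""
    if depth >= max_depth:
        return [(cx, cy)]
    half = size / 2
    children = [(cx - half, cy - half), (cx + half, cy - half),
                (cx - half, cy + half), (cx + half, cy + half)]
    values = [weight(x, y) for x, y in children]
    if statistics.pvariance(values) < var_threshold:
        return [(cx, cy)]  # near-uniform region: no information, stop splitting
    points = []
    for x, y in children:
        points.extend(divide(x, y, half, depth + 1, max_depth, var_threshold))
    return points

# Node candidates cluster where the weight pattern carries information.
nodes = divide(0.0, 0.0, 1.0, 0)
```

The key insight this mirrors is that node placement need not be specified explicitly: wherever the encoded weight pattern contains variation, there is information worth expressing as neurons.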

A unified approach to evolving plasticity and neural geometry

The most interesting aspect of this investigation is that the emergent neural structures are beginning to acquire more natural properties, which means that neuroevolution can begin to pose new problems and answer deeper questions about how brains evolved that are ultimately relevant to the field of AI as a whole.

Evolving neural networks that are both modular and regular: HyperNEAT plus the connection cost technique

It is shown that adding the connection cost technique to HyperNEAT produces neural networks that are significantly more modular, regular, and higher performing than HyperNEAT without a connection cost, even when compared to a variant of HyperNEAT that was specifically designed to encourage modularity.

Evolving neural fields for problems with large input and output spaces

Designing neural networks through neuroevolution

This Review looks at several key aspects of modern neuroevolution, including large-scale computing, the benefits of novelty and diversity, the power of indirect encoding, and the field’s contributions to meta-learning and architecture search.

Evolving Artificial Neural Networks using Cartesian Genetic Programming

This thesis extends Cartesian Genetic Programming such that it can represent recurrent program structures allowing for the creation of recurrent Artificial Neural Networks and is demonstrated to be extremely competitive in the domain of series forecasting.
...

References


Evolving Neural Networks through Augmenting Topologies

A method is presented, NeuroEvolution of Augmenting Topologies (NEAT), which outperforms the best fixed-topology method on a challenging benchmark reinforcement learning task and shows how it is possible for evolution to both optimize and complexify solutions simultaneously.

An evolutionary algorithm that constructs recurrent neural networks

It is argued that genetic algorithms are inappropriate for network acquisition and an evolutionary program is described, called GNARL, that simultaneously acquires both the structure and weights for recurrent networks.

Competitive Coevolution through Evolutionary Complexification

It is argued that complexification, i.e., the incremental elaboration of solutions through adding new structure, achieves both these goals, and this is demonstrated through the NeuroEvolution of Augmenting Topologies (NEAT) method, which evolves increasingly complex neural network architectures.

A comparison between cellular encoding and direct encoding for genetic neural networks

This paper compares the efficiency of two encoding schemes for Artificial Neural Networks optimized by evolutionary algorithms and solves a more difficult problem: balancing two poles when no information about the velocity is provided as input.

Solving Non-Markovian Control Tasks with Neuro-Evolution

This article demonstrates a neuroevolution system, Enforced Sub-populations (ESP), that is used to evolve a controller for the standard double pole task and a much harder, non-Markovian version, and introduces an incremental method that evolves on a sequence of tasks, and utilizes a local search technique (Delta-Coding) to sustain diversity.

Compositional pattern producing networks: A novel abstraction of development

Results produced with CPPNs through interactive evolution of two-dimensional images show that such an encoding can nevertheless produce structural motifs often attributed to more conventional developmental abstractions, suggesting that local interaction may not be essential to the desirable properties of natural encoding in the way that is usually assumed.
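The structural motifs CPPNs produce follow from function composition: a symmetric activation (e.g., Gaussian) yields symmetry, a periodic one (e.g., sine) yields repetition, and composing them yields repetition with symmetry. A minimal one-dimensional sketch, with a hypothetical fixed two-node CPPN rather than an evolved one:

```python
import math

# Hypothetical two-node CPPN: a Gaussian node feeding a sine node.
def cppn(x):
    g = math.exp(-x * x)   # Gaussian: symmetric about x = 0
    return math.sin(5 * g)  # sine: introduces repetition

# Sample the pattern over [-1, 1]; because the Gaussian depends only
# on x*x, the output is exactly mirror-symmetric about the origin.
pattern = [cppn(x / 10) for x in range(-10, 11)]
```

No developmental process with local interactions is simulated; the regularities come directly from the algebra of the composed functions.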

A Taxonomy for Artificial Embryogeny

This taxonomy provides a unified context for long-term research in AE, so that implementation decisions can be compared and contrasted along known dimensions in the design space of embryogenic systems, and allows predicting how the settings of various AE parameters affect the capacity to efficiently evolve complex phenotypes.

Evolving better representations through selective genome growth

L. Altenberg. Proceedings of the First IEEE Conference on Evolutionary Computation (IEEE World Congress on Computational Intelligence), 1994.
A new method is described in which the degrees of freedom of the representation are increased incrementally, creating genotype-phenotype maps that are exquisitely tuned to the specifics of the epistatic fitness function, and adaptive landscapes that are much smoother than generic NK landscapes with the same genotypes.

Evolving a neurocontroller through a process of embryogeny

The New AI hypothesizes that intelligent behaviour must be understood within the framework provided by the agent’s physical interactions with the environment: subjective sensations and bodily interactions, and proposes a bottom-up exploration, which starts from the lowest adaptive mechanisms to reach the topmost cognitive abilities.

Creating High-Level Components with a Generative Representation for Body-Brain Evolution

Applying GENRE to the task of evolving robots for locomotion and comparing it against a non-generative (direct) representation shows that the generative representation system rapidly produces robots with significantly greater fitness.
...