An Enhanced Hypercube-Based Encoding for Evolving the Placement, Density, and Connectivity of Neurons

@article{Risi2012AnEH,
  title={An Enhanced Hypercube-Based Encoding for Evolving the Placement, Density, and Connectivity of Neurons},
  author={Sebastian Risi and Kenneth O. Stanley},
  journal={Artificial Life},
  year={2012},
  volume={18},
  pages={331--363}
}
Intelligence in nature is the product of living brains, which are themselves the product of natural evolution. […] The main conclusion is that ES-HyperNEAT significantly expands the scope of neural structures that evolution can discover.

HyperNEAT: The First Five Years

This chapter reviews the first five years of research building upon this approach and culminates with thoughts on promising future directions.

Deep HyperNEAT: Evolving the Size and Depth of the Substrate

This report describes DeepHyperNEAT, an extension of HyperNEAT that allows it to alter the topology of its indirectly encoded neural network so that it can continue to grow and increase in complexity over evolution.

Guided self-organization in indirectly encoded and evolving topographic maps

It is shown for the first time that the Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) method can be seeded to begin evolution with such lateral connectivity, enabling genuine self-organizing dynamics.

On the automated, evolutionary design of neural networks: past, present, and future

This work aims to provide a complete reference of all works related to the neuroevolution of convolutional neural networks to date; to the best of the authors' knowledge, it is the most complete survey of the literature in this field.

Evolving Artificial Neural Networks through L-system and evolutionary computation

A biologically inspired neuro-evolutive algorithm is presented that is able to generate modular, hierarchical, and recurrent neural structures, such as those often found in the nervous systems of living beings, which enable them to solve intricate survival problems.

Evolution of Cartesian Genetic Programs for Development of Learning Neural Architecture

The goal of this paper is to use evolutionary approaches to find suitable computational functions that are analogous to natural sub-components of biological neurons and demonstrate that intelligent behavior can be produced as a result of this additional biological plausibility.

Artificial Development and Evolution of Artificial Neural Networks Using Parametric L-Systems with Memory

This research develops a biologically inspired methodology for the automatic design of ANNs, using an artificial development system based on a parametric Lindenmayer system with memory, integrated with a genetic algorithm (GA) that simulates artificial evolution, allowing the generation of direct and recurrent ANN architectures with an optimal number of neurons and appropriate topology.

Using Indirect Encoding of Multiple Brains to Produce Multimodal Behavior

Novel multimodal extensions to HyperNEAT, a popular indirect encoding for evolving many brains without assuming geometric relationships between them, are introduced. It is shown that multi-brain approaches are more effective than HyperNEAT without multimodal extensions, and that brains without a geometric relation to each other outperform situational policy geometry.

A hybrid neuro-evolutive algorithm for neural network optimization

This paper proposes a hybrid neuro-evolutive algorithm (NEA) that uses a compact indirect encoding scheme (IES) for representing its genotypes and, moreover, has the ability to reuse the genotypes and

A Neurogenetic Algorithm Based on Rational Agents

A biologically inspired NEA that evolves ANNs using these ideas as computational design techniques is presented; the result is an optimized neural network architecture for solving classification problems.
...

References


A Hypercube-Based Encoding for Evolving Large-Scale Neural Networks

The main conclusion is that the ability to explore the space of regular connectivity patterns opens up a new class of complex high-dimensional tasks to neuroevolution.

Evolving the placement and density of neurons in the HyperNEAT substrate

An extension called evolvable-substrate HyperNEAT (ES-HyperNEAT) is introduced that determines the placement and density of the hidden nodes based on a quadtree-like decomposition of the hypercube of weights and a novel insight about the relationship between connectivity and node placement.
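The quadtree-driven density selection described in this snippet can be sketched in miniature. The following is an illustrative reconstruction, not the paper's implementation: `cppn`, `QuadNode`, `divide`, and the variance threshold are all assumed names and values, and the actual ES-HyperNEAT algorithm additionally queries connection weights across the full hypercube rather than a toy 2-D pattern.

```python
# Hedged sketch: subdivide the substrate where the CPPN's output pattern
# varies, so hidden-node density follows information density.
import math
from dataclasses import dataclass, field

@dataclass
class QuadNode:
    x: float
    y: float
    width: float
    weight: float = 0.0
    children: list = field(default_factory=list)

def divide(cppn, node, depth, max_depth, variance_threshold=0.001):
    """Recursively subdivide a square region while the CPPN output varies."""
    node.weight = cppn(node.x, node.y)
    if depth >= max_depth:
        return
    half = node.width / 2
    for dx in (-half / 2, half / 2):
        for dy in (-half / 2, half / 2):
            node.children.append(QuadNode(node.x + dx, node.y + dy, half))
    for c in node.children:
        c.weight = cppn(c.x, c.y)
    mean = sum(c.weight for c in node.children) / 4
    variance = sum((c.weight - mean) ** 2 for c in node.children) / 4
    if variance > variance_threshold:
        for c in node.children:
            divide(cppn, c, depth + 1, max_depth)
    else:
        node.children = []  # region is uniform: place no extra nodes here

def leaves(node):
    """Collect leaf regions; their centers become candidate node positions."""
    if not node.children:
        return [node]
    out = []
    for c in node.children:
        out += leaves(c)
    return out

# Usage with a toy stand-in CPPN whose output varies mostly near the origin.
cppn = lambda x, y: math.sin(10 * x) * math.exp(-5 * (x * x + y * y))
root = QuadNode(0.0, 0.0, 2.0)  # substrate spans [-1, 1]^2
divide(cppn, root, 1, max_depth=4)
points = leaves(root)
print(len(points))  # more leaves where the pattern varies, fewer where flat
```

The design choice this illustrates is the paper's key insight: node placement is not evolved separately but read off the same hypercube pattern that encodes connectivity, with resolution concentrated where that pattern carries information.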

Autonomous Evolution of Topographic Regularities in Artificial Neural Networks

This letter shows that when geometry is introduced to evolved ANNs through the hypercube-based neuroevolution of augmenting topologies algorithm, they begin to acquire characteristics that indeed are reminiscent of biological brains.

Enhancing ES-HyperNEAT to evolve more complex regular neural networks

Iterated ES-HyperNEAT not only matches but outperforms the original HyperNEAT in more complex domains, because ES-HyperNEAT can evolve networks with limited connectivity, elaborate on existing network structure, and compensate for movement of information within the hypercube.

Indirectly Encoding Neural Plasticity as a Pattern of Local Rules

This paper aims to show that learning rules can be effectively indirectly encoded by extending the Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) method to evolve large-scale adaptive ANNs, which is a major goal for neuroevolution.

Evolution of Cartesian Genetic Programs for Development of Learning Neural Architecture

The goal of this paper is to use evolutionary approaches to find suitable computational functions that are analogous to natural sub-components of biological neurons and demonstrate that intelligent behavior can be produced as a result of this additional biological plausibility.

Constraining connectivity to encourage modularity in HyperNEAT

This paper investigates how altering the traditional approach to determining whether connections are expressed in HyperNEAT influences modularity, and provides an important clue to how an indirect encoding of network structure can be encouraged to evolve modularity.

Investigating whether HyperNEAT produces modular neural networks

The first documented case of HyperNEAT producing a modular phenotype is presented, but the inability to encourage modularity on harder problems where modularity would have been beneficial suggests that more work is needed to increase the likelihood that HyperNEAT and similar algorithms produce modular ANNs in response to challenging, decomposable problems.

On the Performance of Indirect Encoding Across the Continuum of Regularity

This paper presents the first comprehensive study showing that phenotypic regularity enables an indirect encoding to outperform direct encoding controls as problem regularity increases, and suggests a path forward that combines indirect encodings with a separate process of refinement.
...