• Corpus ID: 7335456

A Hypercube-Based Indirect Encoding for Evolving Large-Scale Neural Networks

@inproceedings{Stanley2009AHI,
  title={A Hypercube-Based Indirect Encoding for Evolving Large-Scale Neural Networks},
  author={Kenneth O. Stanley},
  year={2009}
}
Research in neuroevolution, i.e., evolving artificial neural networks (ANNs) through evolutionary algorithms, is inspired by the evolution of biological brains. Because natural evolution discovered intelligent brains with billions of neurons and trillions of connections, perhaps neuroevolution can do the same. Yet while neuroevolution has produced successful results in a variety of domains, the scale of natural brains remains far beyond reach. This paper presents a method called Hypercube-based… 
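The core idea of the hypercube-based encoding can be sketched in a few lines: every connection weight between two neurons placed at substrate coordinates (x1, y1) and (x2, y2) is obtained by querying a single function of the four coordinates. In HyperNEAT that function is an evolved CPPN; the `cppn` below is a hand-written stand-in used purely for illustration, and the expression threshold value is an assumption.

```python
import math

def cppn(x1, y1, x2, y2):
    # Stand-in for an evolved CPPN: any composition of simple
    # functions of the four endpoint coordinates works here.
    return math.sin(3 * x1) * math.cos(3 * y2) - 0.5 * (x2 - x1)

def build_substrate_weights(nodes, threshold=0.2):
    """Query the CPPN for every ordered node pair; express only
    connections whose weight magnitude exceeds the threshold."""
    weights = {}
    for (x1, y1) in nodes:
        for (x2, y2) in nodes:
            w = cppn(x1, y1, x2, y2)
            if abs(w) > threshold:
                weights[((x1, y1), (x2, y2))] = w
    return weights

# A 3x3 grid of neurons laid out in [-1, 1]^2.
grid = [(x, y) for x in (-1.0, 0.0, 1.0) for y in (-1.0, 0.0, 1.0)]
conn = build_substrate_weights(grid)
```

Because the weight pattern is a function of geometry, the same CPPN can be queried at any substrate resolution, which is what lets the encoding scale to very large networks.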

An Enhanced Hypercube-Based Encoding for Evolving the Placement, Density, and Connectivity of Neurons

ES-HyperNEAT significantly expands the scope of neural structures that evolution can discover by automatically deducing the node geometry from implicit information in the pattern of weights encoded by HyperNEAT, thereby avoiding the need to evolve explicit placement.

Towards Evolving More Brain-Like Artificial Neural Networks

The combined approach, adaptive ES-HyperNEAT, unifies for the first time in neuroevolution the abilities to indirectly encode connectivity through geometry, generate patterns of heterogeneous plasticity, and simultaneously encode the density and placement of nodes in space.

Indirectly Encoding Neural Plasticity as a Pattern of Local Rules

This paper aims to show that learning rules can be effectively indirectly encoded by extending the Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) method to evolve large-scale adaptive ANNs, which is a major goal for neuroevolution.

A unified approach to evolving plasticity and neural geometry

The most interesting aspect of this investigation is that the emergent neural structures are beginning to acquire more natural properties, which means that neuroevolution can begin to pose new problems and answer deeper questions about how brains evolved that are ultimately relevant to the field of AI as a whole.

Designing neural networks through neuroevolution

This Review looks at several key aspects of modern neuroevolution, including large-scale computing, the benefits of novelty and diversity, the power of indirect encoding, and the field’s contributions to meta-learning and architecture search.

Enhancing es-hyperneat to evolve more complex regular neural networks

Iterated ES-HyperNEAT not only matches but outperforms the original HyperNEAT in more complex domains because ES-HyperNEAT can evolve networks with limited connectivity, elaborate on existing network structure, and compensate for movement of information within the hypercube.

Automatic synthesis of working memory neural networks with neuroevolution methods

The EvoNeuro encoding, an encoding directly inspired by computational neuroscience, is considered; its efficiency is tested on a working memory task, namely the AX-CPT task, and the networks it generates are more versatile and can adapt to a new task through simple parameter optimization.

Safe mutations for deep and recurrent neural networks through output gradients

A family of safe mutation (SM) operators is proposed that facilitates exploration without drastically altering network behavior or requiring additional interaction with the environment; these operators dramatically increase the ability of a simple genetic-algorithm-based neuroevolution method to find solutions in high-dimensional domains that require deep and/or recurrent neural networks.
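A minimal sketch of the gradient-scaled idea, under simplifying assumptions: for a linear model y = Σ w_i·x_i the output gradient with respect to each weight is just x_i, so each weight's perturbation can be scaled down in proportion to how sensitive the output is to that weight. This toy version (the `safe_mutate` helper and the `eps` floor are illustrative, not the paper's exact SM-G operator) captures why sensitive weights receive smaller mutations.

```python
import random

def output_sensitivity(weights, inputs):
    # For a linear model y = sum(w_i * x_i), dy/dw_i = x_i, so the
    # mean absolute input over a reference batch measures how strongly
    # the output responds to each weight.
    n = len(weights)
    sens = [0.0] * n
    for x in inputs:
        for i in range(n):
            sens[i] += abs(x[i])
    return [s / len(inputs) for s in sens]

def safe_mutate(weights, inputs, sigma=0.1, eps=1e-3):
    """Perturb each weight with Gaussian noise scaled inversely to its
    output sensitivity: sensitive weights move less, so the mutated
    network's behavior stays close to the parent's."""
    sens = output_sensitivity(weights, inputs)
    return [w + random.gauss(0.0, sigma) / (s + eps)
            for w, s in zip(weights, sens)]
```

With a batch whose first input dimension is large and second is small, the first (sensitive) weight receives much smaller perturbations than the second, which is exactly the "safe" behavior the summary describes.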

Learning From Geometry In Learning For Tactical And Strategic Decision Domains

This dissertation presents a new NE algorithm called Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT), based on a novel indirect encoding of ANNs, designed to work in tactical and strategic decision domains.

Deep Neuroevolution: Genetic Algorithms

  • Computer Science
  • 2018
The weights of a DNN are evolved with a simple, gradient-free, population-based genetic algorithm (GA) and it performs well on hard deep RL problems, including Atari and humanoid locomotion, demonstrating the scale at which GAs can operate.
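The "simple, gradient-free, population-based GA" in that summary can be sketched in a few lines: truncation selection over flat weight vectors plus per-gene Gaussian mutation, with no crossover or gradients. The fitness function, population size, and mutation scale below are toy placeholders, not the paper's settings.

```python
import random

def evolve(fitness, dim, pop_size=50, generations=100, sigma=0.05, elite=10):
    """Minimal gradient-free GA: keep the top `elite` weight vectors each
    generation and refill the population with Gaussian-mutated copies."""
    pop = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:elite]
        pop = parents + [
            [w + random.gauss(0, sigma) for w in random.choice(parents)]
            for _ in range(pop_size - elite)
        ]
    return max(pop, key=fitness)

# Toy stand-in fitness: maximize the negative squared distance to a target
# vector (a real application would roll out a policy in an environment).
target = [0.5, -0.3, 0.8]
best = evolve(lambda w: -sum((wi - ti) ** 2 for wi, ti in zip(w, target)),
              dim=3)
```

The same loop scales to millions of weights per individual because it only needs forward evaluations, which is the point the citing paper demonstrates on Atari and humanoid locomotion.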
...

References

Showing 1-10 of 63 references

Evolving Neural Networks through Augmenting Topologies

A method is presented, NeuroEvolution of Augmenting Topologies (NEAT), which outperforms the best fixed-topology method on a challenging benchmark reinforcement learning task and shows how it is possible for evolution to both optimize and complexify solutions simultaneously.
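NEAT's complexification can be illustrated by its add-node structural mutation: an existing connection is disabled and replaced by a new node with two new connections, arranged so behavior is preserved at the moment of mutation. The genome layout below (a dict of node ids and `(src, dst, weight, enabled)` tuples) is a simplified illustration, omitting NEAT's innovation numbers.

```python
def add_node(genome, conn_index):
    """NEAT-style add-node mutation: disable an existing connection and
    insert a new node in its place. The incoming link gets weight 1.0 and
    the outgoing link inherits the old weight, so the network computes
    (approximately) the same function immediately after the mutation."""
    src, dst, weight, _enabled = genome['conns'][conn_index]
    genome['conns'][conn_index] = (src, dst, weight, False)
    new_node = max(genome['nodes']) + 1
    genome['nodes'].append(new_node)
    genome['conns'].append((src, new_node, 1.0, True))
    genome['conns'].append((new_node, dst, weight, True))
    return genome

# Split the single connection 0 -> 1 of a minimal genome.
g = {'nodes': [0, 1], 'conns': [(0, 1, 0.7, True)]}
add_node(g, 0)
```

Starting from such minimal genomes and adding structure incrementally is what the summary means by optimizing and complexifying solutions simultaneously.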

An evolutionary algorithm that constructs recurrent neural networks

It is argued that genetic algorithms are inappropriate for network acquisition and an evolutionary program is described, called GNARL, that simultaneously acquires both the structure and weights for recurrent networks.

Competitive Coevolution through Evolutionary Complexification

It is argued that complexification, i.e. the incremental elaboration of solutions through adding new structure, achieves both these goals and is demonstrated through the NeuroEvolution of Augmenting Topologies (NEAT) method, which evolves increasingly complex neural network architectures.

A comparison between cellular encoding and direct encoding for genetic neural networks

This paper compares the efficiency of two encoding schemes for Artificial Neural Networks optimized by evolutionary algorithms and solves a more difficult problem: balancing two poles when no information about the velocity is provided as input.

A Taxonomy for Artificial Embryogeny

This taxonomy provides a unified context for long-term research in AE, so that implementation decisions can be compared and contrasted along known dimensions in the design space of embryogenic systems, and allows predicting how the settings of various AE parameters affect the capacity to efficiently evolve complex phenotypes.

Compositional pattern producing networks: A novel abstraction of development

Results produced with CPPNs through interactive evolution of two-dimensional images show that such an encoding can nevertheless produce structural motifs often attributed to more conventional developmental abstractions, suggesting that local interaction may not be essential to the desirable properties of natural encoding in the way that is usually assumed.
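A tiny hand-built CPPN makes the claim concrete: composing a symmetric function (a Gaussian) with a periodic one (a sine) over coordinates yields a pattern with bilateral symmetry and repetition, the developmental motifs the summary mentions, without simulating any local interaction. The specific function composition and grid size are illustrative choices.

```python
import math

def cppn(x, y):
    # Distance from the origin gives radial structure; a Gaussian of x
    # gives left-right symmetry; the sine makes the pattern repeat.
    d = math.sqrt(x * x + y * y)
    return math.sin(5 * d) * math.exp(-(x * x))

def render(size=9):
    """Sample the CPPN on a size x size grid over [-1, 1]^2."""
    half = size // 2
    return [[cppn((c - half) / half, (r - half) / half)
             for c in range(size)] for r in range(size)]

img = render()
```

Because both component functions are even in x and y, the rendered pattern is mirror-symmetric about both axes, which is the kind of regularity CPPN-encoded phenotypes exhibit.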

Solving Non-Markovian Control Tasks with Neuro-Evolution

This article demonstrates a neuroevolution system, Enforced Sub-populations (ESP), that is used to evolve a controller for the standard double pole task and a much harder, non-Markovian version, and introduces an incremental method that evolves on a sequence of tasks, and utilizes a local search technique (Delta-Coding) to sustain diversity.

Evolving a neurocontroller through a process of embryogeny

The "New AI" hypothesizes that intelligent behaviour must be understood within the framework of the agent's physical interactions with the environment (subjective sensations and bodily interactions), and proposes a bottom-up exploration that starts from the lowest adaptive mechanisms and builds toward the topmost cognitive abilities.

Evolving better representations through selective genome growth

  • L. Altenberg
  • Biology
    Proceedings of the First IEEE Conference on Evolutionary Computation. IEEE World Congress on Computational Intelligence
  • 1994
A new method is described in which the degrees of freedom of the representation are increased incrementally, creating genotype-phenotype maps that are exquisitely tuned to the specifics of the epistatic fitness function, creating adaptive landscapes that are much smoother than generic NK landscapes with the same genotypes.

Exploiting Regularity Without Development

A variant of the NeuroEvolution of Augmenting Topologies (NEAT) method, called CPPN-NEAT, evolves increasingly complex CPPNs, producing patterns with strikingly natural characteristics.
...