Indirectly Encoding Neural Plasticity as a Pattern of Local Rules

Sebastian Risi and Kenneth O. Stanley. Simulation of Adaptive Behavior.
Biological brains can adapt and learn from past experience. Adaptive HyperNEAT is introduced to allow not only patterns of weights across the connectivity of an ANN to be generated as a function of its geometry, but also patterns of arbitrary learning rules. Several such adaptive models with different levels of generality are explored and compared. The long-term promise of the new approach is to evolve large-scale adaptive ANNs, a major goal for neuroevolution.
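A minimal sketch of the idea behind adaptive HyperNEAT as summarized above: a single geometry-aware function is queried once per connection and emits both a weight and a local learning rate, so plasticity itself forms a spatial pattern. The `cppn` function here is a hypothetical fixed stand-in for an evolved CPPN; the particular formulas are illustrative assumptions, not the paper's actual networks.

```python
import math

def cppn(x1, y1, x2, y2):
    """Toy stand-in for an evolved CPPN: maps the geometry of a
    connection (source and target substrate coordinates) to both
    a weight and a per-connection Hebbian learning rate."""
    d = math.hypot(x2 - x1, y2 - y1)      # connection length
    weight = math.sin(3.0 * d)            # smooth weight pattern over geometry
    learn_rate = 0.1 * math.exp(-d * d)   # plasticity also patterned by geometry
    return weight, learn_rate

# Query every connection of a small 3x3 substrate.
coords = [(x / 2.0, y / 2.0) for x in range(3) for y in range(3)]
weights, rates = {}, {}
for src in coords:
    for dst in coords:
        w, lr = cppn(*src, *dst)
        weights[(src, dst)] = w
        rates[(src, dst)] = lr
```

Because the same function is reused for all 81 connections, the encoding stays compact no matter how densely the substrate is sampled.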

Towards Evolving More Brain-Like Artificial Neural Networks

The combined approach, adaptive ES-HyperNEAT, unifies for the first time in neuroevolution the abilities to indirectly encode connectivity through geometry, generate patterns of heterogeneous plasticity, and simultaneously encode the density and placement of nodes in space.

Evolution of Biologically Inspired Learning in Artificial Neural Networks

A unified approach to evolving plasticity and neural geometry

The most interesting aspect of this investigation is that the emergent neural structures are beginning to acquire more natural properties, which means that neuroevolution can begin to pose new problems and answer deeper questions about how brains evolved that are ultimately relevant to the field of AI as a whole.

Evolving programs to build artificial neural networks

This chapter evolves a pair of programs that build the network: one runs inside neurons and allows them to move, change, die, or replicate; the other executes inside dendrites and allows them to change length and weight, be removed, or replicate.

Evolving Plasticity for Autonomous Learning under Changing Environmental Conditions

This work uses a discrete representation to encode the learning rules in a finite search space, and employs genetic algorithms to optimize these rules to allow learning on two separate tasks (a foraging and a prey-predator scenario) in online lifetime learning settings.
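A rough illustration of the search setup described above, under loose assumptions: each synapse is assigned one rule from a small discrete set, and a genetic algorithm evolves the assignment. The rule names and the stand-in fitness function are hypothetical, chosen only to make the discrete search space concrete.

```python
import random

random.seed(42)

# Discrete search space: each gene is an index into a small set of
# candidate local learning rules (names are illustrative).
RULES = ["hebb", "anti_hebb", "decay", "none"]

def fitness(genome):
    # Hypothetical stand-in for task performance: reward assigning
    # "hebb" to even-indexed synapses and "decay" to odd ones.
    return sum(1 for i, g in enumerate(genome)
               if RULES[g] == ("hebb" if i % 2 == 0 else "decay"))

def mutate(genome, p=0.2):
    return [random.randrange(len(RULES)) if random.random() < p else g
            for g in genome]

# Simple truncation-selection GA over the discrete rule assignments.
pop = [[random.randrange(len(RULES)) for _ in range(8)] for _ in range(20)]
for _ in range(50):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = max(pop, key=fitness)
```

The finite rule set keeps the search space small enough for a plain GA, which is the point of the discrete representation.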

Evolving neural networks

Designing neural networks through neuroevolution

This Review looks at several key aspects of modern neuroevolution, including large-scale computing, the benefits of novelty and diversity, the power of indirect encoding, and the field’s contributions to meta-learning and architecture search.

Evolving Plasticity for Autonomous Learning under Changing Environmental Conditions

This work employs genetic algorithms to evolve local learning rules from a Hebbian perspective to produce autonomous learning under changing environmental conditions, and shows that the evolved plasticity rules are highly efficient at adapting ANNs to the task in changing environments.

Evolving neuronal plasticity rules using cartesian genetic programming

This work employs Cartesian genetic programming to evolve biologically plausible, human-interpretable plasticity rules that allow a given network to successfully solve tasks from specific task families, and demonstrates that the evolved rules perform competitively with known hand-designed solutions.

Meta-Learning through Hebbian Plasticity in Random Networks

This work proposes a search method that, instead of optimizing the weight parameters of neural networks directly, only searches for synapse-specific Hebbian learning rules that allow the network to continuously self-organize its weights during the lifetime of the agent.
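A minimal sketch of the synapse-specific Hebbian scheme summarized above, assuming the common generalized (ABCD-style) parameterization: evolution searches the per-synapse coefficients rather than the weights, and the weights then self-organize from activity. All names and values here are illustrative.

```python
import random

random.seed(0)

def hebbian_step(w, pre, post, eta, A, B, C, D):
    """One generalized Hebbian update for a single synapse:
    dw = eta * (A*pre*post + B*pre + C*post + D)."""
    return w + eta * (A * pre * post + B * pre + C * post + D)

# Random initial weights; evolution would search the per-synapse
# coefficients (eta, A, B, C, D), not the weights themselves.
n = 4
w = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
coeffs = [[(0.05, 1.0, 0.0, 0.0, 0.0) for _ in range(n)] for _ in range(n)]

# One lifetime step: apply each synapse's own rule to its weight.
pre = [0.5, -0.2, 0.9, 0.1]
post = [0.3, 0.8, -0.5, 0.0]
for i in range(n):
    for j in range(n):
        eta, A, B, C, D = coeffs[i][j]
        w[i][j] = hebbian_step(w[i][j], pre[i], post[j], eta, A, B, C, D)
```

Since every synapse carries its own coefficient tuple, the rule pattern, not the weight pattern, is the genotype.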


A Hypercube-Based Indirect Encoding for Evolving Large-Scale Neural Networks

The main conclusion is that the ability to explore the space of regular connectivity patterns opens up a new class of complex high-dimensional tasks to neuroevolution.

Autonomous Evolution of Topographic Regularities in Artificial Neural Networks

This letter shows that when geometry is introduced to evolved ANNs through the hypercube-based neuroevolution of augmenting topologies algorithm, they begin to acquire characteristics that indeed are reminiscent of biological brains.

Neuroevolution: from architectures to learning

This paper gives an overview of the most prominent methods for evolving ANNs with a special focus on recent advances in the synthesis of learning architectures.

Evolving Neural Networks through Augmenting Topologies

A method is presented, NeuroEvolution of Augmenting Topologies (NEAT), which outperforms the best fixed-topology method on a challenging benchmark reinforcement learning task and shows how it is possible for evolution to both optimize and complexify solutions simultaneously.
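A small sketch of NEAT's complexification mechanism as described above, under simplifying assumptions: genomes are lists of connection genes stamped with innovation numbers, and the add-node mutation splits an existing connection. Field names and the dict representation are hypothetical simplifications of the method.

```python
import random

random.seed(1)

innovation_counter = 0
def next_innovation():
    global innovation_counter
    innovation_counter += 1
    return innovation_counter

# A genome is a list of connection genes; innovation numbers let
# NEAT line up corresponding genes from different genomes in crossover.
def make_gene(src, dst, weight):
    return {"src": src, "dst": dst, "weight": weight,
            "enabled": True, "innov": next_innovation()}

def add_node_mutation(genome, next_node_id):
    """NEAT's add-node mutation: disable a randomly chosen enabled
    connection and insert a new node bridged by two new connections."""
    gene = random.choice([g for g in genome if g["enabled"]])
    gene["enabled"] = False
    new_id = next_node_id
    genome.append(make_gene(gene["src"], new_id, 1.0))            # into new node
    genome.append(make_gene(new_id, gene["dst"], gene["weight"]))  # out of new node
    return new_id + 1

# Two inputs (0, 1) feeding one output (2); then complexify once.
genome = [make_gene(0, 2, 0.7), make_gene(1, 2, -0.4)]
next_id = add_node_mutation(genome, next_node_id=3)
```

Starting minimal and only growing structure this way is what lets NEAT optimize and complexify simultaneously.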

Neural Plasticity and Minimal Topologies for Reward-Based Learning

  • A. Soltoggio
  • 2008 Eighth International Conference on Hybrid Intelligent Systems, 2008
The results here indicate that reward-based learning in complex dynamic scenarios can be achieved with basic plasticity rules and minimal topologies.

How novelty search escapes the deceptive trap of learning to learn

A way to escape the deceptive trap of static policies, based on the novelty search algorithm, is proposed; it opens up a new avenue in the evolution of adaptive systems because it can exploit the behavioral difference between learning and non-learning individuals.

Evolutionary Advantages of Neuromodulated Plasticity in Dynamic, Reward-based Scenarios

It is concluded that modulatory neurons evolve autonomously in the proposed learning tasks, allowing for increased learning and memory capabilities.

Competitive Coevolution through Evolutionary Complexification

It is argued that complexification, i.e. the incremental elaboration of solutions through adding new structure, achieves both these goals and is demonstrated through the NeuroEvolution of Augmenting Topologies (NEAT) method, which evolves increasingly complex neural network architectures.

A Taxonomy for Artificial Embryogeny

This taxonomy provides a unified context for long-term research in AE, so that implementation decisions can be compared and contrasted along known dimensions in the design space of embryogenic systems, and allows predicting how the settings of various AE parameters affect the capacity to efficiently evolve complex phenotypes.