Born to Learn: the Inspiration, Progress, and Future of Evolved Plastic Artificial Neural Networks

@article{Soltoggio2017BornTL,
  title={Born to Learn: the Inspiration, Progress, and Future of Evolved Plastic Artificial Neural Networks},
  author={Andrea Soltoggio and Kenneth O. Stanley and Sebastian Risi},
  journal={Neural networks : the official journal of the International Neural Network Society},
  year={2018},
  volume={108},
  pages={48-67}
}

Citations

Evolution of Biologically Inspired Learning in Artificial Neural Networks

Evolving Plasticity for Autonomous Learning under Changing Environmental Conditions

This work uses a discrete representation to encode the learning rules in a finite search space, and employs genetic algorithms to optimize these rules to allow learning on two separate tasks (a foraging and a prey-predator scenario) in online lifetime learning settings.
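
For orientation, the general recipe behind this kind of work can be sketched in a few lines: a learning rule is encoded as a small genome, each genome is scored by how well a network learns during its lifetime, and a genetic algorithm searches over rules. The sketch below is only illustrative; it uses continuous coefficients of a generalized Hebbian update and a toy stand-in task, whereas the cited work uses a discrete rule encoding and foraging/prey-predator environments.

import numpy as np

rng = np.random.default_rng(0)

def lifetime_fitness(rule, steps=200):
    # rule = (A, B, C, D, eta): coefficients of a generalized Hebbian update
    # dw = eta * (A*pre*post + B*pre + C*post + D). The task is a toy stand-in.
    A, B, C, D, eta = rule
    w = rng.normal(0, 0.1, 2)             # tiny network: 2 inputs -> 1 output
    target = np.array([0.8, -0.3])        # behavior the agent should discover
    score = 0.0
    for _ in range(steps):
        x = rng.uniform(-1, 1, 2)                        # sensory input
        y = np.tanh(w @ x)                               # motor output
        score -= abs(np.tanh(target @ x) - y)            # lifetime performance
        w += eta * (A * x * y + B * x + C * y + D)       # local plastic update
    return score

# Genetic algorithm over rule coefficients: evaluate, select, mutate.
pop = rng.uniform(-1, 1, (20, 5))
for gen in range(30):
    fit = np.array([lifetime_fitness(r) for r in pop])
    parents = pop[np.argsort(fit)[-10:]]                 # truncation selection
    pop = np.vstack([parents,
                     parents[rng.integers(0, 10, 10)] + rng.normal(0, 0.1, (10, 5))])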

Differentiable plasticity: training plastic neural networks with backpropagation

It is shown that plasticity, just like connection weights, can be optimized by gradient descent in large (millions of parameters) recurrent networks with Hebbian plastic connections, and it is concluded that differentiable plasticity may provide a powerful novel approach to the learning-to-learn problem.
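
For reference, the forward dynamics of such a plastic connection can be sketched as below: each connection combines a fixed weight, a plasticity coefficient, and a Hebbian trace that accumulates pre/post co-activation during an episode. In the actual method the fixed weights, coefficients, and trace learning rate are trained by backpropagation through the episode, which this sketch omits; sizes and names are illustrative.

import numpy as np

rng = np.random.default_rng(0)
n = 8                                   # number of units (illustrative)

w = rng.normal(0, 0.1, (n, n))          # fixed weight component (would be trained)
alpha = rng.normal(0, 0.1, (n, n))      # per-connection plasticity coefficient (would be trained)
hebb = np.zeros((n, n))                 # Hebbian trace, reset at the start of each episode
eta = 0.1                               # trace learning rate (would be trained)

x = rng.normal(0, 1, n)
for t in range(20):
    # effective weight = fixed part + plastic part
    y = np.tanh((w + alpha * hebb) @ x)
    # trace update: decaying running average of post/pre co-activation
    hebb = (1 - eta) * hebb + eta * np.outer(y, x)
    x = y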

Evolving Self-taught Neural Networks: The Baldwin Effect and the Emergence of Intelligence

N. Le · Computer Science · ArXiv · 2019

Experimental results show that the interaction between evolution and the ability to teach oneself in self-taught neural networks outperforms evolution and self-teaching alone.

Reinforcement Learning for Central Pattern Generation in Dynamical Recurrent Neural Networks

This work extends one of the best-studied and most commonly used dynamical recurrent neural networks to incorporate the reinforcement learning mechanism, and demonstrates that this extended dynamical system (model and learning mechanism) can autonomously learn to perform a central pattern generation task.

Designing neural networks through neuroevolution

This Review looks at several key aspects of modern neuroevolution, including large-scale computing, the benefits of novelty and diversity, the power of indirect encoding, and the field’s contributions to meta-learning and architecture search.

Do What Nature Did To Us: Evolving Plastic Recurrent Neural Networks For Task Generalization

The experimental results demonstrate the unique advantage of EPRNN over state-of-the-art approaches based on plasticity and recursion, while yielding comparably good performance against deep-learning-based approaches on the tasks.

Evolving Plasticity for Autonomous Learning under Changing Environmental Conditions

This work employs genetic algorithms to evolve local learning rules, from a Hebbian perspective, to produce autonomous learning under changing environmental conditions, and shows that the evolved plasticity rules are highly efficient at adapting the ANNs to the task under changing environments.

Meta-Learning through Hebbian Plasticity in Random Networks

This work proposes a search method that, instead of optimizing the weight parameters of neural networks directly, only searches for synapse-specific Hebbian learning rules that allow the network to continuously self-organize its weights during the lifetime of the agent.
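
The core mechanism can be pictured with the common per-synapse generalized Hebbian parameterization, sketched below: the rule coefficients (not the weights) are the quantities the evolutionary search would optimize, and the weights start from random values every episode and are shaped only by local updates during the agent's lifetime. Everything here, including sizes and coefficient ranges, is an illustrative assumption rather than the paper's exact setup.

import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 4, 3

# Per-synapse rule coefficients (A, B, C, D, eta): in the cited approach these
# would be found by evolutionary search; random values stand in here.
A, B, C, D, eta = (rng.normal(0, 0.5, (n_out, n_in)) for _ in range(5))

# Weights are re-initialized randomly each episode and self-organize through
# the local Hebbian updates alone.
w = rng.normal(0, 0.1, (n_out, n_in))

for t in range(100):
    x = rng.uniform(-1, 1, n_in)        # observation
    y = np.tanh(w @ x)                  # activation / action
    pre, post = np.tile(x, (n_out, 1)), y[:, None]
    w += eta * (A * pre * post + B * pre + C * post + D)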
...

References

SHOWING 1-10 OF 353 REFERENCES

On the Relationships between Generative Encodings, Regularity, and Learning Abilities when Evolving Plastic Artificial Neural Networks

The results suggest that using a developmental encoding could improve the learning abilities of evolved, plastic neural networks, and reveal the consequence of the bias of developmental encodings towards regular structures.

Evolving plastic neural networks with novelty search

This article analyzes the inherent deceptiveness of a variety of dynamic, reward-based learning tasks, and proposes a way to escape the deceptive trap of static policies based on the novelty search algorithm, which avoids deception entirely.
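
As a reminder of the mechanism, novelty search scores an individual not by task reward but by how different its behavior is from behaviors seen before, typically the mean distance to its k nearest neighbors in the current population plus an archive. A minimal sketch, assuming behaviors are summarized as fixed-length vectors (e.g. a robot's final position):

import numpy as np

def novelty(behavior, population_behaviors, archive, k=15):
    # Mean distance to the k nearest behaviors seen so far (higher = more novel).
    others = np.vstack([population_behaviors, archive]) if len(archive) else population_behaviors
    dists = np.linalg.norm(others - behavior, axis=1)
    return np.sort(dists)[:k].mean()

# Usage: rank a population by novelty and archive sufficiently novel behaviors.
rng = np.random.default_rng(0)
pop_behaviors = rng.uniform(0, 1, (50, 2))     # e.g. final (x, y) positions
archive = np.empty((0, 2))
scores = [novelty(b, pop_behaviors, archive) for b in pop_behaviors]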

Evolution of Cartesian Genetic Programs for Development of Learning Neural Architecture

The goal of this paper is to use evolutionary approaches to find suitable computational functions that are analogous to natural sub-components of biological neurons and demonstrate that intelligent behavior can be produced as a result of this additional biological plausibility.

A unified approach to evolving plasticity and neural geometry

The most interesting aspect of this investigation is that the emergent neural structures are beginning to acquire more natural properties, which means that neuroevolution can begin to pose new problems and answer deeper questions about how brains evolved, questions that are ultimately relevant to the field of AI as a whole.

Differentiable plasticity: training plastic neural networks with backpropagation

It is shown that plasticity, just like connection weights, can be optimized by gradient descent in large (millions of parameters) recurrent networks with Hebbian plastic connections, and it is concluded that differentiable plasticity may provide a powerful novel approach to the learning-to-learn problem.

Indirectly Encoding Neural Plasticity as a Pattern of Local Rules

This paper aims to show that learning rules can be effectively encoded indirectly by extending the Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) method to evolve large-scale adaptive ANNs, which is a major goal for neuroevolution.
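
The indirect-encoding idea can be pictured as follows: rather than storing one weight and one learning rule per connection, a single evolved function of the connected neurons' spatial coordinates outputs that connection's initial weight and rule parameters, so regular patterns of plasticity emerge across a large substrate. In the sketch below a fixed stand-in function plays the role that an evolved CPPN plays in HyperNEAT; the function body and coordinate layout are purely illustrative.

import numpy as np

def pattern(x1, y1, x2, y2):
    # Stand-in for an evolved CPPN: maps source and target neuron coordinates
    # to an initial weight and Hebbian-rule parameters for that connection.
    w0 = 0.5 * np.sin(x1 * x2)
    eta = 0.05 * (1 + np.cos(y1 - y2))
    a = np.tanh(x1 - x2 + y1 - y2)
    return w0, eta, a

# Query the pattern once per connection in a small 2D substrate of neurons.
coords = [(x, y) for x in np.linspace(-1, 1, 3) for y in np.linspace(-1, 1, 3)]
rules = {(i, j): pattern(*coords[i], *coords[j])
         for i in range(len(coords)) for j in range(len(coords)) if i != j}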

Learning to Adapt to Changing Environments in Evolving Neural Networks

A genetic algorithm is used to simulate the evolution of a population of neural networks, each controlling the behavior of a small mobile robot that must efficiently explore an environment surrounded by walls; the evolved networks are found to incorporate a genetically inherited predisposition to learn.

Bio-Inspired Artificial Intelligence: Theories, Methods, and Technologies

This book offers a comprehensive introduction to the emerging field of biologically inspired artificial intelligence and can be used as an upper-level text or as a reference for researchers.

The Evolution of Learning: An Experiment in Genetic Connectionism

Evolutionary and Computational Advantages of Neuromodulated Plasticity

The experiments demonstrate that modulatory neurons provide an evolutionary advantage that increases with the complexity of the control problem, and suggest an important role for neuromodulated plasticity in the evolution of networks that require temporal neural dynamics, adaptivity, and memory functions.
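
The mechanism behind these experiments can be summarized under the common formulation in which a modulatory signal multiplicatively gates an otherwise standard Hebbian weight change; the sketch below keeps only the correlation term of the rule, and the network size and constants are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)
n = 5
w = rng.normal(0, 0.1, (n, n))          # standard plastic connections
w_mod = rng.normal(0, 0.1, n)           # connections driving the modulatory neuron
eta = 0.05

x = rng.uniform(-1, 1, n)
for t in range(50):
    y = np.tanh(w @ x)                  # standard neurons
    m = np.tanh(w_mod @ x)              # modulatory activity
    # The Hebbian term is computed everywhere, but the modulatory signal m
    # scales it, and can silence or sign-flip the weight change.
    w += eta * m * np.outer(y, x)
    x = y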
...