Evolving Plasticity for Autonomous Learning under Changing Environmental Conditions

  Title: Evolving Plasticity for Autonomous Learning under Changing Environmental Conditions
  Authors: Anil Yaman, Decebal Constantin Mocanu, Giovanni Iacca, Matt Coler, G. Fletcher, Mykola Pechenizkiy
  Journal: Evolutionary Computation
Abstract
A fundamental aspect of learning in biological neural networks is the plasticity property, which allows them to modify their configurations during their lifetime. Hebbian learning is a biologically plausible mechanism for modeling the plasticity property in artificial neural networks (ANNs), based on the local interactions of neurons. However, the emergence of a coherent global learning behavior from local Hebbian plasticity rules is not well understood. The goal of this work is to…
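As context for the abstract above, the basic local Hebbian rule it refers to (strengthen a weight in proportion to pre- and postsynaptic co-activation, Δw = η · post · pre) can be sketched as follows. The learning rate, network shapes, and activation function here are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def hebbian_update(w, pre, post, eta=0.01):
    """Local Hebbian rule: strengthen w[i, j] in proportion to the
    co-activation of presynaptic unit j and postsynaptic unit i."""
    return w + eta * np.outer(post, pre)

# Toy usage: one forward pass followed by a plasticity step.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(3, 4))   # 4 inputs -> 3 outputs
pre = rng.normal(size=4)
post = np.tanh(w @ pre)
w = hebbian_update(w, pre, post)
```

Each synapse updates using only information available at its own endpoints, which is what makes the rule "local"; the open question the abstract raises is how coherent global learning can emerge from such purely local updates.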
Evolution of Biologically Inspired Learning in Artificial Neural Networks
Evolving Decomposed Plasticity Rules for Information-Bottlenecked Meta-Learning
The results show that rules satisfying the genomics bottleneck adapt to out-of-distribution tasks better than previous model-based and plasticity-based meta-learning with verbose meta-parameters.
Minimal neural network models for permutation invariant agents
This work constructs a conceptually simple model that exhibits flexibility most ANNs lack, demonstrates its properties on multiple control problems, and shows that it can cope with even very rapid permutations of input indices, as well as changes in input size.
Do What Nature Did To Us: Evolving Plastic Recurrent Neural Networks For Task Generalization
The experimental results demonstrate the unique advantage of EPRNN over state-of-the-art plasticity- and recursion-based methods, while yielding performance comparable to deep-learning-based approaches on the tasks.
An Overview of Neuromorphic Computing for Artificial Intelligence Enabled Hardware-Based Hopfield Neural Network
This paper presents a comprehensive review, focusing extensively on the Hopfield algorithm's model and its potential advancement in new research applications, to help developers better understand the model when building their own artificial intelligence projects.
The intersection of evolutionary computation and explainable AI
It is suggested that the EC community may play a major role in the achievement of XAI, and that several research opportunities and open research questions remain that may promote a safer and broader adoption of EC in real-world applications.
Indirectly Encoding Neural Plasticity as a Pattern of Local Rules
This paper aims to show that learning rules can be effectively encoded indirectly by extending the Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) method to evolve large-scale adaptive ANNs, a major goal for neuroevolution.
Learning with delayed synaptic plasticity
This work proposes the use of neuron activation traces (NATs) to provide additional data storage in each synapse to keep track of the activation of the neurons and employs genetic algorithms to evolve delayed synaptic plasticity (DSP) rules and perform synaptic updates based on NATs and delayed reinforcement signals.
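A minimal sketch of the idea described above, per-synapse neuron activation traces (NATs) accumulated between delayed reinforcement signals, might look as follows. The decaying trace, the reward-gated update, and all constants are illustrative assumptions, not the paper's exact evolved DSP rules.

```python
import numpy as np

class DelayedPlasticSynapses:
    """Keep a neuron activation trace (NAT) per synapse and apply a
    Hebbian-style update only when a (possibly delayed) reward arrives."""

    def __init__(self, n_in, n_out, eta=0.05, decay=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(scale=0.1, size=(n_out, n_in))
        self.trace = np.zeros_like(self.w)  # one NAT per synapse
        self.eta, self.decay = eta, decay

    def step(self, pre):
        post = np.tanh(self.w @ pre)
        # Accumulate pre/post co-activation into the decaying trace.
        self.trace = self.decay * self.trace + np.outer(post, pre)
        return post

    def reinforce(self, reward):
        # Delayed signal: consume the accumulated trace, then reset it.
        self.w += self.eta * reward * self.trace
        self.trace[:] = 0.0

# Toy usage: several activity steps, then one delayed reward.
syn = DelayedPlasticSynapses(n_in=4, n_out=3)
for _ in range(5):
    syn.step(np.ones(4))
syn.reinforce(reward=1.0)
```

The trace gives each synapse the extra storage the snippet mentions, so that credit from a reward arriving several steps later can still be assigned to the activations that preceded it.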
The evolution of a generalized neural learning rule
  • Jeff Orchard, L. Wang
  • Computer Science, Biology
    2016 International Joint Conference on Neural Networks (IJCNN)
  • 2016
This paper aims to evolve a more general learning rule and, since neural networks are so versatile, constructs the learning function itself out of a neural network.
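The idea in this entry, parameterizing the learning rule itself as a small neural network whose weights are evolved, can be sketched as below. The two-layer rule network and its per-synapse inputs (presynaptic activity, postsynaptic activity, current weight) are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def rule_network(params, pre, post, w):
    """A tiny MLP mapping (pre, post, w) at each synapse to a weight
    change; in the evolutionary setting, params would be evolved."""
    W1, b1, W2, b2 = params
    x = np.stack([pre, post, w], axis=-1)  # 3 features per synapse
    h = np.tanh(x @ W1 + b1)
    return (h @ W2 + b2).squeeze(-1)       # one delta-w per synapse

rng = np.random.default_rng(1)
params = (rng.normal(scale=0.1, size=(3, 8)), np.zeros(8),
          rng.normal(scale=0.1, size=(8, 1)), np.zeros(1))

# One plasticity step on a toy 4-input, 3-output layer.
w = rng.normal(scale=0.1, size=(3, 4))
pre = np.broadcast_to(rng.normal(size=4), (3, 4))
post = np.broadcast_to(np.tanh((w * pre).sum(1))[:, None], (3, 4))
w = w + 0.01 * rule_network(params, pre, post, w)
```

Because the rule is an arbitrary function of local quantities rather than a fixed Hebbian product, evolution can in principle discover updates that plain Hebbian rules cannot express.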
Phenotypic plasticity in evolving neural networks
It is shown how a model based on genetic algorithm and neural networks is able to evolve control systems for autonomous robots that can adapt to different types of environments.
On the Relationships between Generative Encodings, Regularity, and Learning Abilities when Evolving Plastic Artificial Neural Networks
The results suggest that using a developmental encoding could improve the learning abilities of evolved, plastic neural networks, and reveal the consequence of the bias of developmental encodings towards regular structures.
Emergence of complex computational structures from chaotic neural networks through reward-modulated Hebbian learning.
The results suggest that reward-modulated synaptic plasticity can not only optimize the network parameters for specific computational tasks, but also initiate a functional rewiring that re-programs the microcircuit, thereby generating diverse computational functions in different generic cortical microcircuits.
Evolutionary Advantages of Neuromodulated Plasticity in Dynamic, Reward-based Scenarios
It is concluded that modulatory neurons evolve autonomously in the proposed learning tasks, allowing for increased learning and memory capabilities.
Neuroevolution: from architectures to learning
This paper gives an overview of the most prominent methods for evolving ANNs with a special focus on recent advances in the synthesis of learning architectures.