Evolving and merging Hebbian learning rules: increasing generalization by decreasing the number of rules

@inproceedings{Pedersen2021EvolvingAM,
  title={Evolving and merging {H}ebbian learning rules: increasing generalization by decreasing the number of rules},
  author={Joachim Winther Pedersen and Sebastian Risi},
  booktitle={Proceedings of the Genetic and Evolutionary Computation Conference},
  year={2021}
}
Generalization to out-of-distribution (OOD) circumstances after training remains a challenge for artificial agents. To improve the robustness displayed by plastic Hebbian neural networks, we evolve a set of Hebbian learning rules, where multiple connections are assigned to a single rule. Inspired by the biological phenomenon of the genomic bottleneck, we show that by allowing multiple connections in the network to share the same local learning rule, it is possible to drastically reduce the… 
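The mechanism sketched in the abstract lends itself to a compact illustration. The following is a minimal, hypothetical NumPy sketch of rule sharing, assuming a generalized Hebbian (ABCD) rule parameterization and a fixed random `rule_of` assignment map; the paper's actual method evolves the rules (and merges them to shrink the rule count), so treat this as a sketch of the idea rather than the authors' implementation:

```python
import numpy as np

# Sketch: K shared Hebbian rules govern n_out * n_in connections (K << number
# of weights), mirroring the "genomic bottleneck" idea of rule sharing.
rng = np.random.default_rng(0)
n_in, n_out, K = 4, 3, 2

rules = rng.normal(size=(K, 5))                    # per rule: eta, A, B, C, D
rule_of = rng.integers(0, K, size=(n_out, n_in))   # rule index per connection
W = rng.normal(scale=0.1, size=(n_out, n_in))      # initial weights

def hebbian_step(W, pre, post):
    """dW[i,j] = eta*(A*pre[j]*post[i] + B*pre[j] + C*post[i] + D) per connection."""
    p = rules[rule_of]                             # (n_out, n_in, 5) coefficients
    eta, A, B, C, D = (p[..., k] for k in range(5))
    dW = eta * (A * np.outer(post, pre) + B * pre[None, :] + C * post[:, None] + D)
    return W + dW

x = rng.normal(size=n_in)
y = np.tanh(W @ x)          # forward pass
W = hebbian_step(W, x, y)   # local plastic update after each step
```

In this sketch, merging two rules amounts to remapping their entries in `rule_of` to a single index, reducing the number of distinct rules without changing the network topology.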


Evolving Decomposed Plasticity Rules for Information-Bottlenecked Meta-Learning
TLDR
The results show that rules satisfying the genomic bottleneck adapt to out-of-distribution tasks better than previous model-based and plasticity-based meta-learning with verbose meta-parameters.
Minimal neural network models for permutation invariant agents
TLDR
This work constructs a conceptually simple model that exhibits flexibility most ANNs lack, demonstrates its properties on multiple control problems, and shows that it can cope with even very rapid permutations of input indices, as well as changes in input size.
HyperNCA: Growing Developmental Networks with Neural Cellular Automata
TLDR
It is shown that the HyperNCA method can grow neural networks capable of solving common reinforcement learning tasks, and that the same approach can be used to build developmental metamorphosis networks capable of transforming their weights to solve variations of the initial RL task.
Do What Nature Did To Us: Evolving Plastic Recurrent Neural Networks For Task Generalization
TLDR
The experimental results demonstrate the unique advantage of EPRNN over state-of-the-art approaches based on plasticity and recursion, while yielding performance comparable to deep-learning-based approaches on these tasks.
Testing the Genomic Bottleneck Hypothesis in Hebbian Meta-Learning
TLDR
It is hypothesized that limiting the number of Hebbian learning rules through a "genomic bottleneck" can act as a regularizer leading to better generalization across changes to the environment.

References

Showing 1-10 of 69 references
Thinking in circuits: toward neurobiological explanation in cognitive neuroscience
TLDR
It is shown that the DNA/TC (distributed neuronal assembly/thought circuit) theory of cognition offers an integrated explanatory perspective on brain mechanisms of perception, action, language, attention, memory, decision-making and conceptual thought, and it is suggested that the ability to build DNAs/TCs spread out over different cortical areas is the key mechanism for a range of specifically human sensorimotor, linguistic and conceptual capacities.
Meta-Learning through Hebbian Plasticity in Random Networks
TLDR
This work proposes a search method that, instead of optimizing the weight parameters of neural networks directly, only searches for synapse-specific Hebbian learning rules that allow the network to continuously self-organize its weights during the lifetime of the agent.
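Work in this line typically parameterizes each synapse's plasticity as a generalized Hebbian ("ABCD") rule; a common form (notation assumed here) is:

```latex
% Generalized Hebbian (ABCD) update for the connection from neuron i to neuron j;
% o_i, o_j are pre- and postsynaptic activations, and (eta, A, B, C, D) are the
% evolved per-synapse coefficients.
\[
  \Delta w_{ij} = \eta_{ij} \bigl( A_{ij}\, o_i o_j + B_{ij}\, o_i + C_{ij}\, o_j + D_{ij} \bigr)
\]
```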
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning.
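For reference, MAML's bi-level update can be stated compactly, with inner step size alpha and outer (meta) step size beta:

```latex
% MAML: adapt to each task T_i with a gradient step on theta, then update the
% shared initialization theta using the post-adaptation losses.
\[
  \theta'_i = \theta - \alpha \nabla_\theta \mathcal{L}_{T_i}(f_\theta),
  \qquad
  \theta \leftarrow \theta - \beta \nabla_\theta \sum_i \mathcal{L}_{T_i}\bigl(f_{\theta'_i}\bigr)
\]
```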
Tonic: A Deep Reinforcement Learning Library for Fast Prototyping and Benchmarking
TLDR
Tonic is introduced, a Python library allowing researchers to quickly implement new ideas and measure their importance by providing a collection of configurable modules such as exploration strategies, replays, neural networks, and updaters.
Evolving hypernetworks for game-playing agents
TLDR
For some Atari games, a hypernetwork with over 14 times fewer parameters can compete with or even outperform directly encoded policy networks, and can outperform complicated deep reinforcement learning setups such as Rainbow.
Evolving inborn knowledge for fast adaptation in dynamic POMDP problems
TLDR
The analysis of the evolved networks reveals the ability of the proposed algorithm to acquire inborn knowledge in a variety of aspects, such as detecting cues that reveal implicit rewards and evolving location neurons that help with navigation.
Scaling MAP-Elites to deep neuroevolution
TLDR
A new hybrid algorithm called MAP-Elites with Evolution Strategies (ME-ES) is designed and evaluated for post-damage recovery in a difficult high-dimensional control task where traditional ME fails, and it is shown that ME-ES performs efficient exploration, on par with state-of-the-art exploration algorithms in high-dimensional control tasks with strongly deceptive rewards.
Network of Evolvable Neural Units: Evolving to Learn at a Synaptic Level
TLDR
An Evolvable Neural Unit (ENU) is proposed that can approximate the function of each individual neuron and synapse, and it is demonstrated that this type of unit can be evolved to mimic Integrate-And-Fire neurons and synaptic Spike-Timing-Dependent Plasticity.
Generalization in Reinforcement Learning with Selective Noise Injection and Information Bottleneck
TLDR
This work proposes Selective Noise Injection (SNI), which maintains the regularizing effect of the injected noise while mitigating its adverse effects on gradient quality, and demonstrates that the Information Bottleneck is a particularly well-suited regularization technique for RL, as it is effective in the low-data regime encountered early in training RL agents.