Corpus ID: 220363960

Meta-Learning through Hebbian Plasticity in Random Networks

@article{Najarro2020MetaLearningTH,
  title={Meta-Learning through Hebbian Plasticity in Random Networks},
  author={Elias Najarro and Sebastian Risi},
  journal={ArXiv},
  year={2020},
  volume={abs/2007.02686}
}
Lifelong learning and adaptability are two defining aspects of biological agents. Modern reinforcement learning (RL) approaches have shown significant progress in solving complex tasks; however, once training is concluded, the solutions found are typically static and incapable of adapting to new information or perturbations. While it is still not completely understood how biological brains learn and adapt so efficiently from experience, it is believed that synaptic plasticity plays a prominent…
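The mechanism the abstract points to can be made concrete with a short sketch. In the paper's setup, the network weights are random at the start of each episode and are changed during the agent's lifetime by a generalized Hebbian rule whose per-connection coefficients are the parameters being evolved; an evolution strategy optimizes those coefficients across episodes while the weights themselves are re-randomized each episode. The NumPy snippet below is a minimal illustration of one such plasticity step; the function name and array shapes are choices made here for clarity, not code from the paper.

```python
import numpy as np

def hebbian_update(W, pre, post, eta, A, B, C, D):
    """One generalized-Hebbian plasticity step for a single layer.

    W:    (n_post, n_pre) weight matrix, randomly initialised at birth
    pre:  (n_pre,)  presynaptic activations
    post: (n_post,) postsynaptic activations
    eta, A, B, C, D: (n_post, n_pre) evolved per-connection coefficients
    """
    # Correlation term plus separate pre-, post-, and constant terms,
    # each scaled by its own evolved coefficient.
    dW = eta * (A * np.outer(post, pre)
                + B * pre[None, :]
                + C * post[:, None]
                + D)
    return W + dW
```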

Citations

Evolving and merging Hebbian learning rules: increasing generalization by decreasing the number of rules
TLDR
It is shown that by allowing multiple connections in the network to share the same local learning rule, it is possible to drastically reduce the number of trainable parameters, while obtaining a more robust agent.
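As a rough illustration of the rule-sharing idea (the layer sizes, rule budget, and all names below are hypothetical): instead of evolving one coefficient tuple per connection, each synapse indexes into a small shared table of rules, so the evolved parameter count drops from five per connection to five per rule.

```python
import numpy as np

n_pre, n_post = 32, 16     # hypothetical layer sizes (512 connections)
n_rules = 8                # shared-rule budget, far below the connection count

rules = np.random.randn(n_rules, 5)                            # (eta, A, B, C, D) per rule
assignment = np.random.randint(n_rules, size=(n_post, n_pre))  # rule index per synapse

# Expand the shared table back into per-connection coefficient maps;
# only the small table (plus the assignment) needs to be evolved.
eta, A, B, C, D = np.moveaxis(rules[assignment], -1, 0)
```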
Stable Lifelong Learning: Spiking neurons as a solution to instability in plastic neural networks
TLDR
This work demonstrates that utilizing plasticity together with ANNs leads to instability beyond the pre-specified lifespan used during training, and presents a solution to this instability through the use of spiking neurons.
Testing the Genomic Bottleneck Hypothesis in Hebbian Meta-Learning
TLDR
It is hypothesized that limiting the number of Hebbian learning rules through a "genomic bottleneck" can act as a regularizer leading to better generalization across changes to the environment.
Do What Nature Did To Us: Evolving Plastic Recurrent Neural Networks For Task Generalization
TLDR
The experimental results demonstrate the unique advantage of EPRNN over state-of-the-art plasticity- and recursion-based methods, while yielding performance comparable to deep-learning-based approaches on the tasks.
Evolving Decomposed Plasticity Rules for Information-Bottlenecked Meta-Learning
TLDR
The results show that rules satisfying the genomic bottleneck adapt to out-of-distribution tasks better than previous model-based and plasticity-based meta-learning with verbose meta-parameters.
Beyond accuracy: generalization properties of bio-plausible temporal credit assignment rules
TLDR
This analysis is the first to identify the reason for the generalization gap between artificial and biologically plausible learning rules, and can help guide future investigations into how the brain learns solutions that generalize.
Training a spiking neuronal network model of visual-motor cortex to play a virtual racket-ball game using reinforcement learning
TLDR
This model shows that learning mechanisms involving different neural circuits produce similar performance in sensory-motor tasks, and suggests that these mechanisms may complement one another to accelerate learning in animals.
HyperNCA: Growing Developmental Networks with Neural Cellular Automata
TLDR
It is shown that the HyperNCA method can grow neural networks capable of solving common reinforcement learning tasks, and that the same approach can be used to build developmental metamorphosis networks capable of transforming their weights to solve variations of the initial RL task.
A Survey on Deep Reinforcement Learning-based Approaches for Adaptation and Generalization
TLDR
A survey of recent developments in DRL-based approaches for adaptation and generalization is presented, along with future research directions through which the adaptability and generalizability of DRL algorithms can be enhanced, potentially making them applicable to a broad range of real-world problems.
A Modern Self-Referential Weight Matrix That Learns to Modify Itself
TLDR
A scalable self-referential weight matrix (SRWM) that learns to use outer products and the delta update rule to modify itself is proposed and evaluated in supervised few-shot learning and in multi-task reinforcement learning with procedurally generated game environments.
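The outer-product delta update this TLDR refers to has a standard generic form, sketched below in NumPy; this is the textbook fast-weight delta rule, not the paper's exact SRWM equations, and the names are assumptions.

```python
import numpy as np

def delta_rule_update(W, k, v, beta):
    """Generic fast-weight delta update (sketch).

    W:    (d_out, d_in) weight matrix being modified
    k:    (d_in,)  key vector
    v:    (d_out,) new target value for that key
    beta: scalar learning rate
    """
    v_old = W @ k                             # what W currently retrieves for k
    return W + beta * np.outer(v - v_old, k)  # overwrite it with v via an outer product
```

In a self-referential setting, the same matrix that is being updated also produces k, v, and beta from its input, which is what lets it modify itself.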
...

References

SHOWING 1-10 OF 71 REFERENCES
A critique of pure learning and what artificial neural networks can learn from animal brains
A. Zador, Nature Communications, 2019
TLDR
It is suggested that for AI to learn from animal brains, it is important to consider that animal behaviour results from brain connectivity specified in the genome through evolution, rather than from unique learning algorithms.
Evolving inborn knowledge for fast adaptation in dynamic POMDP problems
TLDR
The analysis of the evolved networks reveals that the proposed algorithm acquires inborn knowledge of several kinds, such as detecting cues that reveal implicit rewards and evolving location neurons that help with navigation.
Neuroevolution of self-interpretable agents
TLDR
It is argued that self-attention has properties similar to indirect encoding, in the sense that large implicit weight matrices are generated from a small number of key-query parameters, enabling the agent to solve challenging vision-based tasks with at least 1000x fewer parameters than existing methods.
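The indirect-encoding point can be illustrated generically: an n x n attention matrix over n input patches is induced by two small projection matrices, so a large implicit weight matrix arises from few trainable parameters. The sketch below uses assumed shapes and names, not the paper's code.

```python
import numpy as np

def implicit_attention_matrix(X, W_k, W_q):
    """n x n patch-importance matrix from two small projections (sketch).

    X: (n, d) patch features; W_k, W_q: (d, d_small) trainable projections
    """
    K, Q = X @ W_k, X @ W_q                   # (n, d_small) each
    logits = K @ Q.T / np.sqrt(K.shape[1])    # (n, n) implicit matrix
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)   # row-wise softmax
```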
Dota 2 with Large Scale Deep Reinforcement Learning
TLDR
By defeating the Dota 2 world champion (Team OG), OpenAI Five demonstrates that self-play reinforcement learning can achieve superhuman performance on a difficult task.
Grandmaster level in StarCraft II using multi-agent reinforcement learning
TLDR
The agent AlphaStar, which uses a multi-agent reinforcement learning algorithm, is evaluated and shown to have reached Grandmaster level, ranking among the top 0.2% of human players in the real-time strategy game StarCraft II.
A deep learning framework for neuroscience
TLDR
It is argued that a deep network is best understood in terms of the components used to design it (objective functions, architecture, and learning rules) rather than in terms of unit-by-unit computation.
Augmenting Supervised Learning by Meta-learning Unsupervised Local Rules
TLDR
It is speculated that some local, unsupervised learning occurs in the brain, and it is demonstrated that adding local, unsupervised rules to standard backpropagation actually improves the speed and robustness of learning.
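In spirit, such a combination can be pictured as a standard gradient step plus a small local correlation term; the snippet below is a toy illustration with made-up names and coefficients, not the paper's meta-learned rule.

```python
import numpy as np

def mixed_update(W, grad, pre, post, lr=1e-3, alpha=1e-4):
    # Backpropagation step plus a local, unsupervised Hebbian-style term
    # that depends only on the activations at this layer.
    return W - lr * grad + alpha * np.outer(post, pre)
```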
Weight Agnostic Neural Networks
TLDR
This work proposes a search method for neural network architectures that can already perform a task without any explicit weight training, and demonstrates that it can find minimal neural network architectures that perform several reinforcement learning tasks without weight training.
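The weight-agnostic evaluation protocol amounts to scoring an architecture with every connection tied to a single shared value and averaging over a few such values; the helper names below (forward_fn, env_rollout) are hypothetical stand-ins for a network evaluator and an environment loop.

```python
import numpy as np

def evaluate_weight_agnostic(forward_fn, env_rollout,
                             shared_weights=(-2.0, -1.0, 1.0, 2.0)):
    """Mean return of an architecture over several shared weight values.

    forward_fn(obs, w) must run the candidate network with every
    connection set to the same scalar w; env_rollout(policy) returns
    the episode return for a policy mapping observations to actions.
    """
    returns = [env_rollout(lambda obs: forward_fn(obs, w))
               for w in shared_weights]
    return float(np.mean(returns))
```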
Brain and Intelligence in Vertebrates
TLDR
A comprehensive survey of comparative studies of brain organization, brain size, and intellectual capacity in vertebrates concludes that vertebrates have a higher intellectual capacity than previously thought.
...