Corpus ID: 4714955

Differentiable plasticity: training plastic neural networks with backpropagation

@article{Miconi2018DifferentiablePT,
  title={Differentiable plasticity: training plastic neural networks with backpropagation},
  author={Thomas Miconi and Jeff Clune and Kenneth O. Stanley},
  journal={ArXiv},
  year={2018},
  volume={abs/1804.02464}
}
How can we build agents that keep learning from experience, quickly and efficiently, after their initial training? [...] Key Method: First, recurrent plastic networks with more than two million parameters can be trained to memorize and reconstruct sets of novel, high-dimensional (1,000+ pixel) natural images not seen during training. Crucially, traditional non-plastic recurrent networks fail to solve this task. Furthermore, trained plastic networks can also solve generic meta-learning tasks such as the Omniglot task, with competitive results and little parameter overhead.
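To make the mechanism concrete, the following is a minimal PyTorch sketch of differentiable plasticity as the abstract describes it: each connection combines a fixed weight with a Hebbian trace scaled by a trained plasticity coefficient, and the structural parameters (weights, plasticity coefficients, and the Hebbian learning rate) are all optimized by backpropagation. Class and variable names here are illustrative, not taken from the authors' released code.

import torch
import torch.nn as nn

class PlasticRNN(nn.Module):
    """Recurrent layer with Hebbian plastic connections (a sketch, not the authors' code)."""
    def __init__(self, n):
        super().__init__()
        self.n = n
        self.w = nn.Parameter(0.01 * torch.randn(n, n))      # fixed component of each connection
        self.alpha = nn.Parameter(0.01 * torch.randn(n, n))  # per-connection plasticity coefficient
        self.eta = nn.Parameter(torch.tensor(0.01))          # learned Hebbian learning rate

    def forward(self, x, h, hebb):
        # Effective weight = fixed part + plasticity coefficient * Hebbian trace.
        w_eff = self.w + self.alpha * hebb                                  # (batch, n, n)
        h_new = torch.tanh(x + torch.bmm(h.unsqueeze(1), w_eff).squeeze(1))
        # Decaying Hebbian trace: running average of pre/post outer products.
        hebb = (1 - self.eta) * hebb + self.eta * torch.bmm(h.unsqueeze(2), h_new.unsqueeze(1))
        return h_new, hebb

    def init_state(self, batch):
        # The Hebbian trace is reset at the start of each episode; w, alpha,
        # and eta persist and are trained by backpropagation across episodes.
        return torch.zeros(batch, self.n), torch.zeros(batch, self.n, self.n)

In meta-training, one would unroll an episode from init_state(), compute a task loss over the episode, and call backward() to update w, alpha, and eta; within an episode only the Hebbian trace changes, which is what lets the trained network keep learning from experience at deployment time.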

Paper Mentions

Learning to Learn with Feedback and Local Plasticity
Interest in biologically inspired alternatives to backpropagation is driven by the desire to both advance connections between deep learning and neuroscience and address backpropagation's shortcomings.
SpikePropamine: Differentiable Plasticity in Spiking Neural Networks
TLDR
The experimental results show that SNNs augmented with differentiable plasticity are sufficient for solving a set of challenging temporal learning tasks that a traditional SNN fails to solve, even in the presence of significant noise.
Do What Nature Did To Us: Evolving Plastic Recurrent Neural Networks For Task Generalization
  • Fan Wang, Hao Tian, +4 authors Haifeng Wang
  • Computer Science
  • ArXiv
  • 2021
While artificial neural networks (ANNs) have been widely adopted in machine learning, researchers are increasingly attentive to the gaps between ANNs and biological neural networks (BNNs).
Learning with Plasticity Rules: Generalization and Robustness
  • 2020
Brains learn robustly and generalize effortlessly between different learning tasks; in contrast, robustness and generalization across tasks are well-known weaknesses of artificial neural nets.
Enabling Continual Learning with Differentiable Hebbian Plasticity
TLDR
Proposes a Differentiable Hebbian Consolidation model composed of a DHP Softmax layer, which adds a rapidly learning plastic component to the fixed parameters of the softmax output layer, enabling learned representations to be retained over a longer timescale.
A meta-learning approach to (re)discover plasticity rules that carve a desired function into a neural network
TLDR
This work is a proof of principle of an automated and unbiased approach to unveil synaptic plasticity rules that obey biological constraints and can solve complex functions.
Born to Learn: the Inspiration, Progress, and Future of Evolved Plastic Artificial Neural Networks
TLDR
A variety of inspiring ideas are brought together that define the field of Evolved Plastic Artificial Neural Networks, which may include a large variety of different neuron types and dynamics, network architectures, plasticity rules, and other factors.
Learning to solve the credit assignment problem
TLDR
A hybrid learning approach that learns to approximate the gradient and can match or exceed the performance of exact gradient-based learning in both feedforward and convolutional networks.
Learning to Continually Learn
TLDR
A Neuromodulated Meta-Learning Algorithm (ANML) enables continual learning without catastrophic forgetting at scale: it produces state-of-the-art continual learning performance, sequentially learning as many as 600 classes (over 9,000 SGD updates).
Designing neural networks through neuroevolution
TLDR
This Review looks at several key aspects of modern neuroevolution, including large-scale computing, the benefits of novelty and diversity, the power of indirect encoding, and the field's contributions to meta-learning and architecture search.

References

Showing 1-10 of 37 references
Born to Learn: the Inspiration, Progress, and Future of Evolved Plastic Artificial Neural Networks
TLDR
A variety of inspiring ideas are brought together that define the field of Evolved Plastic Artificial Neural Networks, which may include a large variety of different neuron types and dynamics, network architectures, plasticity rules, and other factors.
Learning to reinforcement learn
TLDR
This work introduces a novel approach to deep meta-reinforcement learning: a system trained using one RL algorithm whose recurrent dynamics implement a second, quite separate RL procedure.
Learning a synaptic learning rule
TLDR
Discusses an original approach to neural modeling based on the idea of searching, with learning methods, for a biologically plausible synaptic learning rule that yields networks able to learn to perform difficult tasks.
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
TLDR
This paper proposes to represent a "fast" reinforcement learning algorithm as a recurrent neural network (RNN) and learn it from data; the fast algorithm is encoded in the weights of the RNN, which are learned slowly through a general-purpose ("slow") RL algorithm.
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning.
Human-level control through deep reinforcement learning
TLDR
This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Evolutionary Advantages of Neuromodulated Plasticity in Dynamic, Reward-based Scenarios
TLDR
It is concluded that modulatory neurons evolve autonomously in the proposed learning tasks, allowing for increased learning and memory capabilities.
Asynchronous Methods for Deep Reinforcement Learning
TLDR
A conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers and shows that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
A Simple Neural Attentive Meta-Learner
TLDR
This work proposes a class of simple and generic meta-learner architectures that use a novel combination of temporal convolutions and soft attention; the former to aggregate information from past experience and the latter to pinpoint specific pieces of information.
One-shot Learning with Memory-Augmented Neural Networks
TLDR
The ability of a memory-augmented neural network to rapidly assimilate new data, and leverage this data to make accurate predictions after only a few samples, is demonstrated.