Self-refreshing memory in artificial neural networks: learning temporal sequences without catastrophic forgetting

@article{Ans2004SelfrefreshingMI,
  title={Self-refreshing memory in artificial neural networks: learning temporal sequences without catastrophic forgetting},
  author={Bernard Ans and St{\'e}phane Rousset and Robert M. French and Serban C. Musca},
  journal={Connection Science},
  year={2004},
  volume={16},
  pages={71--99}
}
While humans forget gradually, highly distributed connectionist networks forget catastrophically: newly learned information often completely erases previously learned information. This is not just implausible cognitively, but disastrous practically. However, it is not easy in connectionist cognitive modelling to keep away from highly distributed neural networks, if only because of their ability to generalize. A realistic and effective system that solves the problem of catastrophic interference… 
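
As a concrete illustration of the memory self-refreshing idea described above, the sketch below (a minimal NumPy toy, not the authors' reverberating architecture) trains a small network on one set of patterns, generates pseudo-patterns by feeding random inputs to the trained network and recording its own outputs, and then interleaves those pseudo-patterns with a second training set. All network sizes, learning rates and pattern counts are illustrative assumptions.

# A minimal sketch of pseudo-rehearsal / memory self-refreshing: before a new
# task is learned, random inputs are pushed through the already-trained network
# and the resulting input/output "pseudo-patterns" are interleaved with the new
# training data.
import numpy as np

rng = np.random.default_rng(0)

def init_net(n_in, n_hid, n_out):
    return {"W1": rng.normal(0, 0.5, (n_in, n_hid)),
            "W2": rng.normal(0, 0.5, (n_hid, n_out))}

def forward(net, x):
    h = np.tanh(x @ net["W1"])
    return h, 1 / (1 + np.exp(-(h @ net["W2"])))   # sigmoid output

def train(net, X, Y, epochs=2000, lr=0.1):
    for _ in range(epochs):
        h, y = forward(net, X)
        err = y - Y                                  # dMSE/dy (up to a constant)
        grad_W2 = h.T @ (err * y * (1 - y))
        grad_W1 = X.T @ (((err * y * (1 - y)) @ net["W2"].T) * (1 - h**2))
        net["W2"] -= lr * grad_W2 / len(X)
        net["W1"] -= lr * grad_W1 / len(X)

def pseudo_patterns(net, n_items, n_in):
    """Pseudo-patterns: random binary inputs plus the network's own responses."""
    Xp = rng.integers(0, 2, (n_items, n_in)).astype(float)
    _, Yp = forward(net, Xp)
    return Xp, Yp

# Task A, then task B; task-A knowledge is preserved only via pseudo-patterns.
n_in, n_out = 8, 4
net = init_net(n_in, 16, n_out)
XA = rng.integers(0, 2, (20, n_in)).astype(float)
YA = rng.integers(0, 2, (20, n_out)).astype(float)
train(net, XA, YA)

Xp, Yp = pseudo_patterns(net, n_items=100, n_in=n_in)   # "memory self-refreshing"
XB = rng.integers(0, 2, (20, n_in)).astype(float)
YB = rng.integers(0, 2, (20, n_out)).astype(float)
train(net, np.vstack([XB, Xp]), np.vstack([YB, Yp]))    # interleave new + pseudo

_, yA_after = forward(net, XA)
print("Task-A retention (mean abs error):", np.abs(yA_after - YA).mean())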

Sequential Learning in Distributed Neural Networks without Catastrophic Forgetting: A Single and Realistic Self-Refreshing Memory Can Do It

TLDR
Simulations of sequential learning tasks show that the proposed self-refreshing memory, based on a single-network architecture that can learn its own productions reflecting its history, is able to avoid catastrophic forgetting.

Creating False Memories in Humans with an Artificial Neural Network: Implications for Theories of Memory Consolidation

TLDR
This work tests whether false memories of never-seen (target) items can be created in humans by exposure to pseudo-patterns generated from random input in an artificial neural network; the results indicate that humans, like distributed neural networks, are able to make use of the information on which the memory self-refreshing mechanism is based.

Enabling Continual Learning with Differentiable Hebbian Plasticity

TLDR
A Differentiable Hebbian Consolidation model is proposed, composed of a DHP Softmax layer that adds a rapid-learning plastic component to the fixed parameters of the softmax output layer, enabling learned representations to be retained over a longer timescale.

Differentiable Hebbian Consolidation for Continual Lifelong Learning

TLDR
A Differentiable Hebbian Consolidation model replaces the traditional softmax layer with a Differentiable Hebbian Plasticity (DHP) Softmax that adds a fast-learning plastic component to the fixed (slowly changing) parameters of the softmax output layer; it outperforms comparable baselines by reducing forgetting.
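
The general mechanism shared by the two DHP papers above can be illustrated with a toy sketch: a softmax output layer whose logits come from slow weights plus a fast Hebbian component strengthened by a decaying outer-product rule. The update rule and hyper-parameters below are assumptions for illustration, not the papers' exact DHP Softmax formulation.

# A toy Hebbian-augmented softmax layer: logits come from slow weights W plus a
# fast plastic component H updated with a decaying outer-product (Hebbian) rule
# after each example.
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

n_feat, n_class = 32, 10
W = rng.normal(0, 0.1, (n_feat, n_class))   # slow weights (trained by SGD as usual)
H = np.zeros((n_feat, n_class))             # fast Hebbian component
eta, decay = 0.5, 0.9                       # assumed plasticity hyper-parameters

def dhp_forward(h_feat):
    """Class probabilities from slow + fast weights."""
    return softmax(h_feat @ (W + H))

def hebbian_update(h_feat, target_class):
    """Strengthen the feature-to-class association that was just seen."""
    global H
    post = np.zeros(n_class)
    post[target_class] = 1.0
    H = decay * H + eta * np.outer(h_feat, post)

# Toy usage: one "episode" of features for class 3 leaves a trace in H, so the
# same features are classified more confidently afterwards.
h_feat = rng.normal(0, 1, n_feat)
before = dhp_forward(h_feat)[3]
hebbian_update(h_feat, target_class=3)
after = dhp_forward(h_feat)[3]
print(f"p(class=3) before={before:.3f} after={after:.3f}")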

Self-Refreshing SOM as a Semantic Memory Model

TLDR
Simulations comparing the performance of a self-refreshing SOM to that of a standard SOM on the task of learning three separate data sets sequentially show that pseudorehearsal can effectively reduce catastrophic forgetting.

Continual Learning Using World Models for Pseudo-Rehearsal

TLDR
This work proposes a method to continually learn these internal world models by interleaving internally generated episodes of past experiences (i.e., pseudo-rehearsal), and shows that modern policy-gradient reinforcement learning algorithms can use this internal model to continually learn to optimize reward based on the world model's representation of the environment.
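
A rough sketch of the world-model pseudo-rehearsal loop, using a linear least-squares transition model in place of the paper's learned generative world model and policy-gradient learner: the model is fit to earlier experience, imagined rollouts are sampled from it, and those imagined transitions are interleaved with new experience when the model is refit. All dimensions and dynamics below are made up for illustration.

# Pseudo-rehearsal with a toy "world model": fit a transition model, imagine
# rollouts from it, and mix those imagined transitions into the next update.
import numpy as np

rng = np.random.default_rng(2)
state_dim, act_dim = 4, 2

def fit(X_sa, S_next):
    """Least-squares transition model: s' ~ [s, a] @ M."""
    M, *_ = np.linalg.lstsq(X_sa, S_next, rcond=None)
    return M

def imagine(M, n_rollouts=20, steps=10):
    """Imagined transitions: roll the model forward from random starts under random actions."""
    X_list, Y_list = [], []
    for _ in range(n_rollouts):
        s = rng.normal(0, 1, state_dim)
        for _ in range(steps):
            a = rng.normal(0, 1, act_dim)
            x = np.concatenate([s, a])
            s = x @ M                      # the model's own prediction
            X_list.append(x)
            Y_list.append(s)
    return np.array(X_list), np.array(Y_list)

# Experience from an earlier task (toy linear dynamics).
true_M = rng.normal(0, 0.3, (state_dim + act_dim, state_dim))
X_old = rng.normal(0, 1, (300, state_dim + act_dim))
M = fit(X_old, X_old @ true_M)

# New experience arrives; imagined old rollouts are interleaved with it.
X_imag, S_imag = imagine(M)
X_new = rng.normal(0, 1, (300, state_dim + act_dim))
S_new = X_new @ true_M                     # toy dynamics; the point is the interleaving step
M = fit(np.vstack([X_new, X_imag]), np.vstack([S_new, S_imag]))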

Reduction of catastrophic forgetting with transfer learning and ternary output codes

TLDR
This work examines how training a neural network in accordance with latently learned output encodings drastically reduces catastrophic forgetting, yielding a technique that makes it easier, rather than harder, to learn new tasks while retaining existing knowledge.

Using World Models for Pseudo-Rehearsal in Continual Learning

TLDR
This work proposes a method to continually learn internal world models through the interleaving of internally generated rollouts from past experiences, and shows this method can sequentially learn unsupervised temporal prediction, without task labels, in a disparate set of Atari games.

Sequential learning in neural networks: A review and a discussion of pseudorehearsal based methods

  • A. Robins
  • Computer Science
    Intell. Data Anal.
  • 2004
TLDR
This review explores the topic of sequential learning, where information to be learned and retained arrives in separate episodes over time, in the context of artificial neural networks, and examines the pseudorehearsal mechanism, which is an effective solution to the catastrophic forgetting problem in back propagation type networks.

Differentiable Hebbian Plasticity for Continual Learning

TLDR
A Differentiable Hebbian Plasticity Softmax layer is proposed that adds a fast-learning plastic component to the slow weights of the softmax output layer; this component behaves as a compressed episodic memory that reactivates existing memory traces while creating new ones.
...

References

SHOWING 1-10 OF 63 REFERENCES

Catastrophic forgetting in connectionist networks

  • R. French
  • Computer Science
    Trends in Cognitive Sciences
  • 1999

Semi-distributed Representations and Catastrophic Forgetting in Connectionist Networks

TLDR
A simple algorithm, called activation sharpening, is presented that allows a standard feed-forward backpropagation network to develop semi-distributed representations, thereby reducing the problem of catastrophic forgetting.
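
A minimal sketch of the activation-sharpening step, assuming sigmoid hidden units, a fixed number k of units sharpened toward 1 while the rest are pushed toward 0, and a simple squared-error pull of the hidden activations toward their sharpened values; the sharpening factor, k and learning rate are illustrative, not values from the paper.

# Activation sharpening (toy version): after the forward pass, nudge the k most
# active hidden units toward 1 and the rest toward 0, then adjust the
# input-to-hidden weights so the hidden layer reproduces the sharpened pattern,
# encouraging semi-distributed (sparser) hidden codes.
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def sharpen(h, k=2, alpha=0.3):
    """Move the k strongest activations toward 1 and the others toward 0."""
    target = h.copy()
    mask = np.zeros_like(h, dtype=bool)
    mask[np.argsort(h)[-k:]] = True
    target[mask] += alpha * (1 - target[mask])
    target[~mask] -= alpha * target[~mask]
    return target

n_in, n_hid = 8, 6
W1 = rng.normal(0, 0.5, (n_in, n_hid))

def sharpening_step(x, lr=0.1):
    """One extra weight update pulling hidden activations toward their sharpened values."""
    global W1
    h = sigmoid(x @ W1)
    t = sharpen(h)
    delta = (t - h) * h * (1 - h)      # descent direction for 0.5*||h - t||^2 at the pre-activations
    W1 += lr * np.outer(x, delta)

x = rng.integers(0, 2, n_in).astype(float)
print("before:", np.round(sigmoid(x @ W1), 2))
for _ in range(50):
    sharpening_step(x)
print("after: ", np.round(sigmoid(x @ W1), 2))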

Catastrophic Interference is Eliminated in Pretrained Networks

When modeling strictly sequential experimental memory tasks, such as serial list learning, connectionist networks appear to experience excessive retroactive interference, known as catastrophic interference…

Avoiding catastrophic forgetting by coupling two reverberating neural networks

TLDR
This work proposes a two-network architecture in which new items are learned by a first network concurrently with internal pseudo-items originating from a second network, implementing a refreshing mechanism using the old information.

Catastrophic Forgetting and the Pseudorehearsal Solution in Hopfield-type Networks

TLDR
This paper extends the exploration of pseudorehearsal to a Hopfield-type net, and shows that the extra attractors created in state space during learning can in fact be useful in preserving the learned population.

Pseudo-recurrent Connectionist Networks: An Approach to the 'Sensitivity-Stability' Dilemma

TLDR
A 'pseudo-recurrent' memory model is presented here that partitions a connectionist network into two functionally distinct, but continually interacting areas: one area serves as a final-storage area for representations; the other is an early-processing area where new representations are processed.

Catastrophic Forgetting, Rehearsal and Pseudorehearsal

TLDR
A solution to the problem of catastrophic forgetting in neural networks, 'pseudorehearsal', is described: a method that provides the advantages of rehearsal without requiring any access to the previously learned information (the original training population) itself.

Incremental sequence learning

TLDR
An alternative model based on Maskara & Noetzel's (1991) Auto-Associative Recurrent Network is suggested as a way to overcome the SRN model’s failure to account for human performance in several experimental situations meant to test the model's specific predictions.

Sparse Distributed Memory

TLDR
Pentti Kanerva's Sparse Distributed Memory presents a mathematically elegant theory of human long-term memory, whose proposed architecture resembles the cortex of the cerebellum, and provides an overall perspective on neural systems.
...