Catastrophic Forgetting, Rehearsal and Pseudorehearsal

@article{Robins1995CatastrophicFR,
  title={Catastrophic Forgetting, Rehearsal and Pseudorehearsal},
  author={Anthony V. Robins},
  journal={Connect. Sci.},
  year={1995},
  volume={7},
  pages={123-146}
}
  • A. Robins
  • Published 1 June 1995
  • Computer Science
  • Connect. Sci.
This paper reviews the problem of catastrophic forgetting (the loss or disruption of previously learned information when new information is learned) in neural networks, and explores rehearsal mechanisms (the retraining of some of the previously learned information as the new information is added) as a potential solution. It then develops further rehearsal regimes that are more effective than recency rehearsal.
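To make the rehearsal idea concrete, here is a minimal pseudorehearsal sketch, assuming a small feed-forward network in PyTorch; the network shape, item counts, and hyperparameters are illustrative and not taken from the paper's experiments. Pseudo-items are formed by pairing random inputs with the current network's outputs, and these are interleaved with the new items during training so the previously learned mapping is retrained alongside the new one.

# Minimal pseudorehearsal sketch (illustrative shapes and hyperparameters,
# not the paper's experimental setup).
import torch
import torch.nn as nn

def make_pseudoitems(net, n_items, in_dim):
    # Pseudo-items: random inputs paired with the *current* network's
    # outputs, approximating the previously learned mapping.
    with torch.no_grad():
        x = torch.rand(n_items, in_dim)
        return x, net(x)

def train_with_pseudorehearsal(net, new_x, new_y, n_pseudo=64, epochs=200, lr=0.5):
    # Generate pseudo-items before any new learning, then interleave them
    # with the new items in every update.
    pseudo_x, pseudo_y = make_pseudoitems(net, n_pseudo, new_x.shape[1])
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(net(new_x), new_y) + loss_fn(net(pseudo_x), pseudo_y)
        loss.backward()
        opt.step()
    return net

net = nn.Sequential(nn.Linear(8, 16), nn.Sigmoid(), nn.Linear(16, 4), nn.Sigmoid())
new_x, new_y = torch.rand(10, 8), torch.rand(10, 4)
train_with_pseudorehearsal(net, new_x, new_y)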
Catastrophic forgetting in connectionist networks
  • R. French
  • Computer Science
    Trends in Cognitive Sciences
  • 1999
Catastrophic Forgetting and the Pseudorehearsal Solution in Hopfield-type Networks
TLDR
This paper extends the exploration of pseudorehearsal to a Hopfield-type net, and shows that the extra attractors created in state space during learning can in fact be useful in preserving the learned population.
GRIm-RePR: Prioritising Generating Important Features for Pseudo-Rehearsal
TLDR
This work improves the generator by introducing a second discriminator into the Generative Adversarial Network which learns to classify between real and fake items from the intermediate activation patterns that they produce when fed through a continual learning agent.
Avoiding catastrophic forgetting by coupling two reverberating neural networks
TLDR
This work proposes a two-network architecture in which new items are learned by a first network concurrently with internal pseudo-items originating from a second network, and implements a refreshing mechanism using the old information.
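A rough sketch of this two-network coupling, under the same illustrative PyTorch assumptions as the earlier snippet (single-pass pseudo-item generation stands in for the reverberating process): the first network learns new items interleaved with pseudo-items drawn from the second network, and the second network is then refreshed from pseudo-items drawn from the updated first network.

# Rough sketch of the coupled two-network idea (names and the single-pass
# pseudo-item generation are simplifications, not the paper's exact mechanism).
import torch
import torch.nn as nn

def pseudo_items(net, n, in_dim):
    # Random inputs paired with a network's current responses.
    with torch.no_grad():
        x = torch.rand(n, in_dim)
        return x, net(x)

def fit(net, batches, epochs=200, lr=0.5):
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = sum(nn.functional.mse_loss(net(x), y) for x, y in batches)
        loss.backward()
        opt.step()

def learn_new_items(net_a, net_b, new_x, new_y, n_pseudo=64):
    # 1. net_a learns the new items together with pseudo-items from net_b,
    #    which carry the previously stored information.
    fit(net_a, [(new_x, new_y), pseudo_items(net_b, n_pseudo, new_x.shape[1])])
    # 2. net_b is refreshed from pseudo-items generated by the updated net_a,
    #    so both old and new information are transferred back.
    fit(net_b, [pseudo_items(net_a, n_pseudo, new_x.shape[1])])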
Using World Models for Pseudo-Rehearsal in Continual Learning
TLDR
This work proposes a method to continually learn internal world models through the interleaving of internally generated rollouts from past experiences, and shows this method can sequentially learn unsupervised temporal prediction, without task labels, in a disparate set of Atari games.
Beneficial Effect of Combined Replay for Continual Learning
TLDR
The combined replay approach consists of a hybrid architecture that generates pseudo-samples through a reinjection sampling procedure (i.e. iterative sampling) and employs the data stored in tiny memory buffers as seeds to enhance the pseudo-sample generation process.
Continual Learning Using World Models for Pseudo-Rehearsal
TLDR
This work proposes a method to continually learn these internal world models through the interleaving of internally generated episodes of past experiences (i.e., pseudo-rehearsal), and shows that modern policy-gradient-based reinforcement learning algorithms can use this internal model to continually learn to optimize reward based on the world model's representation of the environment.
Sequential learning in neural networks: A review and a discussion of pseudorehearsal based methods
  • A. Robins
  • Computer Science
    Intell. Data Anal.
  • 2004
TLDR
This review explores the topic of sequential learning, where information to be learned and retained arrives in separate episodes over time, in the context of artificial neural networks, and examines the pseudorehearsal mechanism, which is an effective solution to the catastrophic forgetting problem in back propagation type networks.
Noise, Pseudopatterns, and Information Transfer in the Brain
TLDR
It is shown that a simpler single-network approach that makes use only of noise passing through the network can also significantly reduce catastrophic interference, and it is speculated that this kind of method might be involved in human learning.

References

SHOWING 1-10 OF 27 REFERENCES
Catastrophic forgetting in neural networks: the role of rehearsal mechanisms
  • A. Robins
  • Computer Science
    Proceedings 1993 The First New Zealand International Two-Stream Conference on Artificial Neural Networks and Expert Systems
  • 1993
TLDR
The author suggests that sweep rehearsal extends the approach of rehearsal mechanisms as far as is practicable, and exposes their eventual limitations.
Semi-distributed Representations and Catastrophic Forgetting in Connectionist Networks
TLDR
A simple algorithm, called activation sharpening, is presented that allows a standard feed-forward backpropagation network to develop semi-distributed representations, thereby reducing the problem of catastrophic forgetting.
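For illustration, a toy version of such an activation-sharpening step is sketched below; the exact rule (the k most active hidden units nudged toward 1 and the rest toward 0 by a factor alpha, enforced here through an auxiliary loss) is one plausible reading of the summary above rather than the paper's precise procedure.

# Toy activation-sharpening sketch (the sharpening rule, factor alpha, and the
# auxiliary-loss formulation are assumptions, not the paper's exact algorithm).
import torch
import torch.nn as nn

def sharpen(hidden, k=2, alpha=0.3):
    # Sharpened target: the k most active units move toward 1,
    # all other units move toward 0.
    with torch.no_grad():
        h = hidden.detach()
        target = h - alpha * h                      # default: push down
        topk = h.topk(k, dim=1).indices
        boosted = h + alpha * (1.0 - h)             # top-k: push up
        target.scatter_(1, topk, boosted.gather(1, topk))
    return target

class SharpenedNet(nn.Module):
    def __init__(self, in_dim=8, hid=16, out_dim=4):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hid)
        self.fc2 = nn.Linear(hid, out_dim)

    def forward(self, x):
        h = torch.sigmoid(self.fc1(x))
        return torch.sigmoid(self.fc2(h)), h

def train_step(net, opt, x, y, sharpen_weight=0.1):
    opt.zero_grad()
    out, h = net(x)
    # Task loss plus a term pulling hidden activations toward their
    # sharpened values, encouraging semi-distributed hidden codes.
    loss = nn.functional.mse_loss(out, y) \
           + sharpen_weight * nn.functional.mse_loss(h, sharpen(h))
    loss.backward()
    opt.step()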
Catastrophic Interference is Eliminated in Pretrained Networks
When modeling strictly sequential experimental memory tasks, such as serial list learning, connectionist networks appear to experience excessive retroactive interference, known as catastrophic interference.
Connectionist models of recognition memory: constraints imposed by learning and forgetting functions.
  • R. Ratcliff
  • Psychology, Computer Science
    Psychological review
  • 1990
TLDR
The problems discussed impose limitations on connectionist models applied to human memory and to tasks where the information to be learned is not all available during learning.
Competitive Learning: From Interactive Activation to Adaptive Resonance
TLDR
Comparisons are made between several network models of cognitive processing: competitive learning, interactive activation, adaptive resonance, and back propagation, which suggest different levels of processing and interaction rules for the analysis of word recognition.
Networks of Formal Neurons and Memory Palimpsests
TLDR
A general formulation allows for an exploration of some basic issues in learning theory, and two learning schemes are constructed which avoid deterioration from overloading and continue learning and forgetting with a stationary capacity.
Neural network models of list learning
TLDR
A neural network model is developed which captures the results of human memory experiments on learning lists of items, and Hopfield–Parisi type neural networks are used to model many of the simpler features of order effects in serial recall.
The ART of adaptive pattern recognition by a self-organizing neural network
TLDR
ART architectures are discussed that are neural networks that self-organize stable recognition codes in real time in response to arbitrary sequences of input patterns, which opens up the possibility of applying ART systems to more general problems of adaptively processing large abstract information sources and databases.