Catastrophic forgetting in connectionist networks

@article{French1999CatastrophicFI,
  title={Catastrophic forgetting in connectionist networks},
  author={Robert M. French},
  journal={Trends in Cognitive Sciences},
  year={1999},
  volume={3},
  pages={128-135}
}

Citations

Mitigate Catastrophic Forgetting by Varying Goals

It is found that varying goals can mitigate catastrophic forgetting in a CIFAR-10 based classification problem, and that when learning a large set of goals, a relatively small switching interval is required to retain this mitigating effect.
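
As a rough illustration of the schedule this finding describes, the sketch below interleaves two hypothetical toy tasks under a configurable switching interval; the linear model, Gaussian data, and all names are stand-ins invented here, not the paper's CIFAR-10 setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in tasks: two small Gaussian 2-class problems.
def make_task(offset):
    X = rng.normal(size=(200, 10)) + offset
    y = (rng.random(200) < 0.5).astype(int)
    X[y == 1] += 1.0                      # shift class 1 so it is learnable
    return X, y

tasks = [make_task(0.0), make_task(3.0)]

W = np.zeros((10, 2))                     # tiny linear softmax classifier
lr, switch_interval = 0.1, 50             # small interval: frequent switching

for step in range(2000):
    task = (step // switch_interval) % len(tasks)   # vary the goal over time
    X, y = tasks[task]
    i = rng.integers(len(X))
    logits = X[i] @ W
    p = np.exp(logits - logits.max())
    p /= p.sum()
    W -= lr * np.outer(X[i], p - np.eye(2)[y[i]])   # cross-entropy SGD step
```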

The Evolution of Minimal Catastrophic Forgetting in Neural Systems

This paper aims to show how simulated evolution can be used to generate neural network models with significantly less catastrophic forgetting than traditionally formulated models, and presents a series of simulation results.
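A toy sketch of the evolutionary idea follows, assuming a linear model and a fitness that penalizes growth in old-task loss after new-task training; the genome (here the initial weights), the tasks, and all hyperparameters are invented for illustration and are not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy tasks whose targets live in different input dimensions.
Xa = rng.normal(size=(40, 8)); ya = Xa[:, :4] @ rng.normal(size=4)
Xb = rng.normal(size=(40, 8)); yb = Xb[:, 4:] @ rng.normal(size=4)

def forgetting(w_init, steps=30, lr=0.05):
    """Train on task A then task B for a limited number of steps; return
    how much the task-A loss grows, i.e. the forgetting selected against."""
    w = w_init.copy()
    for _ in range(steps):
        w -= lr * Xa.T @ (Xa @ w - ya) / len(Xa)
    before = np.mean((Xa @ w - ya) ** 2)
    for _ in range(steps):
        w -= lr * Xb.T @ (Xb @ w - yb) / len(Xb)
    return float(np.mean((Xa @ w - ya) ** 2) - before)

# (mu + lambda)-style evolution over the model's *initial* weights.
pop = [rng.normal(scale=0.5, size=8) for _ in range(20)]
for gen in range(15):
    parents = sorted(pop, key=forgetting)[:5]   # keep low-forgetting genomes
    pop = parents + [p + rng.normal(scale=0.1, size=8)
                     for p in parents for _ in range(3)]

print("forgetting of best genome:", forgetting(min(pop, key=forgetting)))
```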

Self-refreshing memory in artificial neural networks: learning temporal sequences without catastrophic forgetting

A dual-network architecture is developed in which self-generated pseudopatterns reflect (non-temporally) all the sequences of temporally ordered items previously learned.

Lifelong Neural Predictive Coding: Learning Cumulatively Online without Forgetting

A new kind of connectionist architecture is proposed, the Sequential Neural Coding Network, which is robust to forgetting when learning from streams of data points and which, unlike today's networks, does not learn via back-propagation of errors.
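
The paper's Sequential Neural Coding Network is not reproduced here, but the flavor of learning without back-propagation can be suggested with a generic predictive-coding sketch, in which a hidden state settles against local prediction errors and weights are updated from purely local quantities; all sizes and data below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def f(z):  return np.tanh(z)
def df(z): return 1.0 - np.tanh(z) ** 2

W1 = rng.normal(scale=0.3, size=(6, 10))  # predicts hidden from input
W2 = rng.normal(scale=0.3, size=(3, 6))   # predicts output from hidden

x0 = rng.normal(size=10)        # clamped input
y  = np.array([1.0, 0.0, 0.0])  # clamped target

for _ in range(100):            # one training example, repeated
    pre1 = W1 @ x0
    x1 = f(pre1)                # initialize hidden state at its prediction
    for _ in range(20):         # inference: settle the hidden state
        pre2 = W2 @ x1
        e1 = x1 - f(pre1)       # local prediction errors
        e2 = y - f(pre2)
        x1 -= 0.1 * (e1 - W2.T @ (e2 * df(pre2)))
    # weight updates use only locally available errors and activities
    W1 += 0.01 * np.outer((x1 - f(pre1)) * df(pre1), x0)
    W2 += 0.01 * np.outer((y - f(W2 @ x1)) * df(W2 @ x1), x1)
```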

Brain-Like Replay for Continual Learning with Artificial Neural Networks

This work proposes a new, more brain-like variant of replay in which internal or hidden representations, generated by the network's own context-modulated feedback connections, are replayed; it achieves acceptable performance on the challenging problem of class-incremental learning on natural images without relying on stored data.
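
A much-simplified sketch of replaying at the level of hidden representations (rather than raw inputs) follows. Here the old representations are simply stored and the lower layer is kept fixed, whereas the paper generates them with the network's own feedback connections; everything below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

def relu(x): return np.maximum(x, 0.0)

# Hypothetical two-stage net: fixed feature layer W1, trainable readout W2.
W1 = rng.normal(scale=0.3, size=(20, 12))
W2 = rng.normal(scale=0.3, size=(12, 3))

# Buffer of *hidden* representations from earlier tasks (stored here;
# generated by context-modulated feedback connections in the paper).
H_old = relu(rng.normal(size=(100, 20)) @ W1)
Y_old = np.eye(3)[rng.integers(3, size=100)]

X_new = rng.normal(size=(60, 20))
Y_new = np.eye(3)[rng.integers(3, size=60)]

lr = 0.05
for _ in range(300):
    H_new = relu(X_new @ W1)          # new data uses the full network
    H = np.vstack([H_new, H_old])     # old knowledge re-enters mid-network
    Y = np.vstack([Y_new, Y_old])
    logits = H @ W2
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)
    W2 -= lr * H.T @ (P - Y) / len(H)  # only layers above the replay point train
```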

Natural Way to Overcome the Catastrophic Forgetting in Neural Networks

This paper proposes an alternative method for overcoming catastrophic forgetting based on the total absolute signal passing through each connection in the network; the method has a simple implementation and appears close to the processes by which animal brains preserve previously learned skills during subsequent learning.
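
A toy sketch of this style of method, assuming a linear model: accumulate the absolute signal |input x weight| through each connection during the first task, then penalize movement of high-signal connections on the next task. The tasks, normalization, and penalty strength below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical toy regression tasks.
Xa = rng.normal(size=(100, 10)); ya = Xa @ rng.normal(size=10)
Xb = rng.normal(size=(100, 10)); yb = Xb @ rng.normal(size=10)

w, lr = np.zeros(10), 0.05

# Task A: train while accumulating absolute signal through each connection.
signal = np.zeros(10)
for _ in range(300):
    i = rng.integers(len(Xa))
    w -= lr * (Xa[i] @ w - ya[i]) * Xa[i]
    signal += np.abs(Xa[i] * w)        # |input * weight| crossing each link

w_a = w.copy()
omega = signal / signal.max()          # normalized per-connection importance

# Task B: quadratic penalty anchors connections that carried much signal.
lam = 5.0
for _ in range(300):
    i = rng.integers(len(Xb))
    grad = (Xb[i] @ w - yb[i]) * Xb[i] + lam * omega * (w - w_a)
    w -= lr * grad
```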

Adaptation of Artificial Neural Networks Avoiding Catastrophic Forgetting

The results show that the combination of the proposed approaches mitigates catastrophic forgetting effects and consistently outperforms classical feature-space transformations.

Understanding Catastrophic Forgetting and Remembering in Continual Learning with Optimal Relevance Mapping

This work shows that Relevance Mapping Networks (RMNs) learn an optimized representational overlap that overcomes the twin problems of catastrophic forgetting and catastrophic remembering, achieving state-of-the-art performance across many common continual-learning benchmarks.

Sequential Learning in Distributed Neural Networks without Catastrophic Forgetting: A Single and Realistic Self-Refreshing Memory Can Do It

Simulations of sequential learning tasks show that the proposed self-refreshing memory, based on a single-network architecture that learns from its own productions reflecting its history, is able to avoid catastrophic forgetting.
...

References

Showing 1-10 of 64 references

Semi-distributed Representations and Catastrophic Forgetting in Connectionist Networks

A simple algorithm, called activation sharpening, is presented that allows a standard feed-forward backpropagation network to develop semi-distributed representations, thereby reducing the problem of catastrophic forgetting.
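A minimal sketch of one sharpening step follows, under the simplifying assumptions of a single sigmoid hidden layer and a delta-rule nudge toward the sharpened activations; the k, alpha, and layer sizes are placeholders, and the full algorithm runs this alongside ordinary backpropagation of the task error.

```python
import numpy as np

rng = np.random.default_rng(5)

def sigmoid(z): return 1.0 / (1.0 + np.exp(-z))

def sharpen(h, k=1, alpha=0.3):
    """Nudge the k most active hidden units toward 1 and damp the rest,
    yielding a semi-distributed target for the hidden layer."""
    top = np.argsort(h)[-k:]
    target = h * (1.0 - alpha)                     # pull everything toward 0
    target[top] = h[top] + alpha * (1.0 - h[top])  # except the winners
    return target

W1 = rng.normal(scale=0.5, size=(8, 6))   # input-to-hidden weights
x = rng.normal(size=8)

h = sigmoid(x @ W1)
h_star = sharpen(h)
# Delta-rule step moving the hidden layer toward its sharpened version.
lr = 0.1
W1 += lr * np.outer(x, (h_star - h) * h * (1.0 - h))
```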

Using Semi-Distributed Representations to Overcome Catastrophic Forgetting in Connectionist Networks

A simple algorithm is presented that allows a standard feedforward backpropagation network to develop semi-distributed representations, thereby significantly reducing the problem of catastrophic forgetting.

Catastrophic Forgetting, Rehearsal and Pseudorehearsal

A solution to the problem of catastrophic forgetting in neural networks is described, 'pseudorehearsal', a method which provides the advantages of rehearsal without actually requiring any access to the previously learned information (the original training population) itself.
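The mechanism can be sketched compactly: random inputs are passed through the already-trained network, its own responses become pseudoitem targets, and these are interleaved with the new items during training. The toy network, data, and hyperparameters below are hypothetical, not the paper's original setup.

```python
import numpy as np

rng = np.random.default_rng(6)

def net(W, X): return np.tanh(X @ W)

# Assume W was already trained on some original population (not shown).
W = rng.normal(scale=0.4, size=(12, 3))

# Pseudorehearsal: random inputs go through the *current* network and its
# own responses become targets; the original training items are never used.
X_pseudo = rng.uniform(-1, 1, size=(128, 12))
Y_pseudo = net(W, X_pseudo)

# New items to be learned.
X_new = rng.normal(size=(20, 12))
Y_new = np.tanh(X_new @ rng.normal(scale=0.4, size=(12, 3)))

lr = 0.05
for _ in range(500):
    idx = rng.integers(len(X_pseudo), size=20)
    X = np.vstack([X_new, X_pseudo[idx]])  # interleave new items + pseudoitems
    Y = np.vstack([Y_new, Y_pseudo[idx]])
    out = net(W, X)
    W -= lr * X.T @ ((out - Y) * (1.0 - out ** 2)) / len(X)
```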

Catastrophic Forgetting and the Pseudorehearsal Solution in Hopfield-type Networks

This paper extends the exploration of pseudorehearsal to a Hopfield-type net, and shows that the extra attractors created in state space during learning can in fact be useful in preserving the learned population.
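For a Hopfield-type net, the idea can be sketched by letting the network settle from random states into its attractors and re-presenting those attractor states alongside new patterns. The network size, the synchronous update rule, and the one-shot Hebbian learning step below are simplified, illustrative choices, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 32

def learn(W, patterns):
    """One-shot Hebbian storage (simplified)."""
    for p in patterns:
        W += np.outer(p, p) / N
    np.fill_diagonal(W, 0.0)
    return W

def settle(W, s, steps=20):
    """Synchronous sign updates; real nets often update asynchronously."""
    for _ in range(steps):
        s = np.where(W @ s >= 0.0, 1.0, -1.0)
    return s

old = [rng.choice([-1.0, 1.0], size=N) for _ in range(3)]
W = learn(np.zeros((N, N)), old)

# Pseudopatterns: settle from random states into whatever attractors
# learning has carved into state space.
pseudo = [settle(W, rng.choice([-1.0, 1.0], size=N)) for _ in range(6)]

# Learn a new pattern together with the self-generated attractor states.
new = rng.choice([-1.0, 1.0], size=N)
W = learn(W, [new] + pseudo)
```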

Catastrophic Interference is Eliminated in Pretrained Networks

When modeling strictly sequential experimental memory tasks, such as serial list learning, connectionist networks appear to experience excessive retroactive interference, known as catastrophic interference.

Consolidation in Neural Networks and in the Sleeping Brain

It is suggested that the catastrophic forgetting problem in artificial neural networks (ANNs) is a problem that has actually occurred in the evolution of the mammalian brain, and that the pseudorehearsal solution to the problem in ANNs is functionally equivalent to the sleep consolidation solution adopted by the brain.

An Analysis of Catastrophic Interference

The conclusion is that approximations to 'ideal' network geometries can entirely alleviate interference when the training sets are generated from a learnable function (rather than arbitrary pattern associations), but this elimination of interference comes at the cost of a breakdown in discrimination between input patterns that have been learned and those that have not: catastrophic remembering.

Pseudo-recurrent Connectionist Networks: An Approach to the 'Sensitivity-Stability' Dilemma

A 'pseudo-recurrent' memory model is presented here that partitions a connectionist network into two functionally distinct, but continually interacting areas: one area serves as a final-storage area for representations; the other is an early-processing area where new representations are processed.

Connectionist models of recognition memory: constraints imposed by learning and forgetting functions.

  • R. Ratcliff, Psychological Review, 1990
The problems discussed impose constraints on connectionist models of human memory, particularly in tasks where the information to be learned is not all available at the time of learning.

Human Category Learning: Implications for Backpropagation Models

It is demonstrated that a standard version of backprop fails to attend selectively to input dimensions in the same way as humans, suffers catastrophic forgetting of previously learned associations when novel exemplars are trained, and can be overly sensitive to linear category boundaries.
...