Catastrophic Forgetting, Rehearsal and Pseudorehearsal


This paper reviews the problem of catastrophic forgetting (the loss or disruption of previously learned information when new information is learned) in neural networks, and explores rehearsal mechanisms (the retraining of some of the previously learned information as the new information is added) as a potential solution. We replicate some of the experiments described by Ratcliff (1990), including those relating to a simple “recency” based rehearsal regime. We then develop further rehearsal regimes which are more effective than recency rehearsal. In particular “sweep rehearsal” is very successful at minimising catastrophic forgetting. One possible limitation of rehearsal in general, however, is that previously learned information may not be available for retraining. We describe a solution to this problem, “pseudorehearsal”, a method which provides the advantages of rehearsal without actually requiring any access to the previously learned information (the original training population) itself. We then suggest an interpretation of these rehearsal mechanisms in the context of a function approximation based account of neural network learning. Both rehearsal and pseudorehearsal may have practical applications, allowing new information to be integrated into an existing network with minimum disruption of old information.
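The pseudorehearsal idea described above can be illustrated with a toy sketch: random "pseudo-items" are generated by feeding random input patterns through the current network and pairing each input with the network's own output; these pseudo-items are then interleaved with the new item during training, anchoring the old mapping without access to the original training population. Everything here (the `TinyNet` class, `make_pseudoitems`, layer sizes, learning rate) is an illustrative assumption, not the paper's actual experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)


class TinyNet:
    """A minimal one-hidden-layer sigmoid network (illustrative only)."""

    def __init__(self, n_in=8, n_hidden=16, n_out=8):
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, n_out))

    @staticmethod
    def _sig(x):
        return 1.0 / (1.0 + np.exp(-x))

    def forward(self, x):
        h = self._sig(x @ self.W1)
        return self._sig(h @ self.W2), h

    def train_step(self, x, t, lr=0.5):
        # One backprop step for squared error with sigmoid units.
        y, h = self.forward(x)
        dy = (y - t) * y * (1 - y)
        dh = (dy @ self.W2.T) * h * (1 - h)
        self.W2 -= lr * np.outer(h, dy)
        self.W1 -= lr * np.outer(x, dh)


def make_pseudoitems(net, n_items, n_in):
    """Pseudorehearsal: pair random inputs with the network's OWN outputs,
    so no access to the original training population is needed."""
    pseudo = []
    for _ in range(n_items):
        x = rng.integers(0, 2, n_in).astype(float)
        y, _ = net.forward(x)
        pseudo.append((x, y.copy()))
    return pseudo


def train_with_pseudorehearsal(net, new_item, n_pseudo=8, epochs=200):
    """Interleave the new item with fixed pseudo-items while training."""
    pseudo = make_pseudoitems(net, n_pseudo, new_item[0].size)
    for _ in range(epochs):
        for x, t in [new_item] + pseudo:
            net.train_step(x, t)
```

In use, the pseudo-items act as a snapshot of the old input-output function: the network learns the new item while being pulled back toward its previous behaviour on the random inputs.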

DOI: 10.1080/09540099550039318


Cite this paper

@article{Robins1995CatastrophicFR,
  title   = {Catastrophic Forgetting, Rehearsal and Pseudorehearsal},
  author  = {Anthony V. Robins},
  journal = {Connect. Sci.},
  year    = {1995},
  volume  = {7},
  pages   = {123--146}
}