Neural networks with a self-refreshing memory: Knowledge transfer in sequential learning tasks without catastrophic forgetting

@article{Ans2000NeuralNW,
  title={Neural networks with a self-refreshing memory: Knowledge transfer in sequential learning tasks without catastrophic forgetting},
  author={Bernard Ans and St{\'e}phane Rousset},
  journal={Connection Science},
  year={2000},
  volume={12},
  pages={1--19}
}
We explore a dual-network architecture with self-refreshing memory (Ans and Rousset 1997) that overcomes catastrophic forgetting in sequential learning tasks. We show that transfer ...
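
As a rough illustration of the mechanism the abstract describes, the sketch below pairs two small one-hidden-layer networks in NumPy: NET1 learns each new task interleaved with pseudopatterns drawn from NET2, and NET2 is then refreshed from NET1. All names (init_net, pseudopatterns, learn_new_task) and all hyperparameters are illustrative assumptions, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)

def init_net(n_in, n_hid, n_out):
    # One-hidden-layer sigmoid network stored as a dict of weight matrices.
    return {"W1": rng.normal(0, 0.5, (n_in, n_hid)),
            "W2": rng.normal(0, 0.5, (n_hid, n_out))}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(net, X):
    h = sigmoid(X @ net["W1"])
    return h, sigmoid(h @ net["W2"])

def train(net, X, T, epochs=500, lr=0.5):
    # Plain backpropagation on a mean-squared-error loss.
    for _ in range(epochs):
        h, y = forward(net, X)
        d_out = (y - T) * y * (1 - y)
        d_hid = (d_out @ net["W2"].T) * h * (1 - h)
        net["W2"] -= lr * h.T @ d_out / len(X)
        net["W1"] -= lr * X.T @ d_hid / len(X)

def pseudopatterns(net, n_items, n_in):
    # Probe the net with random binary inputs; its responses to those
    # probes stand in for the mapping it has learned so far.
    X = rng.integers(0, 2, (n_items, n_in)).astype(float)
    return X, forward(net, X)[1]

n_in, n_hid, n_out = 8, 16, 8
net1 = init_net(n_in, n_hid, n_out)   # learns the tasks in sequence
net2 = init_net(n_in, n_hid, n_out)   # the self-refreshing memory

def learn_new_task(X_new, T_new, n_pseudo=32):
    # Interleave the new items with pseudopatterns drawn from NET2, so
    # NET1 rehearses its past while acquiring the new task.
    Xp, Tp = pseudopatterns(net2, n_pseudo, n_in)
    train(net1, np.vstack([X_new, Xp]), np.vstack([T_new, Tp]))
    # Then refresh NET2 from pseudopatterns drawn from NET1.
    Xq, Tq = pseudopatterns(net1, n_pseudo, n_in)
    train(net2, Xq, Tq)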

Sequential Learning in Distributed Neural Networks without Catastrophic Forgetting: A Single and Realistic Self-Refreshing Memory Can Do It

TLDR
Simulations of sequential learning tasks show that the proposed self-refreshing memory, based on a single-network architecture that can learn from its own production reflecting its history, is able to avoid catastrophic forgetting.

Self-refreshing memory in artificial neural networks: learning temporal sequences without catastrophic forgetting

TLDR
A dual-network architecture is developed in which self-generated pseudopatterns reflect (non-temporally) all the sequences of temporally ordered items previously learned.

Artificial neural networks whispering to the brain: nonlinear system attractors induce familiarity with never seen items

TLDR
Humans are sensitive to the particular type of information that allows distributed artificial neural networks to dynamically maintain their memory, and this information does not amount to the exemplars used to train the network that produced the attractors.

Theoretical Understanding of the Information Flow on Continual Learning Performance

TLDR
A probabilistic framework is established to analyze information flow through network layers across task sequences and its impact on learning performance; the aim is to preserve information between layers while learning new tasks, managing task-specific knowledge passing through the layers while maintaining performance on previous tasks.

Short- and Long-term Memory: A Complementary Dual-network Memory Model

TLDR
This thesis implements a more biologically plausible dual-network memory model and a novel memory consolidation scheme that reveals several interesting emergent qualities of pattern extraction by chaotic recall in the resulting hippocampal model.

Creating False Memories in Humans with an Artificial Neural Network: Implications for Theories of Memory Consolidation

TLDR
This work tests whether false memories of never-seen (target) items can be created in humans by exposure to pseudo-patterns generated from random input in an artificial neural network, and indicates that humans, like distributed neural networks, can make use of the information the memory self-refreshing mechanism is based upon.

Sequential learning in neural networks: A review and a discussion of pseudorehearsal based methods

  • A. Robins
  • Computer Science
    Intell. Data Anal.
  • 2004
TLDR
This review explores sequential learning in artificial neural networks, where the information to be learned and retained arrives in separate episodes over time, and examines the pseudorehearsal mechanism, an effective solution to the catastrophic forgetting problem in backpropagation-type networks.
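
The core pseudorehearsal loop reviewed here can be summarized as the hedged sketch below, where fit and predict stand for whatever training and inference routines the underlying backpropagation network exposes (these names and the parameter values are assumptions for illustration).

import numpy as np

rng = np.random.default_rng(1)

def pseudorehearsal_step(model, fit, predict, X_new, T_new,
                         n_pseudo=64, n_inputs=8):
    # 1. Probe the current network with random binary inputs.
    X_pseudo = rng.integers(0, 2, (n_pseudo, n_inputs)).astype(float)
    # 2. Record its responses: these pseudoitems approximate the mapping
    #    the network has learned in earlier episodes.
    T_pseudo = predict(model, X_pseudo)
    # 3. Learn the new episode interleaved with the pseudoitems, so the
    #    old input-output function is rehearsed without stored exemplars.
    fit(model, np.vstack([X_new, X_pseudo]), np.vstack([T_new, T_pseudo]))
    return model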

Neural Modularity Helps Organisms Evolve to Learn New Skills without Forgetting Old Skills

TLDR
It is suggested that encouraging modularity in neural networks may help to overcome the long-standing barrier of networks that cannot learn new skills without forgetting old ones, and that one benefit of the modularity ubiquitous in the brains of natural animals might be to alleviate the problem of catastrophic forgetting.
...

References

Showing 1-10 of 46 references

Catastrophic Forgetting and the Pseudorehearsal Solution in Hopfield-type Networks

TLDR
This paper extends the exploration of pseudorehearsal to a Hopfield-type net, and shows that the extra attractors created in state space during learning can in fact be useful in preserving the learned population.
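
A simplified illustration of that idea: attractors reached from random starting states are treated as pseudoitems and stored alongside the new population. The Hebbian rule and synchronous update below are standard textbook choices, not necessarily the paper's exact procedure.

import numpy as np

rng = np.random.default_rng(2)
N = 32  # number of bipolar (+1/-1) units

def hebbian(patterns):
    # Outer-product (Hebbian) weight matrix for a set of patterns.
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)
    return W / len(patterns)

def settle(W, state, steps=50):
    # Synchronous updates until the state stops changing (an attractor).
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1
        if np.array_equal(new, state):
            break
        state = new
    return state

def sample_pseudoitems(W, n_items):
    # Start from random states and let the net relax to its attractors.
    return [settle(W, rng.choice([-1, 1], size=N)) for _ in range(n_items)]

old_population = [rng.choice([-1, 1], size=N) for _ in range(3)]
W = hebbian(old_population)
pseudo = sample_pseudoitems(W, 6)        # attractors sampled as pseudoitems
new_population = [rng.choice([-1, 1], size=N) for _ in range(3)]
W = hebbian(new_population + pseudo)     # relearn new items plus pseudoitems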

Catastrophic forgetting in connectionist networks

  • R. French
  • Computer Science
    Trends in Cognitive Sciences
  • 1999

Semi-distributed Representations and Catastrophic Forgetting in Connectionist Networks

TLDR
A simple algorithm, called activation sharpening, is presented that allows a standard feed-forward backpropagation network to develop semi-distributed representations, thereby reducing the problem of catastrophic forgetting.
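
A hedged sketch of how such a sharpening step might look: the k most active hidden units are nudged toward 1 and the rest toward 0, and the input-to-hidden weights are moved so the hidden layer reproduces that semi-distributed target. The constants and the exact update rule here are illustrative, not French's published values.

import numpy as np

def sharpen(hidden, k=2, alpha=0.1):
    # Build a semi-distributed target from a sigmoid hidden-activation vector.
    target = hidden + alpha * (0.0 - hidden)        # push every unit toward 0 ...
    top = np.argsort(hidden)[-k:]                   # ... except the k most active,
    target[top] = hidden[top] + alpha * (1.0 - hidden[top])  # which move toward 1
    return target

def sharpening_update(W1, x, hidden, k=2, alpha=0.1, lr=0.5):
    # One gradient step that moves the input-to-hidden weights W1 so the
    # hidden layer reproduces the sharpened activations.
    target = sharpen(hidden, k, alpha)
    delta = (target - hidden) * hidden * (1 - hidden)   # sigmoid derivative
    return W1 + lr * np.outer(x, delta)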

Connectionist models of recognition memory: constraints imposed by learning and forgetting functions.

  • R. Ratcliff
  • Psychology, Computer Science
    Psychological review
  • 1990
TLDR
The problems discussed impose constraints on connectionist models of human memory and on their application to tasks where the information to be learned is not all available during learning.

Catastrophic Interference is Eliminated in Pretrained Networks

When modeling strictly sequential experimental memory tasks, such as serial list learning, connectionist networks appear to experience excessive retroactive interference, known as catastrophic interference ...

Consolidation in Neural Networks and in the Sleeping Brain

TLDR
It is suggested that the catastrophic forgetting problem in artificial neural networks (ANNs) also arose during the evolution of the mammalian brain, and that the pseudorehearsal solution to the problem in ANNs is functionally equivalent to the sleep consolidation solution adopted by the brain.

Pseudo-recurrent Connectionist Networks: An Approach to the 'Sensitivity-Stability' Dilemma

TLDR
A 'pseudo-recurrent' memory model is presented here that partitions a connectionist network into two functionally distinct, but continually interacting areas: one area serves as a final-storage area for representations; the other is an early-processing area where new representations are processed.

Catastrophic Forgetting, Rehearsal and Pseudorehearsal

TLDR
A solution to the problem of catastrophic forgetting in neural networks, 'pseudorehearsal', is described: a method that provides the advantages of rehearsal without requiring any access to the previously learned information (the original training population) itself.

Sparse Distributed Memory

TLDR
Pentti Kanerva's Sparse Distributed Memory presents a mathematically elegant theory of human long-term memory whose structure resembles the cortex of the cerebellum, and provides an overall perspective on neural systems.
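
For context, a minimal sketch of a sparse distributed memory in the spirit Kanerva describes: random hard locations, and read/write operations that touch every location whose address lies within a Hamming radius of the cue. Dimensions, radius, and the usage example are chosen only for illustration.

import numpy as np

rng = np.random.default_rng(3)

class SDM:
    def __init__(self, n_locations=1000, n_bits=256, radius=115):
        self.addresses = rng.integers(0, 2, (n_locations, n_bits))  # hard locations
        self.counters = np.zeros((n_locations, n_bits), dtype=int)
        self.radius = radius

    def _activated(self, address):
        # Boolean mask of locations within the Hamming radius of the address.
        return np.sum(self.addresses != address, axis=1) <= self.radius

    def write(self, address, data):
        # Add the data bits (recoded as +1/-1) to every activated location.
        self.counters[self._activated(address)] += 2 * data - 1

    def read(self, address):
        # Sum counters over activated locations and threshold at zero.
        total = self.counters[self._activated(address)].sum(axis=0)
        return (total > 0).astype(int)

# Usage: store a pattern at its own address, then recall it from a noisy cue.
mem = SDM()
pattern = rng.integers(0, 2, 256)
mem.write(pattern, pattern)
cue = pattern.copy()
cue[:20] ^= 1             # corrupt 20 bits of the cue
recalled = mem.read(cue)  # should closely match the stored pattern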