Corpus ID: 246015795

Logarithmic Continual Learning

@article{Masarczyk2022LogarithmicCL,
  title={Logarithmic Continual Learning},
  author={Wojciech Masarczyk and Paweł Wawrzyński and Daniel Marczak and Kamil Deja and Tomasz Trzciński},
  journal={ArXiv},
  year={2022},
  volume={abs/2201.06534}
}
We introduce a neural network architecture that logarithmically reduces the number of self-rehearsal steps in the generative rehearsal of continually learned models. In continual learning (CL), training samples arrive in a sequence of tasks, and the trained model can access only a single task at a time. To replay previous samples, contemporary CL methods bootstrap generative models and train them recursively on a combination of current and regenerated past data. This recurrence leads to… 
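The abstract summarizes the standard generative-rehearsal loop that the proposed architecture is designed to shorten. The sketch below is illustrative only, not the paper's method: it shows that linear-rehearsal baseline under simple assumptions (a toy VAE generator, a small classifier solver, PyTorch-style training), and the names Solver, Generator, train_task and continual_learning are hypothetical.

# Illustrative sketch of the standard (linear) generative-rehearsal baseline
# described in the abstract, not the paper's logarithmic architecture.
# All class and function names here are hypothetical.
import copy
import torch
from torch import nn


class Solver(nn.Module):
    """Toy classifier playing the role of the continually learned model."""
    def __init__(self, dim=32, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_classes))

    def forward(self, x):
        return self.net(x)


class Generator(nn.Module):
    """Minimal VAE standing in for the rehearsal generator."""
    def __init__(self, dim=32, latent=8):
        super().__init__()
        self.latent = latent
        self.enc = nn.Sequential(nn.Linear(dim, 64), nn.ReLU())
        self.mu = nn.Linear(64, latent)
        self.logvar = nn.Linear(64, latent)
        self.dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(),
                                 nn.Linear(64, dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

    def sample(self, n):
        # Regenerate past-like samples by decoding latent noise.
        return self.dec(torch.randn(n, self.latent))


def vae_loss(x, recon, mu, logvar):
    rec = nn.functional.mse_loss(recon, x)
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld


def train_task(solver, generator, prev_solver, prev_generator, loader, epochs=1):
    """One task: train on current data mixed with data regenerated by the
    frozen previous generator and pseudo-labelled by the frozen previous solver."""
    params = list(solver.parameters()) + list(generator.parameters())
    opt = torch.optim.Adam(params, lr=1e-3)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            if prev_generator is not None:
                with torch.no_grad():
                    x_old = prev_generator.sample(len(x))     # regenerated past data
                    y_old = prev_solver(x_old).argmax(dim=1)  # pseudo-labels
                x, y = torch.cat([x, x_old]), torch.cat([y, y_old])
            recon, mu, logvar = generator(x)
            loss = ce(solver(x), y) + vae_loss(x, recon, mu, logvar)
            opt.zero_grad()
            loss.backward()
            opt.step()


def continual_learning(task_loaders, dim=32, n_classes=10):
    """Each new task replays samples drawn from frozen copies of the models
    trained on all previous tasks."""
    solver, generator = Solver(dim, n_classes), Generator(dim)
    prev_solver = prev_generator = None
    for loader in task_loaders:
        train_task(solver, generator, prev_solver, prev_generator, loader)
        prev_solver = copy.deepcopy(solver).eval()
        prev_generator = copy.deepcopy(generator).eval()
    return solver

Because every task retrains the generator on samples regenerated from a frozen copy of the previous models, this is the recurrence the abstract refers to; the proposed architecture logarithmically reduces the number of such self-rehearsal steps.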
