Measuring Forgetting of Memorized Training Examples

@article{Jagielski2022MeasuringFO,
  title={Measuring Forgetting of Memorized Training Examples},
  author={Matthew Jagielski and Om Thakkar and Florian Tram{\`e}r and Daphne Ippolito and Katherine Lee and Nicholas Carlini and Eric Wallace and Shuang Song and Abhradeep Thakurta and Nicolas Papernot and Chiyuan Zhang},
  journal={ArXiv},
  year={2022},
  volume={abs/2207.00099}
}
Machine learning models exhibit two seemingly contradictory phenomena: training data memorization and various forms of forgetting. In memorization, models overfit specific training examples and become susceptible to privacy attacks. In forgetting, examples which appeared early in training are forgotten by the end. In this work, we connect these phenomena. We propose a technique to measure to what extent models “forget” the specifics of training examples, becoming less susceptible to privacy attacks on examples they have not seen recently.
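
The measurement behind this connection is a privacy attack: inject canaries at known points in training, then test how well a membership-inference attack still distinguishes them later. As a rough illustration only (not the paper's exact protocol, and with entirely synthetic losses), the sketch below scores a simple loss-thresholding attack:

import numpy as np

def mia_advantage(member_losses, nonmember_losses):
    # Loss-thresholding membership inference: sweep every observed loss as a
    # threshold and report the best TPR - FPR gap. A large advantage means
    # the attack can still single out members, i.e. they remain memorized.
    thresholds = np.concatenate([member_losses, nonmember_losses])
    tpr = (member_losses[None, :] <= thresholds[:, None]).mean(axis=1)
    fpr = (nonmember_losses[None, :] <= thresholds[:, None]).mean(axis=1)
    return float((tpr - fpr).max())

# Synthetic losses: canaries injected early in training vs. injected late,
# each compared against held-out non-member examples.
rng = np.random.default_rng(0)
nonmembers = rng.normal(2.0, 0.5, 1000)
early_canaries = rng.normal(1.8, 0.5, 100)  # partially forgotten
late_canaries = rng.normal(1.0, 0.5, 100)   # freshly memorized
print(mia_advantage(early_canaries, nonmembers))  # small gap
print(mia_advantage(late_canaries, nonmembers))   # large gap

Under this framing, “forgetting” shows up as the attack advantage on long-ago canaries decaying toward zero.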

Citations

A Survey of Machine Unlearning

TLDR
This survey paper provides a thorough investigation of machine unlearning, covering its definitions, scenarios, mechanisms, and applications, as a categorical collection of state-of-the-art research for ML researchers and for those seeking to innovate privacy technologies.

References

Showing 1-10 of 85 references

Quantifying Memorization Across Neural Language Models

TLDR
It is found that memorization in LMs is more prevalent than previously believed and will likely get worse as models continue to scale, at least without active mitigations.

Memorization Without Overfitting: Analyzing the Training Dynamics of Large Language Models

TLDR
It is shown that larger language models memorize a larger portion of the data before overfitting, memorize training data faster across all settings, and tend to forget less throughout the training process.

Understanding Catastrophic Forgetting and Remembering in Continual Learning with Optimal Relevance Mapping

TLDR
This work shows that RMNs learn an optimized representational overlap that overcomes the twin problem of catastrophic forgetting and remembering, and that RMNs achieve state-of-the-art performance across many common continual learning benchmarks.

The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks

TLDR
This paper describes a testing methodology for quantitatively assessing the risk that rare or unique training-data sequences are unintentionally memorized by generative sequence models, a common type of machine-learning model, and describes new, efficient procedures that can extract unique, secret sequences such as credit card numbers.
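
The methodology's core quantity is the exposure metric: the inserted canary's perplexity is ranked against many random candidate sequences, and exposure is log2 of the candidate-space size minus log2 of that rank. A minimal sketch with made-up scores (the function and values are illustrative, not the paper's code):

import math

def exposure(canary_nll, candidate_nlls):
    # Rank the canary's negative log-likelihood among random candidates;
    # rank 1 (most likely of all) gives the maximum exposure of
    # log2(candidate-space size), a strong sign of memorization.
    rank = 1 + sum(nll < canary_nll for nll in candidate_nlls)
    return math.log2(len(candidate_nlls) + 1) - math.log2(rank)

candidates = [5.0 + 0.01 * i for i in range(9999)]  # made-up scores
print(exposure(2.0, candidates))    # canary ranks first: ~13.3 bits
print(exposure(999.0, candidates))  # canary ranks last: 0 bits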

Measuring Catastrophic Forgetting in Neural Networks

TLDR
New metrics and benchmarks are introduced for directly comparing five mechanisms designed to mitigate catastrophic forgetting in neural networks: regularization, ensembling, rehearsal, dual-memory, and sparse coding.
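
As a generic illustration of this kind of benchmark (not necessarily the paper's exact metrics), forgetting measurements reduce to tracking old-task accuracy before and after new-task training:

def retention(acc_old_task_before, acc_old_task_after):
    # Fraction of original old-task accuracy kept after training on a new
    # task; 1.0 means no forgetting, 0.0 means complete forgetting.
    return acc_old_task_after / acc_old_task_before

# Made-up numbers comparing two mitigation mechanisms:
print(retention(0.95, 0.40))  # plain fine-tuning: ~0.42, severe forgetting
print(retention(0.95, 0.88))  # e.g. rehearsal: ~0.93, mild forgetting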

Catastrophic forgetting in connectionist networks

R. French, Trends in Cognitive Sciences, 1999

Mixed-Privacy Forgetting in Deep Networks

We show that the influence of a subset of the training samples can be removed – or "forgotten" – from the weights of a network trained on large-scale image classification tasks, and we provide strong computable bounds on the amount of remaining information after forgetting.

Detecting Unintended Memorization in Language-Model-Fused ASR

TLDR
This work designs a framework for detecting memorization of random textual sequences (which the authors call canaries) in the LM training data when one has only black-box (query) access to an LM-fused speech recognizer, as opposed to direct access to the LM.

Overcoming catastrophic forgetting in neural networks

TLDR
It is shown that it is possible to overcome this limitation of connectionist models and train networks that can maintain expertise on tasks they have not experienced for a long time, by selectively slowing down learning on the weights important for previous tasks.
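
The mechanism described (elastic weight consolidation) penalizes movement of each weight in proportion to its estimated importance, e.g. diagonal Fisher information, for earlier tasks. A minimal PyTorch-style sketch, assuming fisher_diag and old_params were computed after the previous task:

import torch

def ewc_penalty(model, fisher_diag, old_params, lam=1000.0):
    # Quadratic pull of each weight toward its previous-task value, scaled
    # by that weight's Fisher importance: learning slows selectively on the
    # weights that mattered most before.
    total = torch.zeros(())
    for name, p in model.named_parameters():
        total = total + (fisher_diag[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * total

# A new task's training step would then minimize:
#   loss = new_task_loss + ewc_penalty(model, fisher_diag, old_params)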

Towards Making Systems Forget with Machine Unlearning

TLDR
This paper presents a general, efficient unlearning approach that transforms the learning algorithms used by a system into a summation form; the approach applies to all stages of machine learning, including feature selection and modeling.
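
The trick is that when training state is a set of summations over per-example contributions, an example can be unlearned exactly by subtracting its terms, with no retraining on the remaining data. A toy sketch of that idea using naive-Bayes-style counts (class and feature names are hypothetical):

from collections import Counter

class SummationLearner:
    # Toy naive-Bayes-style learner whose entire state is a set of sums
    # (counts). Because each example only adds terms to those sums, it can
    # be forgotten exactly by subtracting the same terms.
    def __init__(self):
        self.class_counts = Counter()
        self.feature_counts = Counter()  # (label, feature) -> count

    def learn(self, features, label):
        self.class_counts[label] += 1
        for f in features:
            self.feature_counts[(label, f)] += 1

    def unlearn(self, features, label):
        self.class_counts[label] -= 1
        for f in features:
            self.feature_counts[(label, f)] -= 1

nb = SummationLearner()
nb.learn({"free", "winner"}, "spam")
nb.learn({"meeting"}, "ham")
nb.unlearn({"free", "winner"}, "spam")  # forget one example, no retraining
print(nb.class_counts)  # Counter({'ham': 1, 'spam': 0})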
...