Corpus ID: 208909851

Machine Unlearning

@article{Bourtoule2019MachineU,
  title={Machine Unlearning},
  author={Lucas Bourtoule and Varun Chandrasekaran and Christopher A. Choquette-Choo and Hengrui Jia and Adelin Travers and Baiwu Zhang and David Lie and Nicolas Papernot},
  journal={ArXiv},
  year={2019},
  volume={abs/1912.03817}
}
  • Published 2019 · Computer Science · ArXiv
  Once users have shared their data online, it is generally difficult for them to revoke access and ask for the data to be deleted. Machine learning (ML) exacerbates this problem because any model trained with said data may have memorized it, putting users at risk of a successful privacy attack exposing their information. Yet, having models unlearn is notoriously difficult. After a data point is removed from a training set, one often resorts to entirely retraining downstream models from scratch…
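  The baseline the abstract alludes to can be made concrete with a small sketch. The snippet below is not from the paper; it uses a hypothetical toy dataset and scikit-learn's LogisticRegression purely to illustrate exact unlearning by dropping the requested point and refitting from scratch.

    # Naive exact unlearning: retrain from scratch on the dataset minus the
    # points to be forgotten. The cost scales with the full training set,
    # which is the overhead this paper sets out to reduce.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 20))        # hypothetical toy features
    y = (X[:, 0] > 0).astype(int)          # hypothetical toy labels

    original_model = LogisticRegression(max_iter=1000).fit(X, y)

    def unlearn_by_retraining(X, y, forget_idx):
        # Drop the requested rows, then fit a fresh model that has provably
        # never seen them (unlike any in-place update to original_model).
        keep = np.setdiff1d(np.arange(len(X)), forget_idx)
        return LogisticRegression(max_iter=1000).fit(X[keep], y[keep])

    # A user asks for training point 42 to be deleted:
    model_after_deletion = unlearn_by_retraining(X, y, forget_idx=[42])

  Retraining gives an exact guarantee, since the new model never saw the deleted point, but it repeats the full training cost on every deletion request, which is why the abstract describes unlearning as notoriously difficult.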

    Citations

    3 publications cite this paper.

    References

    This paper references 52 publications; a subset is listed below.

    • Reading Digits in Natural Images with Unsupervised Feature Learning (highly influential)
    • Five Years of the Right to be Forgotten (highly influential)
    • Towards Making Systems Forget with Machine Unlearning (highly influential)
    • A Brief Introduction to Boosting
    • A Theory of the Learnable
    • Active Learning Literature Survey
    • Active Learning by Querying Informative and Representative Examples