Corpus ID: 219966510

Rethinking Privacy Preserving Deep Learning: How to Evaluate and Thwart Privacy Attacks

@article{Fan2020RethinkingPP,
  title={Rethinking Privacy Preserving Deep Learning: How to Evaluate and Thwart Privacy Attacks},
  author={Lixin Fan and Kam Ng and Ce Ju and Tianyu Zhang and Chunhui Liu and Chee Seng Chan and Qiang Yang},
  journal={ArXiv},
  year={2020},
  volume={abs/2006.11601}
}
  This paper investigates the capabilities of Privacy-Preserving Deep Learning (PPDL) mechanisms against various forms of privacy attacks. First, we propose to quantitatively measure the trade-off between model accuracy and the privacy losses incurred by reconstruction, tracing and membership attacks. Second, we formulate reconstruction attacks as solving a noisy system of linear equations, and prove that such attacks are guaranteed to be defeated if condition (2) is not fulfilled. Third, based on theoretical…
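The abstract's framing of a reconstruction attack as solving a noisy system of linear equations can be illustrated with a minimal sketch. This is a hypothetical toy setup (the matrix `A`, dimensions, and Gaussian noise model are illustrative assumptions, not the paper's exact formulation): the attacker observes noisy linear measurements of a private record and attempts a least-squares recovery, which degrades as defensive noise grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: attacker observes y = A x + e, where x is the private
# record, A is a known measurement matrix (e.g., derived from shared
# gradients), and e is perturbation noise added by the defence.
d = 8
A = rng.normal(size=(32, d))    # over-determined system (32 equations, 8 unknowns)
x_true = rng.normal(size=d)     # private record the attacker tries to recover

for sigma in (0.0, 1.0):
    e = rng.normal(scale=sigma, size=32)
    y = A @ x_true + e
    # Reconstruction attack: least-squares solve of the noisy system.
    x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
    err = np.linalg.norm(x_hat - x_true)
    print(f"noise sigma={sigma}: reconstruction error = {err:.4f}")
```

With no noise the system is solved essentially exactly, so the record is fully reconstructed; with defensive noise the recovery error grows, which is the intuition behind defeating the attack when the paper's condition on the system is not met.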

    References

    Publications referenced by this paper (showing 1–10 of 21).

    Exploiting Unintended Feature Leakage in Collaborative Learning


    Privacy-preserving deep learning

    • Reza Shokri, Vitaly Shmatikov
    • 2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton), 2015

    A Framework for Evaluating Gradient Leakage Attacks in Federated Learning


    Deep Learning with Differential Privacy


    ABY3: A Mixed Protocol Framework for Machine Learning
