Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures

@inproceedings{Fredrikson2015ModelIA,
  title={Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures},
  author={Matt Fredrikson and Somesh Jha and Thomas Ristenpart},
  booktitle={CCS '15},
  year={2015}
}
Machine-learning (ML) algorithms are increasingly utilized in privacy-sensitive applications such as predicting lifestyle choices, making medical diagnoses, and facial recognition. In a model inversion attack, recently introduced in a case study of linear classifiers in personalized medicine by Fredrikson et al., adversarial access to an ML model is abused to learn sensitive genomic information about individuals. Whether model inversion attacks apply to settings outside theirs, however, is unknown.
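The attack class named in the title works by climbing the confidence score a model reports for a target class until the input itself reveals something about the training data. Below is a minimal, hypothetical sketch of that idea in Python against a toy softmax classifier. It is not the paper's exact MI-FACE algorithm (which additionally applies image processing such as denoising between steps); the model weights W and b, and the helpers confidence and invert, are illustrative assumptions, not part of the original work.

import numpy as np

# Toy softmax classifier standing in for the target model f.
# For brevity this sketch uses white-box (analytic) gradients;
# the same loop can be driven by numerically estimated gradients
# when only prediction-API access is available.
rng = np.random.default_rng(0)
n_features, n_classes = 64, 4
W = rng.normal(size=(n_features, n_classes))
b = rng.normal(size=n_classes)

def softmax(z):
    z = z - z.max()           # numerical stability
    e = np.exp(z)
    return e / e.sum()

def confidence(x, label):
    """Confidence value f_label(x) the model returns for `label`."""
    return softmax(x @ W + b)[label]

def invert(label, steps=500, lr=0.1):
    """Gradient ascent on the target class's confidence: start from a
    blank input and climb toward the input the model finds most
    `label`-like. This mirrors the confidence-exploiting inversion
    idea, not the authors' full procedure."""
    x = np.zeros(n_features)
    for _ in range(steps):
        p = softmax(x @ W + b)
        # Analytic gradient of log p[label] for a softmax model:
        # d/dx log p[label] = W[:, label] - W @ p
        grad = W[:, label] - W @ p
        x += lr * grad
        x = np.clip(x, 0.0, 1.0)   # keep x in a valid feature range,
                                   # analogous to valid pixel values
    return x, confidence(x, label)

x_rec, conf = invert(label=2)
print(f"recovered input reaches confidence {conf:.3f} for class 2")

The paper's basic countermeasure follows directly from this sketch: rounding or coarsening the reported confidence values degrades the gradient signal the loop depends on, while leaving the predicted label unchanged.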
Citations

    • Membership Inference Attacks Against Machine Learning Models (722 citations)
    • Stealing Machine Learning Models via Prediction APIs (563 citations)
    • SoK: Security and Privacy in Machine Learning (253 citations)
    • Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting (133 citations)
    • Exploiting Unintended Feature Leakage in Collaborative Learning (171 citations)
    • Deep Learning with Differential Privacy (1,180 citations)
    • Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition (598 citations)
    • Towards the Science of Security and Privacy in Machine Learning (255 citations)
