
Label-Only Membership Inference Attacks

@inproceedings{ChoquetteChoo2021LabelOnlyMI,
  title={Label-Only Membership Inference Attacks},
  author={Christopher A. Choquette-Choo and Florian Tram{\`e}r and Nicholas Carlini and Nicolas Papernot},
  booktitle={ICML},
  year={2021}
}
Membership inference attacks are one of the simplest forms of privacy leakage for machine learning models: given a data point and a model, determine whether the point was used to train the model. Existing membership inference attacks exploit a model's abnormally high confidence when queried on its training data. These attacks do not apply if the adversary only has access to the model's predicted labels, without a confidence measure. In this paper, we introduce label-only membership inference attacks…
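
To make the attack setting concrete, below is a minimal sketch of one label-only strategy suggested by the abstract: measure how robust the model's predicted label is to random input perturbations, on the intuition that training points tend to sit farther from the decision boundary. Everything here is an illustrative assumption, not the paper's exact method or setup: the synthetic data, the random-forest target, the helper name label_only_score, the noise scale sigma, and the fixed threshold.

```python
# Sketch of a label-only membership inference attack via prediction
# robustness under Gaussian noise. Illustrative only; the paper's
# strongest attacks estimate distance to the decision boundary.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Train a target model; the attacker sees only its predicted labels.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_out, y_train, y_out = train_test_split(X, y, test_size=0.5, random_state=0)
target = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

def label_only_score(model, x, y_true, n_queries=50, sigma=0.5):
    """Fraction of noisy copies of x that keep the label y_true.
    Members are typically classified more robustly than non-members."""
    noise = rng.normal(scale=sigma, size=(n_queries, x.shape[0]))
    preds = model.predict(x + noise)  # label-only queries
    return np.mean(preds == y_true)

# Higher robustness score => predict "member".
member_scores = np.array([label_only_score(target, x, t)
                          for x, t in zip(X_train[:100], y_train[:100])])
nonmember_scores = np.array([label_only_score(target, x, t)
                             for x, t in zip(X_out[:100], y_out[:100])])

threshold = 0.9  # a real attack would calibrate this on shadow models
print(f"TPR: {np.mean(member_scores >= threshold):.2f}, "
      f"FPR: {np.mean(nonmember_scores >= threshold):.2f}")
```

In practice the threshold (and the perturbation magnitude) would be calibrated on shadow models trained on data from the same distribution, rather than fixed by hand.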
Citations

Label-Leaks: Membership Inference Attack with Label
Dataset Inference: Ownership Resolution in Machine Learning
Privacy Analysis in Language Models via Training Data Leakage Report
Class Clown: Data Redaction in Machine Unlearning at Enterprise Scale
Don't Trigger Me! A Triggerless Backdoor Attack Against Deep Neural Networks
Accuracy-Privacy Trade-off in Deep Ensembles
Property Inference From Poisoning
A Review of Confidentiality Threats Against Embedded Neural Network Models
...
