Corpus ID: 219559068

On the Effectiveness of Regularization Against Membership Inference Attacks

@article{Kaya2020OnTE,
  title={On the Effectiveness of Regularization Against Membership Inference Attacks},
  author={Yigitcan Kaya and Sanghyun Hong and Tudor Dumitras},
  journal={arXiv preprint arXiv:2006.05336},
  year={2020}
}
Deep learning models often raise privacy concerns as they leak information about their training data. This enables an adversary to determine whether a data point was in a model's training set by conducting a membership inference attack (MIA). Prior work has conjectured that regularization techniques, which combat overfitting, may also mitigate the leakage. While many regularization mechanisms exist, their effectiveness against MIAs has not been studied systematically, and the resulting privacy…
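The attack described in the abstract can be illustrated with a minimal sketch of a confidence-thresholding MIA. This is not the paper's own attack implementation; it assumes the common observation that an overfit model assigns higher top-class confidence to training members than to unseen points, and uses synthetic confidence scores in place of a real model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical confidence scores: an overfit model is assumed to be
# more confident on its training members than on held-out points.
member_conf = rng.uniform(0.85, 1.00, size=200)     # training-set points
nonmember_conf = rng.uniform(0.40, 0.95, size=200)  # held-out points

def mia_predict(confidences, threshold=0.9):
    """Confidence-thresholding MIA: predict 'member' whenever the
    model's top-class confidence meets or exceeds the threshold."""
    return confidences >= threshold

# Balanced attack accuracy: mean of the true-positive rate (members
# flagged as members) and true-negative rate (non-members rejected).
tpr = mia_predict(member_conf).mean()
tnr = (~mia_predict(nonmember_conf)).mean()
attack_acc = (tpr + tnr) / 2
print(f"MIA balanced accuracy: {attack_acc:.2f}")  # > 0.5 signals leakage
```

A balanced accuracy above 0.5 means the adversary distinguishes members from non-members better than chance, which is the leakage that regularization is conjectured to reduce.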
Citations

Using Rényi-divergence and Arimoto-Rényi Information to Quantify Membership Information Leakage
  • F. Farokhi
  • Computer Science
  • 2021 55th Annual Conference on Information Sciences and Systems (CISS)
  • 2021
Membership Inference Attacks on Machine Learning: A Survey
Membership Inference Attacks and Defenses in Classification Models

References

SHOWING 1-10 OF 35 REFERENCES
Membership Inference Attack against Differentially Private Deep Learning Model
The Unintended Consequences of Overfitting: Training Data Inference Attacks
Evaluating Differentially Private Machine Learning in Practice
Membership Inference Attacks Against Machine Learning Models
Towards Deep Learning Models Resistant to Adversarial Attacks
The Space of Transferable Adversarial Examples
Deep Learning with Differential Privacy
...