Corpus ID: 219559068

On the Effectiveness of Regularization Against Membership Inference Attacks

@article{Kaya2020OnTE,
  title={On the Effectiveness of Regularization Against Membership Inference Attacks},
  author={Yigitcan Kaya and Sanghyun Hong and T. Dumitras},
  journal={ArXiv},
  year={2020},
  volume={abs/2006.05336}
}
Deep learning models often raise privacy concerns as they leak information about their training data. This enables an adversary to determine whether a data point was in a model's training set by conducting a membership inference attack (MIA). Prior work has conjectured that regularization techniques, which combat overfitting, may also mitigate the leakage. While many regularization mechanisms exist, their effectiveness against MIAs has not been studied systematically, and the resulting privacy…
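As a concrete illustration of the threat model described in the abstract, the sketch below implements a simple confidence-thresholding membership inference attack in Python on synthetic confidence scores. The threshold, the synthetic Beta-distributed confidences, and the advantage metric are illustrative assumptions for exposition, not the attack or evaluation used in the paper.

import numpy as np

def confidence_threshold_mia(confidences: np.ndarray, threshold: float) -> np.ndarray:
    # Predict "member" (1) when the model's confidence on its predicted class
    # exceeds the threshold, "non-member" (0) otherwise.
    return (confidences >= threshold).astype(int)

def attack_advantage(member_conf, nonmember_conf, threshold: float) -> float:
    # Membership advantage = true positive rate - false positive rate.
    tpr = confidence_threshold_mia(member_conf, threshold).mean()
    fpr = confidence_threshold_mia(nonmember_conf, threshold).mean()
    return tpr - fpr

# Synthetic example: overfit models tend to be more confident on training
# members than on unseen points, which is exactly what the attacker exploits.
rng = np.random.default_rng(0)
member_conf = rng.beta(8, 2, size=1000)      # skewed toward 1.0
nonmember_conf = rng.beta(4, 3, size=1000)   # lower on average
print(attack_advantage(member_conf, nonmember_conf, threshold=0.8))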
Citations

Using Rényi-divergence and Arimoto-Rényi Information to Quantify Membership Information Leakage
  • F. Farokhi
  • Computer Science
  • 2021 55th Annual Conference on Information Sciences and Systems (CISS)
  • 2021
TLDR
An upper bound for α-divergence information leakage is established as a function of the privacy budget for differentially-private machine learning models.
Membership Inference Attacks on Machine Learning: A Survey
TLDR
This paper presents the first comprehensive survey of membership inference attacks, summarizes and categorizes existing membership inference attacks and defenses, and explicitly presents how to implement attacks in various settings.
Membership Inference Attacks and Defenses in Classification Models
TLDR
This work proposes a defense against MI attacks that aims to close the gap by intentionally reducing the training accuracy, by means of a new set regularizer using the Maximum Mean Discrepancy between the softmax output empirical distributions of the training and validation sets.
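For readers unfamiliar with the defense summarized above, here is a hedged PyTorch sketch of the core idea: penalize the Maximum Mean Discrepancy (MMD) between the softmax outputs on a training batch and on a validation batch so the two output distributions become harder to tell apart. The Gaussian kernel, its bandwidth sigma, and the weight lam are assumptions for illustration, not the paper's exact formulation.

import torch

def gaussian_kernel(x, y, sigma: float = 1.0):
    # Pairwise RBF kernel matrix between the rows of x and y.
    dists = torch.cdist(x, y) ** 2
    return torch.exp(-dists / (2 * sigma ** 2))

def mmd2(x, y, sigma: float = 1.0):
    # Biased estimator of the squared MMD between two empirical distributions.
    return (gaussian_kernel(x, x, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean()
            - 2 * gaussian_kernel(x, y, sigma).mean())

def regularized_loss(model, criterion, x_train, y_train, x_val, lam: float = 1.0):
    # Standard task loss plus an MMD penalty that pulls the softmax output
    # distributions on members and non-members toward each other.
    logits_train = model(x_train)
    task_loss = criterion(logits_train, y_train)
    p_train = torch.softmax(logits_train, dim=1)
    p_val = torch.softmax(model(x_val), dim=1)
    return task_loss + lam * mmd2(p_train, p_val)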

References

Showing 1-10 of 35 references
Membership Inference Attack against Differentially Private Deep Learning Model
TLDR
The experimental results show that differentially private deep models may keep their promise to provide privacy protection against strong adversaries only by offering poor model utility, while exhibiting moderate vulnerability to the membership inference attack when they offer an acceptable utility.
The Unintended Consequences of Overfitting: Training Data Inference Attacks
TLDR
This paper examines the effect that overfitting and influence have on the ability of an attacker to learn information about training data from machine learning models, either through training set membership inference or model inversion attacks.
Evaluating Differentially Private Machine Learning in Practice
TLDR
There is a huge gap between the upper bounds on privacy loss that can be guaranteed, even with advanced mechanisms, and the effective privacy loss which can be measured using current inference attacks.
ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models
TLDR
This work presents the most comprehensive study so far on this emerging and developing threat, using eight diverse datasets to show the viability of the proposed attacks across domains, and proposes the first effective defense mechanisms against such a broader class of membership inference attacks that maintain a high level of utility of the ML model.
Membership Inference Attacks Against Machine Learning Models
TLDR
This work quantitatively investigates how machine learning models leak information about the individual data records on which they were trained and empirically evaluates the inference techniques on classification models trained by commercial "machine learning as a service" providers such as Google and Amazon.
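The shadow-model pipeline from this reference can be sketched in a few lines. The code below assumes hypothetical scikit-learn-style shadow models exposing predict_proba, and it trains a single logistic-regression attack model rather than one per class, a simplification of the original setup.

import numpy as np
from sklearn.linear_model import LogisticRegression

def build_attack_dataset(shadow_models, shadow_splits):
    # Label each shadow model's posteriors as member (1) on its own training
    # split and non-member (0) on its held-out split.
    feats, labels = [], []
    for model, (x_in, x_out) in zip(shadow_models, shadow_splits):
        feats.append(model.predict_proba(x_in))
        labels.append(np.ones(len(x_in)))
        feats.append(model.predict_proba(x_out))
        labels.append(np.zeros(len(x_out)))
    return np.vstack(feats), np.concatenate(labels)

def fit_attack_model(shadow_models, shadow_splits):
    # The attack model learns to separate member from non-member posteriors.
    X, y = build_attack_dataset(shadow_models, shadow_splits)
    return LogisticRegression(max_iter=1000).fit(X, y)

# At attack time, query the target model and classify its posterior vector:
# attack = fit_attack_model(shadow_models, shadow_splits)
# membership_guess = attack.predict(target_model.predict_proba(x_query))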
Towards Deep Learning Models Resistant to Adversarial Attacks
TLDR
This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
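The first-order adversary referred to here is typically instantiated as projected gradient descent (PGD). The following PyTorch sketch uses illustrative values for the L-infinity radius, step size, and iteration count; it is a generic rendering of the technique, not the exact training recipe.

import torch

def pgd_attack(model, loss_fn, x, y, eps=8/255, alpha=2/255, steps=10):
    # Iterated gradient-sign steps, projected back into the L-infinity ball
    # of radius eps around the clean input x.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)  # project onto the ball
        x_adv = x_adv.clamp(0, 1)                              # keep a valid pixel range
    return x_adv.detach()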
Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures
TLDR
A new class of model inversion attack is developed that exploits confidence values revealed along with predictions and is able to estimate whether a respondent in a lifestyle survey admitted to cheating on their significant other and to recover recognizable images of people's faces given only their name.
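In spirit, the confidence-exploiting inversion performs gradient ascent on the input to maximize the target class's confidence. The sketch below is a simplified PyTorch rendering of that idea; the step count, learning rate, zero initialization, and absence of any image prior or denoising step are assumptions, not the paper's procedure.

import torch

def invert_class(model, target_class: int, input_shape, steps: int = 500, lr: float = 0.1):
    # Start from a blank input and ascend the confidence surface of the
    # target class, keeping the reconstruction in a valid pixel range.
    x = torch.zeros(1, *input_shape, requires_grad=True)
    opt = torch.optim.SGD([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        confidence = torch.softmax(model(x), dim=1)[0, target_class]
        (-confidence).backward()   # minimize the negative confidence
        opt.step()
        with torch.no_grad():
            x.clamp_(0, 1)
    return x.detach()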
The Space of Transferable Adversarial Examples
TLDR
It is found that adversarial examples span a contiguous subspace of large (~25) dimensionality, which indicates that it may be possible to design defenses against transfer-based attacks, even for models that are vulnerable to direct attacks.
Deep Learning with Differential Privacy
TLDR
This work develops new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy, and demonstrates that deep neural networks can be trained with non-convex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality.
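The core training step of this reference, DP-SGD, clips each per-example gradient and adds Gaussian noise before averaging. The sketch below loops over single examples to obtain per-example gradients, an illustrative (and slow) simplification; clip_norm and noise_mult are assumed hyperparameters, and the privacy accounting is omitted entirely.

import torch

def dp_sgd_step(model, loss_fn, xs, ys, optimizer, clip_norm=1.0, noise_mult=1.1):
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for x, y in zip(xs, ys):
        # Per-example gradient, clipped to an L2 norm of at most clip_norm.
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (norm + 1e-12), max=1.0)
        for s, g in zip(summed, grads):
            s.add_(g * scale)
    optimizer.zero_grad()
    for p, s in zip(params, summed):
        # Add Gaussian noise scaled to the clipping norm, then average.
        noise = torch.randn_like(s) * noise_mult * clip_norm
        p.grad = (s + noise) / len(xs)
    optimizer.step()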
Adversarial Neural Network Inversion via Auxiliary Knowledge Alignment
TLDR
This work investigates the model inversion problem in adversarial settings, where the adversary aims at inferring information about the target model's training data and test data from the model's prediction values, and develops a solution to train a second neural network that acts as the inverse of the target model to perform the inversion.