Disparate Vulnerability to Membership Inference Attacks

@article{Kulynych2019DisparateVT,
  title={Disparate Vulnerability to Membership Inference Attacks},
  author={Bogdan Kulynych and Mohammad Yaghini and Giovanni Cherubin and Michael Veale and Carmela Troncoso},
  journal={Proceedings on Privacy Enhancing Technologies},
  year={2022},
  volume={2022},
  pages={460--480}
}
Abstract

A membership inference attack (MIA) against a machine-learning model enables an attacker to determine whether a given data record was part of the model’s training data or not. In this paper, we provide an in-depth study of the phenomenon of disparate vulnerability against MIAs: unequal success rate of MIAs against different population subgroups. We first establish necessary and sufficient conditions for MIAs to be prevented, both on average and for population subgroups, using a notion…
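
Disparate vulnerability is the gap in MIA success across subgroups. As a minimal sketch of how one might measure it, the snippet below computes the standard membership advantage (true-positive rate minus false-positive rate) per subgroup; the function and argument names are illustrative, not taken from the paper.

```python
import numpy as np

def subgroup_mia_advantage(attack_guess, is_member, group):
    """Membership-inference advantage (TPR - FPR) per subgroup.

    attack_guess : 0/1 array of the attacker's membership guesses
    is_member    : 0/1 array of true membership labels
    group        : array of subgroup identifiers (e.g. a protected attribute)

    Assumes every subgroup contains both members and non-members.
    """
    attack_guess = np.asarray(attack_guess)
    is_member = np.asarray(is_member)
    group = np.asarray(group)

    advantage = {}
    for g in np.unique(group):
        in_group = group == g
        tpr = attack_guess[in_group & (is_member == 1)].mean()
        fpr = attack_guess[in_group & (is_member == 0)].mean()
        advantage[g] = tpr - fpr
    return advantage

# Disparate vulnerability can then be summarized as the spread of the
# per-subgroup advantage, e.g.:
# adv = subgroup_mia_advantage(guesses, membership, sensitive_attr)
# disparity = max(adv.values()) - min(adv.values())
```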

Model Inversion Attack with Least Information and an In-depth Analysis of its Disparate Vulnerability

  • Computer Science
  • 2022
A novel black-box MIAI attack is designed that assumes the least adversary knowledge/capabilities to date while still performing similarly to the state-of-the-art attacks, and empirically identifies possible disparity factors and discusses potential ways to mitigate disparity in MIAI attacks.

SoK: Let The Privacy Games Begin! A Unified Treatment of Data Inference Privacy in Machine Learning

A game-based framework is presented to systematize the body of knowledge on privacy inference risks in machine learning, provide a unifying structure for definitions of inference risks, and uncover hitherto unknown relations that would have been hard to spot otherwise.

Formalizing and Estimating Distribution Inference Risks

This work proposes a formal definition of distribution inference attacks general enough to describe a broad class of attacks distinguishing between possible training distributions, and introduces a metric that quantifies observed leakage by relating it to the leakage that would occur if samples from the training distribution were provided directly to the adversary.

Bayesian Estimation of Differential Privacy

A novel Bayesian method is proposed that greatly reduces the required sample size, cutting the number of trained models needed to obtain enough samples by up to two orders of magnitude, and a heuristic is adapted and validated to draw more than one sample per trained model.
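
For intuition on the underlying idea (estimating a lower bound on ε from the false-positive and false-negative rates of a distinguishing attack, with uncertainty handled via posteriors rather than point estimates), here is a rough sketch based on the standard hypothesis-testing characterization of ε-DP; the Beta-posterior treatment is a simplification of mine, not the paper's estimator.

```python
import numpy as np

def bayesian_epsilon_lower_bound(fp, tn, fn, tp, n_samples=100_000, quantile=0.05):
    """Posterior lower bound on epsilon from an attack's confusion counts.

    Uses the hypothesis-testing view of epsilon-DP, under which any
    distinguishing attack must satisfy
        FPR + exp(eps) * FNR >= 1   and   FNR + exp(eps) * FPR >= 1,
    so eps >= max(log((1 - FPR) / FNR), log((1 - FNR) / FPR)).
    FPR and FNR get independent uniform Beta(1, 1) priors (a simplification).
    """
    rng = np.random.default_rng(0)
    fpr = rng.beta(fp + 1, tn + 1, n_samples)  # posterior over false-positive rate
    fnr = rng.beta(fn + 1, tp + 1, n_samples)  # posterior over false-negative rate
    eps = np.maximum(np.log((1 - fpr) / fnr), np.log((1 - fnr) / fpr))
    return np.quantile(eps, quantile)          # conservative posterior quantile

# Example: bayesian_epsilon_lower_bound(fp=12, tn=988, fn=40, tp=960)
```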

Membership Inference Attacks Against Semantic Segmentation Models

This work quantitatively evaluates the attacks on a number of popular model architectures across a variety of semantic segmentation tasks, demonstrating that membership inference attacks in this domain can achieve a high success rate, and that defending against them may result in unfavourable privacy-utility trade-offs or increased computational costs.

Per-Instance Privacy Accounting for Differentially Private Stochastic Gradient Descent

This work uses an efficient algorithm to compute per-instance privacy guarantees for individual examples when running DP-SGD, and discovers that most examples enjoy stronger privacy guarantees than the worst-case bounds.

What You See is What You Get: Distributional Generalization for Algorithm Design in Deep Learning

It is proved that differentially private methods satisfy a “What You See Is What You Get (WYSIWYG)” generalization guarantee: whatever a model does on its train data is almost exactly what it will do at test time.

Fairness Properties of Face Recognition and Obfuscation Systems

This paper characterizes demographic disparities of face obfuscation systems and their underlying metric embedding networks, and shows how the clustering behavior of these embeddings leads to reduced face obfuscation utility for faces in minority groups.

What You See is What You Get: Principled Deep Learning via Distributional Generalization

This work constructs simple algorithms that are competitive with SOTA in several distributional-robustness applications, improves the privacy vs. disparate impact trade-off of DP-SGD, and mitigates robust overfitting in adversarial training.

Fair NLP Models with Differentially Private Text Encoders

This work proposes FEDERATE, an approach that combines ideas from differential privacy and adversarial learning to learn private text representations that induce fairer models, and empirically evaluates the trade-off between privacy, fairness, and accuracy of the downstream model on two challenging NLP tasks.

References

Showing 1-10 of 45 references

Differentially Private Learning Does Not Bound Membership Inference

This work challenges prior findings that suggest Differential Privacy provides a strong defense against Membership Inference Attacks and provides theoretical and experimental evidence for cases where the theoretical bounds of DP are violated by MIAs using the same attacks described in prior work.

A Pragmatic Approach to Membership Inferences on Machine Learning Models

This work revisits membership inference attacks from the perspective of a pragmatic adversary who carefully selects targets and makes predictions conservatively, and designs a new evaluation methodology that allows evaluating membership privacy risk at the level of individuals and not only in aggregate.

Systematic Evaluation of Privacy Risks of Machine Learning Models

This paper proposes to benchmark membership inference privacy risks by improving existing non-neural network based inference attacks and proposing a new inference attack method based on a modification of prediction entropy, and introduces a new approach for fine-grained privacy analysis by formulating and deriving a new metric called the privacy risk score.
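
For reference, one form the modified prediction entropy can take is sketched below (reconstructed from memory of that paper, so the exact expression should be checked against it); members tend to receive lower values than non-members, so thresholding it per class yields a membership test.

```python
import numpy as np

def modified_entropy(probs, label, eps=1e-12):
    """Modified prediction entropy of one prediction vector.

    probs : softmax output of the target model for one example
    label : true class index of that example

    Unlike plain entropy, the value decreases monotonically as the
    probability of the true class grows, which makes it a sharper
    membership signal when thresholded per class.
    """
    probs = np.clip(np.asarray(probs, dtype=float), eps, 1 - eps)
    p_true = probs[label]
    others = np.delete(probs, label)
    return -(1 - p_true) * np.log(p_true) - np.sum(others * np.log(1 - others))

# Membership test (sketch): guess "member" if modified_entropy(model(x), y)
# falls below a per-class threshold calibrated on shadow or reference data.
```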

Membership Inference Attacks Against Machine Learning Models

This work quantitatively investigates how machine learning models leak information about the individual data records on which they were trained and empirically evaluates the inference techniques on classification models trained by commercial "machine learning as a service" providers such as Google and Amazon.
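
The attack summarized here is the classic shadow-model construction. Below is a heavily condensed sketch of the idea (shadow models mimic the target so that an attack classifier can be trained on labelled member/non-member prediction vectors); the per-class attack models and data-synthesis steps of the original are omitted, and all names are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def build_attack_model(shadow_models, shadow_splits):
    """Train an attack classifier from shadow models and their data splits.

    shadow_models : list of fitted models exposing predict_proba
    shadow_splits : list of (X_in, X_out) pairs, where X_in was used to train
                    the corresponding shadow model and X_out was held out
    """
    features, labels = [], []
    for model, (X_in, X_out) in zip(shadow_models, shadow_splits):
        features.append(model.predict_proba(X_in))   # known members
        labels.append(np.ones(len(X_in)))
        features.append(model.predict_proba(X_out))  # known non-members
        labels.append(np.zeros(len(X_out)))
    attack = RandomForestClassifier(n_estimators=100, random_state=0)
    attack.fit(np.vstack(features), np.concatenate(labels))
    return attack

# Against the target model (sketch):
# guesses = attack.predict(target_model.predict_proba(X_query))
```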

Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting

This work examines the effect that overfitting and influence have on the ability of an attacker to learn information about the training data from machine learning models, either through training-set membership inference or attribute inference attacks.

Membership Inference Attacks and Defenses in Classification Models

This work proposes a defense against MI attacks that aims to close the generalization gap by intentionally reducing the training accuracy, by means of a new set regularizer that uses the Maximum Mean Discrepancy between the softmax output empirical distributions of the training and validation sets.
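
As an illustration of the kind of regularizer described here, the sketch below computes a (biased) RBF-kernel MMD between two batches of softmax outputs and adds it to the training loss; the kernel choice, bandwidth, and weighting are my assumptions, not the paper's exact settings.

```python
import torch

def mmd_rbf(p, q, sigma=1.0):
    """Biased estimate of the squared MMD with an RBF kernel.

    p, q : (batch, num_classes) tensors of softmax outputs, e.g. from a
           training batch and a validation batch.
    """
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)  # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))
    return kernel(p, p).mean() + kernel(q, q).mean() - 2 * kernel(p, q).mean()

# Training objective (sketch), with lambda_mmd a tunable weight:
# loss = torch.nn.functional.cross_entropy(logits_train, y_train) \
#        + lambda_mmd * mmd_rbf(logits_train.softmax(dim=1),
#                               logits_val.softmax(dim=1))
```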

On the Privacy Risks of Algorithmic Fairness

  • Hongyan Chang, R. Shokri
  • Computer Science
    2021 IEEE European Symposium on Security and Privacy (EuroS&P)
  • 2021
It is shown that fairness comes at the cost of privacy, and that this cost is not distributed equally: the information leakage of fair models increases significantly on the unprivileged subgroups, which are exactly the ones for whom fair learning is needed.

Stolen Memories: Leveraging Model Memorization for Calibrated White-Box Membership Inference

This work shows how a model's idiosyncratic use of features can provide evidence of membership to white-box attackers, even when the model's black-box behavior appears to generalize well, and demonstrates that this attack outperforms prior black-box methods.

Modelling and Quantifying Membership Information Leakage in Machine Learning

This work proves a direct relationship between the Kullback--Leibler membership information leakage and the probability of success for a hypothesis-testing adversary examining whether a particular data record belongs to the training dataset of a machine learning model.
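
The flavour of such a relationship can be illustrated with a standard Pinsker-style argument (a generic bound, not necessarily the paper's exact theorem): if P and Q are the distributions of the adversary's observation when the record is, respectively is not, in the training set, and membership has prior probability 1/2, then the success probability of any membership hypothesis test is controlled by the KL divergence between them.

```latex
\Pr[\text{correct guess}]
  \;\le\; \tfrac{1}{2} + \tfrac{1}{2}\,\mathrm{TV}(P, Q)
  \;\le\; \tfrac{1}{2} + \tfrac{1}{2}\sqrt{\tfrac{1}{2}\,\mathrm{KL}(P \,\|\, Q)}
```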

Differential Privacy Has Disparate Impact on Model Accuracy

It is demonstrated that in neural networks trained with differentially private stochastic gradient descent (DP-SGD), accuracy drops much more for underrepresented classes and subgroups, resulting in a disparate reduction of model accuracy.