Correlation inference attacks against machine learning models
@inproceedings{Cretu2021CorrelationIA, title={Correlation inference attacks against machine learning models}, author={Ana-Maria Creţu and Florent Guépin and Yves-Alexandre de Montjoye}, year={2021} }
Machine learning models are often trained on sensitive and proprietary datasets. Yet what a model leaks about its dataset, and under which conditions, is not well understood. Most previous works study the leakage of information about an individual record. Yet in many situations, global dataset information, such as its underlying distribution, e.g. k-way marginals or correlations, is similarly sensitive or secret. We here explore for the first time whether a model leaks information about the…
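To make the threat concrete, the following is an illustrative sketch of how a black-box correlation inference attack could proceed via shadow modeling. It is built on simplifying assumptions (synthetic Gaussian data, logistic regression targets, a fixed probe set), not the paper's exact pipeline: the adversary trains shadow models on datasets with known correlations, queries each on fixed probe points, and fits a meta-classifier that maps the query responses to the sign of the hidden correlation.

```python
# Illustrative correlation inference sketch via shadow modeling.
# NOT the paper's exact pipeline; data, models, and probe set are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_dataset(rho, n=2000):
    """Sample two features with correlation rho; the label depends on both."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    X = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    y = (X[:, 0] + X[:, 1] + rng.normal(0, 0.5, n) > 0).astype(int)
    return X, y

def blackbox_features(model, probe):
    """Query the model on fixed probe points; the returned probabilities
    form the attack's feature vector."""
    return model.predict_proba(probe)[:, 1]

probe = rng.normal(0, 1, size=(50, 2))  # fixed query points reused for every model

# Step 1: train shadow models on datasets with a known correlation sign.
feats, labels = [], []
for _ in range(200):
    rho = rng.uniform(-0.8, 0.8)
    X, y = make_dataset(rho)
    shadow = LogisticRegression().fit(X, y)
    feats.append(blackbox_features(shadow, probe))
    labels.append(int(rho > 0))  # the secret the attacker wants to infer

# Step 2: fit a meta-classifier mapping black-box behaviour to correlation sign.
meta = LogisticRegression(max_iter=1000).fit(np.array(feats), labels)

# Step 3: attack a target model trained on data with an unknown correlation.
X_t, y_t = make_dataset(rho=0.6)
target = LogisticRegression().fit(X_t, y_t)
guess = meta.predict([blackbox_features(target, probe)])[0]
print("inferred correlation sign:", "positive" if guess == 1 else "negative")
```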
References
Membership Inference Attacks Against Machine Learning Models
- Computer Science · 2017 IEEE Symposium on Security and Privacy (SP)
- 2017
This work quantitatively investigates how machine learning models leak information about the individual data records on which they were trained and empirically evaluates the inference techniques on classification models trained by commercial "machine learning as a service" providers such as Google and Amazon.
ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models
- Computer Science · NDSS
- 2019
This work presents the most comprehensive study so far of this emerging and developing threat, using eight diverse datasets that show the viability of the proposed attacks across domains, and proposes the first effective defense mechanisms against this broader class of membership inference attacks while maintaining a high level of utility of the ML model.
Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting
- Computer Science · 2018 IEEE 31st Computer Security Foundations Symposium (CSF)
- 2018
This work examines the effect that overfitting and influence have on an attacker's ability to learn information about the training data from machine learning models, through either training-set membership inference or attribute inference attacks.
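The overfitting connection is easy to demonstrate with a minimal confidence-threshold membership inference sketch (a simplified illustration in the spirit of this line of work, not the paper's exact procedure): an overfitted model assigns systematically higher confidence to its training points, so thresholding the confidence on the true label separates members from non-members.

```python
# Minimal confidence-threshold membership inference sketch (illustrative).
# Overfitting widens the train/test confidence gap, which is exactly the
# signal this attack thresholds on.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Deliberately overfit: fully grown trees memorize the training set.
model = RandomForestClassifier(n_estimators=50, max_depth=None, random_state=0)
model.fit(X_tr, y_tr)

def true_label_confidence(model, X, y):
    """Model's predicted probability on each point's true label."""
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

conf_members = true_label_confidence(model, X_tr, y_tr)
conf_nonmembers = true_label_confidence(model, X_te, y_te)

# Attack: guess "member" when confidence exceeds a threshold.
threshold = 0.9
tpr = (conf_members > threshold).mean()     # members correctly flagged
fpr = (conf_nonmembers > threshold).mean()  # non-members wrongly flagged
print(f"attack TPR={tpr:.2f}, FPR={fpr:.2f} (gap exists because the model overfits)")
```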
Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning
- Computer ScienceUSENIX Security Symposium
- 2020
This paper investigates whether the change in the output of a black-box ML model before and after an update can leak information about the dataset used to perform the update, namely the updating set, and proposes four attacks following an encoder-decoder formulation that allows inferring diverse information about the updating set.
Are Your Sensitive Attributes Private? Novel Model Inversion Attribute Inference Attacks on Classification Models
- Computer Science · USENIX Security Symposium
- 2022
This paper devises a novel confidence score-based model inversion attribute inference attack that outperforms the state of the art, and introduces a label-only model inversion attack that relies only on the model's predicted labels but still matches the confidence score-based attack in terms of attack effectiveness.
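In outline, a confidence score-based attribute inference of this kind enumerates candidate values of the sensitive attribute and keeps the one under which the target model is most confident in the victim's known label. The sketch below is a simplification of this family of attacks; the function and variable names are hypothetical.

```python
# Sketch of a confidence score-based model inversion attribute inference attack
# (a simplification of this family of attacks; details here are assumptions).
import numpy as np

def infer_sensitive_attribute(model, x_known, sensitive_idx, candidates, y_true):
    """Try each candidate value for the sensitive attribute and keep the one
    under which the target model assigns the highest confidence to the
    victim's known true label."""
    best_value, best_conf = None, -1.0
    for v in candidates:
        x = np.array(x_known, dtype=float)
        x[sensitive_idx] = v  # plug in the candidate value
        conf = model.predict_proba([x])[0][y_true]
        if conf > best_conf:
            best_value, best_conf = v, conf
    return best_value

# Usage with any sklearn-style classifier `target_model` (hypothetical):
# guess = infer_sensitive_attribute(target_model, victim_record,
#                                   sensitive_idx=3, candidates=[0, 1],
#                                   y_true=victim_label)
```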
Property Inference Attacks Against GANs
- Computer Science · NDSS
- 2022
This paper proposes the first set of training dataset property inference attacks against GANs, together with a general attack pipeline that can be tailored to two attack scenarios, the full black-box setting and the partial black-box setting, and a novel optimization framework to increase attack efficacy.
Reconstructing Training Data with Informed Adversaries
- Computer Science · 2022 IEEE Symposium on Security and Privacy (SP)
- 2022
This work provides an effective reconstruction attack that model developers can use to assess memorization of individual points in general settings beyond those considered in previous works; it shows that standard models have the capacity to store enough information to enable high-fidelity reconstruction of training data points; and it demonstrates that differential privacy can successfully mitigate such attacks in a parameter regime where utility degradation is minimal.
Leakage of Dataset Properties in Multi-Party Machine Learning
- Computer Science · USENIX Security Symposium
- 2021
This work shows that secure multi-party machine learning can cause leakage of global dataset properties between the parties even when parties obtain only black-box access to the final model, and considers several models of correlation between a sensitive attribute and the rest of the data.
Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures
- Computer Science · CCS
- 2015
This work develops a new class of model inversion attack that exploits confidence values revealed along with predictions, and shows it is able to estimate whether a respondent in a lifestyle survey admitted to cheating on their significant other and to recover recognizable images of people's faces given only their name.
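The face-recovery variant can be sketched as gradient ascent on the input: starting from a blank image, repeatedly adjust the pixels to maximize the confidence the model assigns to the target person's class. The PyTorch sketch below is illustrative only; the original attack targeted shallow face-recognition models and differs in its details.

```python
# Minimal gradient-ascent model inversion sketch in PyTorch (illustrative;
# the original attack on shallow face-recognition models differs in details).
import torch

def invert_class(model, target_class, shape=(1, 1, 32, 32), steps=500, lr=0.1):
    """Reconstruct an input that maximizes the model's confidence in target_class."""
    x = torch.zeros(shape, requires_grad=True)  # start from a blank image
    opt = torch.optim.SGD([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = model(x)
        # Maximize the target class's log-confidence, i.e. minimize its negative.
        loss = -torch.log_softmax(logits, dim=1)[0, target_class]
        loss.backward()
        opt.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)  # keep pixels in a valid range
    return x.detach()

# Usage with any differentiable classifier `target_model` (hypothetical):
# reconstruction = invert_class(target_model, target_class=7)
```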
Label-Only Membership Inference Attacks
- Computer Science, Mathematics · ICML
- 2021
Membership inference attacks are one of the simplest forms of privacy leakage for machine learning models: given a data point and a model, determine whether the point was used to train the model…
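One common label-only strategy exploits robustness to perturbation: training points tend to keep their correct label under small input noise more often than unseen points, so the fraction of correctly labeled noisy copies can serve as a membership score. The sketch below illustrates this signal; the noise scale and decision threshold are assumptions that would need calibration on shadow data.

```python
# Sketch of a label-only membership inference signal via perturbation
# robustness (one strategy in this line of work; parameters are assumptions).
import numpy as np

def robustness_score(model, x, y_true, n_perturb=50, sigma=0.1, rng=None):
    """Fraction of Gaussian-perturbed copies of x that the model still labels
    y_true. Members tend to sit further from the decision boundary, so they
    typically score higher than non-members."""
    rng = rng or np.random.default_rng(0)
    noisy = x + rng.normal(0.0, sigma, size=(n_perturb, len(x)))
    preds = model.predict(noisy)  # labels only, no confidence scores needed
    return (preds == y_true).mean()

# Attack rule (threshold calibrated on shadow data; 0.8 is a placeholder):
# guess_member = robustness_score(target_model, x, y) > 0.8
```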