Practical Blind Membership Inference Attack via Differential Comparisons

@article{Hui2021PracticalBM,
  title={Practical Blind Membership Inference Attack via Differential Comparisons},
  author={Bo Hui and Yuchen Yang and Haolin Yuan and Philippe Burlina and Neil Zhenqiang Gong and Yinzhi Cao},
  journal={ArXiv},
  year={2021},
  volume={abs/2101.01341}
}
Membership inference (MI) attacks affect user privacy by inferring whether given data samples have been used to train a target learning model, e.g., a deep neural network. There are two types of MI attacks in the literature, i.e., those with and without shadow models. The success of the former heavily depends on the quality of the shadow model, i.e., the transferability between the shadow and the target; the latter, given only black-box probing access to the target model, cannot make an…
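
To make the shadow-free end of this spectrum concrete, the sketch below shows a classic confidence-thresholding baseline, not the differential-comparison attack proposed in this paper: a queried sample is flagged as a member when the target's top softmax score exceeds a threshold. The function name and the threshold value are illustrative.

```python
import numpy as np

def confidence_threshold_attack(confidences: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """confidences: (n_samples, n_classes) softmax outputs obtained by querying the target."""
    top_conf = confidences.max(axis=1)
    return (top_conf >= threshold).astype(int)  # 1 = predicted member, 0 = predicted non-member

probs = np.array([[0.98, 0.01, 0.01],   # very confident prediction -> flagged as member
                  [0.40, 0.35, 0.25]])  # uncertain prediction -> flagged as non-member
print(confidence_threshold_attack(probs))  # [1 0]
```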

Purifier: Defending Data Inference Attacks via Transforming Confidence Scores

The experimental results show that PURIFIER helps defend against membership inference attacks with high effectiveness and efficiency, outperforming previous defense methods while incurring negligible utility loss.
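
For contrast, the sketch below shows the simplest kind of confidence-transformation defense (coarsening released scores); it assumes nothing about PURIFIER's actual purification network and only illustrates why reshaping confidence vectors can blunt score-based attacks.

```python
import numpy as np

def coarsen_confidences(probs: np.ndarray, decimals: int = 1) -> np.ndarray:
    """Round each released softmax vector and renormalize; the predicted label is
    preserved as long as rounding does not create a tie at the top."""
    rounded = np.clip(np.round(probs, decimals), 1e-6, None)
    return rounded / rounded.sum(axis=1, keepdims=True)

print(coarsen_confidences(np.array([[0.9731, 0.0148, 0.0121]])))  # ~[[1., 0., 0.]]
```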

Membership Leakage in Label-Only Exposures

This paper proposes decision-based membership inference attacks and develops two types, a transfer attack and a boundary attack, which achieve remarkable performance and outperform previous score-based attacks in some cases.
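
The sketch below illustrates one label-only membership signal in the same spirit (a noise-robustness probe, not the paper's exact transfer or boundary algorithms): estimate how much input noise a sample tolerates before the target's predicted label flips; training members typically sit farther from the decision boundary. `predict_label` is an assumed black-box query function.

```python
import numpy as np

def noise_robustness_score(predict_label, x, sigmas=np.linspace(0.1, 2.0, 20),
                           trials=20, rng=None):
    """predict_label: black-box function mapping an input array to a hard label.
    Returns the average fraction of noisy copies whose label matches the clean one."""
    rng = rng or np.random.default_rng(0)
    base = predict_label(x)
    keep = []
    for s in sigmas:
        noisy = x + rng.normal(scale=s, size=(trials,) + x.shape)
        keep.append(np.mean([predict_label(n) == base for n in noisy]))
    return float(np.mean(keep))  # larger score -> sample is more likely a member
```

A threshold on this score, calibrated on samples known to be non-members, turns it into a membership decision.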

l-Leaks: Membership Inference Attacks with Logits

This paper presents the attack l-Leaks, which follows the intuition that if an established shadow model is sufficiently similar to the target model, the adversary can leverage the shadow model's information to predict a target sample's membership.
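
The sketch below shows the generic shadow-model recipe such attacks build on, not l-Leaks itself; sklearn models and sorted probability vectors stand in for the logits, and all function and variable names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

def build_attack_model(shadow_in_X, shadow_in_y, shadow_out_X, seed=0):
    """shadow_in_*: data the shadow model is trained on (members);
    shadow_out_X: held-out data it never sees (non-members)."""
    shadow = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=seed)
    shadow.fit(shadow_in_X, shadow_in_y)
    # Sorted prediction vectors serve as class-agnostic stand-ins for the logits.
    feats = np.vstack([np.sort(shadow.predict_proba(shadow_in_X), axis=1),
                       np.sort(shadow.predict_proba(shadow_out_X), axis=1)])
    labels = np.r_[np.ones(len(shadow_in_X)), np.zeros(len(shadow_out_X))]
    return LogisticRegression(max_iter=1000).fit(feats, labels)
```

The fitted attack classifier is then applied to the (sorted) output vectors obtained by querying the target model.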

Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture

This work proposes a new framework to train privacy-preserving models that induce similar behavior on member and non-member inputs to mitigate membership inference attacks, and shows that SELENA presents a superior trade-off between membership privacy and utility compared to state-of-the-art empirical privacy defenses.
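
The toy helper below captures only the "answer with sub-models that never trained on the sample" intuition behind such ensemble defenses; SELENA additionally distills the ensemble into a single released model, which this sketch omits. `sub_models` and `train_splits` are assumed inputs, and `x` is assumed to be a flat feature vector.

```python
import numpy as np

def split_ai_predict(sample_idx, sub_models, train_splits, x):
    """sub_models[i] was fit only on the indices in train_splits[i]; answer a query
    about sample_idx using just the sub-models that never saw it, so members and
    non-members receive the same kind of held-out prediction."""
    probs = [m.predict_proba(x.reshape(1, -1))[0]
             for m, split in zip(sub_models, train_splits)
             if sample_idx not in split]
    return np.mean(probs, axis=0)
```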

Semi-Leak: Membership Inference Attacks Against Semi-supervised Learning

This work proposes the first data augmentation-based membership inference attacks against ML models trained by semi-supervised learning (SSL) and observes that the SSL model generalizes well to the testing data but “memorizes” the training data by giving more confident predictions regardless of their correctness.
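
The sketch below illustrates an augmentation-based membership signal in the same spirit (not Semi-Leak's exact feature construction): query the target with several augmented views of a sample and measure how confident the predictions stay. `predict_proba` and `augment` are assumed callables, and the Gaussian jitter is only a toy stand-in for the SSL pipeline's own augmentations.

```python
import numpy as np

def augmentation_confidence(predict_proba, x, augment, n_views=8, rng=None):
    """predict_proba: black-box query returning a softmax vector; augment: function
    producing a randomly augmented copy of x. Returns the mean top confidence
    across augmented views (higher -> more likely a training member)."""
    rng = rng or np.random.default_rng(0)
    views = [augment(x, rng) for _ in range(n_views)]
    probs = np.stack([predict_proba(v) for v in views])
    return float(probs.max(axis=1).mean())

def gaussian_jitter(x, rng, sigma=0.05):
    """Toy augmentation for array-valued inputs (real attacks would reuse the
    training pipeline's crops, flips, or color jitter)."""
    return x + rng.normal(scale=sigma, size=x.shape)
```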

Knowledge Cross-Distillation for Membership Privacy

This work proposes a novel defense against MIAs that uses knowledge distillation without requiring public data, and on the image dataset CIFAR10 it achieves a much better privacy-utility trade-off than existing defenses that also do not use public data.
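
For context, the sketch below shows the generic knowledge-distillation loss that such defenses build on, not the paper's cross-distillation protocol; the temperature value is illustrative.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Cross-entropy between temperature-softened teacher and student distributions;
    the student learns smoothed targets rather than memorizing one-hot labels."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    return float(-np.mean(np.sum(p_t * np.log(p_s + 1e-12), axis=1)))
```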

Privacy-preserving Generative Framework Against Membership Inference Attacks

This paper designs a privacy-preserving generative framework against membership inference attacks, using the information extraction and data generation capabilities of a variational autoencoder (VAE) to generate synthetic data that satisfies differential privacy.
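
As background for the differential-privacy guarantee such frameworks target, the sketch below shows the standard Gaussian mechanism in isolation; it is not the paper's VAE-based pipeline, and the parameter values are illustrative.

```python
import numpy as np

def gaussian_mechanism(value, sensitivity, epsilon, delta, rng=None):
    """Classic (epsilon, delta)-DP Gaussian mechanism (valid for epsilon < 1):
    add noise scaled to the query's L2 sensitivity before releasing `value`."""
    rng = rng or np.random.default_rng(0)
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return np.asarray(value) + rng.normal(scale=sigma, size=np.shape(value))
```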

Parameters or Privacy: A Provable Tradeoff Between Overparameterization and Membership Inference

This paper theoretically proves, for an overparameterized linear regression model in the Gaussian data setting, that membership inference vulnerability increases with the number of parameters, and shows that more complex, nonlinear models exhibit the same behavior.
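
The toy probe below is a numerical companion to that setting, not the paper's proof: it fits the minimum-norm least-squares solution on Gaussian data and reports the member vs. non-member squared-error gap, which is the signal a loss-thresholding MI attack exploits. The sample size, dimension, and noise level are illustrative.

```python
import numpy as np

def member_nonmember_loss_gap(n=50, d=200, noise=0.1, rng=None):
    """Fit the minimum-norm least-squares solution with d parameters on n Gaussian
    samples and compare squared errors on members vs. fresh non-members."""
    rng = rng or np.random.default_rng(0)
    w = rng.normal(size=d) / np.sqrt(d)
    X_tr, X_te = rng.normal(size=(n, d)), rng.normal(size=(n, d))
    y_tr = X_tr @ w + noise * rng.normal(size=n)
    y_te = X_te @ w + noise * rng.normal(size=n)
    w_hat = np.linalg.pinv(X_tr) @ y_tr                    # minimum-norm interpolator
    return (float(np.mean((X_tr @ w_hat - y_tr) ** 2)),    # member loss (~0 when d > n)
            float(np.mean((X_te @ w_hat - y_te) ** 2)))    # non-member loss

print(member_nonmember_loss_gap(d=200))
```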

RelaxLoss: Defending Membership Inference Attacks without Losing Utility

This work proposes a novel training framework based on a relaxed loss (RelaxLoss) with a more achievable learning target, which leads to a narrowed generalization gap and reduced privacy leakage, and is applicable to any classification model with the added benefits of easy implementation and negligible overhead.
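
The sketch below is a deliberately simplified rendering of the relaxed-target idea, not the full RelaxLoss algorithm (which additionally flattens posteriors for correctly classified samples): descend while the batch loss is above a target level and ascend once it drops below, so training loss hovers near the target instead of zero. `loss_fn`, `grad_fn`, and `alpha` are assumed inputs.

```python
def relaxed_loss_step(params, loss_fn, grad_fn, batch, alpha=0.5, lr=0.1):
    """loss_fn(params, batch) -> scalar loss; grad_fn(params, batch) -> per-parameter
    gradients. Gradient descent above the target alpha, gradient ascent below it."""
    direction = 1.0 if loss_fn(params, batch) > alpha else -1.0
    return [p - lr * direction * g for p, g in zip(params, grad_fn(params, batch))]
```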

Membership Inference Attacks and Generalization: A Causal Perspective

This work proposes the first approach to explain MI attacks and their connection to generalization based on principled causal reasoning, and offers causal graphs that quantitatively explain the observed performance of 6 attack variants.