Membership Inference Attacks Against Recommender Systems

@inproceedings{Zhang2021MembershipIA,
  title={Membership Inference Attacks Against Recommender Systems},
  author={Minxing Zhang and Zhaochun Ren and Zihan Wang and Pengjie Ren and Zhumin Chen and Pengfei Hu and Yang Zhang},
  booktitle={Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security},
  year={2021}
}
  • Published 16 September 2021
  • Computer Science
Recently, recommender systems have achieved promising performance and become some of the most widely used web applications. However, recommender systems are often trained on highly sensitive user data, so potential data leakage from recommender systems may lead to severe privacy problems. In this paper, we make the first attempt at quantifying the privacy leakage of recommender systems through the lens of membership inference. In contrast with traditional membership inference against machine…
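To make the threat concrete, here is a minimal sketch of one plausible attack pipeline, assuming the adversary holds item embeddings and can observe a user's interactions and recommendations. Every helper name and the toy data generator below are illustrative assumptions, not the paper's exact method.

```python
# Toy membership inference against a recommender: featurize each user by the
# difference between the centroids of recommended vs. interacted item
# embeddings, then fit a binary member/non-member classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_items, dim = 1000, 32
item_emb = rng.normal(size=(n_items, dim))  # assumed pretrained item embeddings

def user_feature(interacted, recommended):
    # Difference between the centroids of recommended and interacted items.
    return item_emb[recommended].mean(axis=0) - item_emb[interacted].mean(axis=0)

def toy_user(member):
    interacted = rng.choice(n_items, size=20, replace=False)
    # Members get recommendations correlated with their interactions;
    # non-members get unrelated ones (toy stand-in for a shadow recommender).
    recommended = (rng.choice(interacted, size=10, replace=False) if member
                   else rng.choice(n_items, size=10, replace=False))
    return user_feature(interacted, recommended)

X = np.stack([toy_user(m) for m in [True] * 200 + [False] * 200])
y = np.array([1] * 200 + [0] * 200)
attack = LogisticRegression(max_iter=1000).fit(X, y)
print("attack training accuracy:", attack.score(X, y))
```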
ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models
TLDR
The extensive experimental evaluation conducted over five model architectures and four datasets shows that the complexity of the training dataset plays an important role with respect to the attack's performance, while the effectiveness of model stealing attacks and that of membership inference attacks are negatively correlated.
Property Inference Attacks Against GANs
  • Junhao Zhou, Yufei Chen, Chao Shen, Yang Zhang
  • Computer Science, Mathematics
  • ArXiv
  • 2021
TLDR
This paper proposes the first set of training dataset property inference attacks against GANs, together with a general attack pipeline that can be tailored to two attack scenarios: the full black-box setting and the partial black-box setting.

References

Showing 1-10 of 63 references
Membership Inference Attacks Against Machine Learning Models
TLDR
This work quantitatively investigates how machine learning models leak information about the individual data records on which they were trained and empirically evaluates the inference techniques on classification models trained by commercial "machine learning as a service" providers such as Google and Amazon.
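As a rough illustration of the shadow-model technique this reference introduces, here is a toy sketch; the data generator, the model choices, and the use of a single shared attack model (the paper trains one per output class) are all simplifying assumptions.

```python
# Toy sketch of the shadow-model attack: train shadow models on data the
# adversary controls, label their confidence vectors as member/non-member,
# and fit an attack classifier on those vectors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def make_data(n):
    # Toy two-class data standing in for the target model's distribution.
    X = rng.normal(size=(n, 10))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    return X, y

attack_X, attack_y = [], []
for _ in range(5):  # several shadow models
    X_in, y_in = make_data(200)  # shadow training split -> "member"
    X_out, _ = make_data(200)    # held-out split -> "non-member"
    shadow = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X_in, y_in)
    attack_X += [shadow.predict_proba(X_in), shadow.predict_proba(X_out)]
    attack_y += [np.ones(len(X_in)), np.zeros(len(X_out))]

# The attack model maps a (black-box) confidence vector to a membership guess.
attack = RandomForestClassifier().fit(np.vstack(attack_X), np.concatenate(attack_y))
```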
ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models
TLDR
This work presents the most comprehensive study so far of this emerging threat, using eight diverse datasets that show the viability of the proposed attacks across domains, and proposes the first effective defense mechanisms against this broader class of membership inference attacks that maintain a high level of utility of the ML model.
Machine Learning with Membership Privacy using Adversarial Regularization
TLDR
It is shown that the min-max strategy can mitigate the risk of membership inference attacks (reducing them to near-random guessing) with a negligible drop in the model's prediction accuracy (less than 4%).
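The min-max strategy can be written compactly; the notation below (task loss L_task, inference gain G_inf, trade-off weight lambda) is a paraphrase, not the paper's exact formulation.

```latex
% Adversarial regularization as a min-max game (paraphrased): the classifier f
% minimizes its task loss plus lambda times the best inference gain any
% attack model h can achieve against it.
\min_{f}\Big( L_{\mathrm{task}}(f) + \lambda \max_{h} G_{\mathrm{inf}}(f,h) \Big)
```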
Membership Privacy for Machine Learning Models Through Knowledge Transfer
TLDR
This work proposes a new defense against MIAs, called distillation for membership privacy (DMP), that preserves the utility of the resulting models significantly better than prior defenses, and provides a novel criterion to tune the data used for knowledge transfer in order to amplify the membership privacy of DMP.
SVD-based collaborative filtering with privacy
TLDR
This paper discusses SVD-based CF with privacy, and proposes a randomized perturbation-based scheme to protect users' privacy while still providing recommendations with decent accuracy.
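A minimal sketch of the randomized-perturbation idea follows, assuming a dense toy rating matrix; the noise scale, rank, and global mean-centering are illustrative choices, not the paper's exact scheme.

```python
# Randomized-perturbation SVD collaborative filtering: add zero-mean noise to
# the ratings before factorization, so the server never sees exact values.
import numpy as np

rng = np.random.default_rng(0)
R = rng.integers(1, 6, size=(50, 40)).astype(float)  # toy user-item ratings

noise = rng.uniform(-1.0, 1.0, size=R.shape)  # zero-mean perturbation
R_priv = R + noise                            # what the server actually receives

# Low-rank reconstruction via truncated SVD yields the rating predictions.
U, s, Vt = np.linalg.svd(R_priv - R_priv.mean(), full_matrices=False)
k = 5
R_hat = R_priv.mean() + (U[:, :k] * s[:k]) @ Vt[:k]
print("MAE vs. true ratings:", np.abs(R_hat - R).mean())
```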
walk2friends: Inferring Social Links from Mobility Profiles
TLDR
A novel social relation inference attack that relies on an advanced feature learning technique to automatically summarize users' mobility features; it can predict the social relation of any two individuals and does not require the adversary to have any prior knowledge of existing social relations.
Exploiting Unintended Feature Leakage in Collaborative Learning
TLDR
This work shows that an adversarial participant can infer the presence of exact data points (for example, specific locations) in others' training data, and develops passive and active inference attacks to exploit this leakage.
MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples
TLDR
This work proposes MemGuard, the first defense with formal utility-loss guarantees against black-box membership inference attacks, and is the first to show that adversarial examples can be used as a defensive mechanism against membership inference attacks.
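MemGuard crafts its noise adversarially against a surrogate attack classifier; the toy stand-in below only blends confidences toward uniform while preserving the predicted label, to show the utility constraint, and is not the paper's algorithm.

```python
# Highly simplified MemGuard-style defense: flatten the confidence vector
# (reducing the signal an attack classifier can exploit) while keeping the
# predicted label and the probability simplex intact.
import numpy as np

def defend(conf, alpha=0.5):
    conf = np.asarray(conf, dtype=float)
    uniform = np.full_like(conf, 1.0 / len(conf))
    out = (1 - alpha) * conf + alpha * uniform  # still sums to 1
    assert out.argmax() == conf.argmax()        # utility: label unchanged
    return out

print(defend([0.9, 0.07, 0.03]))  # label preserved, confidences flattened
```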
Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting
TLDR
This work examines the effect that overfitting and influence have on an attacker's ability to learn information about the training data from machine learning models, through either training-set membership inference or attribute inference attacks.
Item-based collaborative filtering recommendation algorithms
TLDR
This paper analyzes item-based collaborative filtering techniques and suggests that item-based algorithms provide dramatically better performance than user-based algorithms, while at the same time providing better quality than the best available user-based algorithms.
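A minimal sketch of the item-based approach described here: precompute item-item cosine similarities, then predict a rating as a similarity-weighted average over the user's rated neighbors. The toy matrix and neighborhood size are illustrative.

```python
# Item-based collaborative filtering: item-item cosine similarities from the
# user-item matrix, then similarity-weighted prediction.
import numpy as np

rng = np.random.default_rng(0)
R = rng.integers(0, 6, size=(100, 30)).astype(float)  # 0 = unrated (toy data)

# Cosine similarity between item columns.
norms = np.linalg.norm(R, axis=0)
S = (R.T @ R) / np.outer(norms, norms).clip(min=1e-9)
np.fill_diagonal(S, 0.0)

def predict(user, item, k=5):
    rated = np.nonzero(R[user])[0]
    neigh = rated[np.argsort(S[item, rated])[-k:]]  # k most similar rated items
    w = S[item, neigh]
    return float(w @ R[user, neigh] / max(w.sum(), 1e-9))

print(predict(user=0, item=3))
```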