Privacy and Fairness in Recommender Systems via Adversarial Training of User Representations

@inproceedings{Resheff2019PrivacyAF,
  title={Privacy and Fairness in Recommender Systems via Adversarial Training of User Representations},
  author={Yehezkel S. Resheff and Yanai Elazar and Shimon Shahar and Oren Sar Shalom},
  booktitle={ICPRAM},
  year={2019}
}
Latent factor models for recommender systems represent users and items as low-dimensional vectors. Privacy risks of such systems have previously been studied mostly in the context of recovery of personal information in the form of usage records from the training data. However, the user representations themselves may be used together with external data to recover private user information such as gender and age. In this paper we show that user vectors calculated by a common recommender system can…
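
A minimal sketch of the setup the abstract describes, in PyTorch: user vectors from a matrix-factorization model feed both the rating predictor and an adversarial classifier for a private attribute, joined by a gradient-reversal layer. All names (AdversarialMF, GradReverse) and hyperparameters are illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on backward."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

class AdversarialMF(nn.Module):
    def __init__(self, n_users, n_items, dim=32, lamb=1.0):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        self.adversary = nn.Linear(dim, 1)  # tries to predict the private attribute
        self.lamb = lamb

    def forward(self, users, items):
        u = self.user_emb(users)
        score = (u * self.item_emb(items)).sum(dim=-1)          # recommendation
        attr_logit = self.adversary(GradReverse.apply(u, self.lamb)).squeeze(-1)
        return score, attr_logit

# One training step: the recommendation loss shapes the user vectors, while
# the reversed gradient from the adversary's loss pushes the private
# attribute (here, gender) out of them.
model = AdversarialMF(n_users=1000, n_items=500)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
users, items = torch.randint(0, 1000, (64,)), torch.randint(0, 500, (64,))
ratings, gender = torch.rand(64), torch.randint(0, 2, (64,)).float()

score, attr_logit = model(users, items)
loss = nn.functional.mse_loss(score, ratings) \
     + nn.functional.binary_cross_entropy_with_logits(attr_logit, gender)
opt.zero_grad(); loss.backward(); opt.step()
```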

Citations

PrivNet: Safeguarding Private Attributes in Transfer Learning for Recommendation

The key idea is to simulate attacks during training, modeled as an adversarial game, so that the transfer learning model becomes robust to attacks and protects unseen users' privacy in the future.

A Survey on Adversarial Recommender Systems

The goal of this survey is to present recent advances on adversarial machine learning (AML) for the security of RS and to show another successful application of AML in generative adversarial networks (GANs) for generative applications, thanks to their ability for learning (high-dimensional) data distributions.

A Survey on Trustworthy Recommender Systems

This survey introduces techniques related to trustworthy and responsible recommendation, including but not limited to explainable recommendation, fairness in recommendation, privacy-aware recommendation, and robustness in recommendation, as well as the relationships between these perspectives in terms of trustworthy and responsible recommendation.

Federated User Representation Learning

This work proposes Federated User Representation Learning (FURL), a simple, scalable, privacy-preserving and resource-efficient way to utilize existing neural personalization techniques in the Federated Learning (FL) setting, and demonstrates that FURL can learn collaboratively through the shared parameters while preserving user privacy.
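
The TLDR's split between shared parameters and a private per-user representation can be sketched as follows: only the shared weights are averaged across clients, while each user's embedding never leaves the device. Class and function names are hypothetical, and a real FL deployment would add secure aggregation and weighting by client data size.

```python
import torch
import torch.nn as nn

class PersonalizedModel(nn.Module):
    """Toy personalization model: user_vec is private, the rest is shared."""
    def __init__(self, dim=16, n_items=100):
        super().__init__()
        self.user_vec = nn.Parameter(torch.zeros(dim))  # stays on-device
        self.item_emb = nn.Embedding(n_items, dim)      # federated
        self.head = nn.Linear(dim, 1)                   # federated

    def shared_state(self):
        return {k: v for k, v in self.state_dict().items()
                if not k.startswith("user_vec")}

def federated_average(clients):
    """Average only the shared parameters and broadcast them back."""
    states = [c.shared_state() for c in clients]
    avg = {k: torch.stack([s[k] for s in states]).mean(dim=0)
           for k in states[0]}
    for c in clients:
        c.load_state_dict(avg, strict=False)  # user_vec is left untouched

clients = [PersonalizedModel() for _ in range(3)]
# ...each client trains locally on its own interactions here...
federated_average(clients)
```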

A survey on Adversarial Recommender Systems: from Attack/Defense strategies to Generative Adversarial Networks

An exhaustive literature review of 74 articles published in major RS and ML journals and conferences is provided, to present recent advances on adversarial machine learning (AML) for the security of RS and to show another successful application of AML in generative adversarial networks (GANs) for generative applications.

Collaborative Image Understanding

This work proposes a multitask learning framework, where the auxiliary task is to reconstruct collaborative latent item representations, which helps to significantly improve the performance of the main task of image classification by up to 9.1%.
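
A minimal reading of that multitask setup, with hypothetical names: a shared backbone feeds the main classification head and an auxiliary head that regresses the item's pre-computed collaborative-filtering latent vector.

```python
import torch.nn as nn

class MultitaskNet(nn.Module):
    def __init__(self, backbone, feat_dim, n_classes, cf_dim):
        super().__init__()
        self.backbone = backbone                    # e.g. a CNN image encoder
        self.cls_head = nn.Linear(feat_dim, n_classes)
        self.cf_head = nn.Linear(feat_dim, cf_dim)  # auxiliary task

    def forward(self, x):
        h = self.backbone(x)
        return self.cls_head(h), self.cf_head(h)

def multitask_loss(logits, cf_pred, labels, cf_target, alpha=0.5):
    """Main classification loss plus reconstruction of CF item latents."""
    return (nn.functional.cross_entropy(logits, labels)
            + alpha * nn.functional.mse_loss(cf_pred, cf_target))
```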

Paired-Consistency: An Example-Based Model-Agnostic Approach to Fairness Regularization in Machine Learning

This work assumes the existence of a fair domain expert capable of generating an extension to the labeled dataset - a small set of example pairs, each having a different value on a subset of protected variables, but judged to warrant a similar model response.
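
Rendered as code, the idea reduces to a regularizer that penalizes disagreement between the model's outputs on the expert-generated pairs; the squared-difference penalty and names below are assumptions, not the paper's exact formulation.

```python
import torch

def paired_consistency_penalty(model, x_a, x_b):
    """x_a[k] and x_b[k] differ only on protected variables but were judged
    by the fairness expert to warrant a similar model response."""
    return ((model(x_a) - model(x_b)) ** 2).mean()

# total_loss = task_loss + mu * paired_consistency_penalty(model, x_a, x_b)
```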

Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection

This work presents Iterative Null-space Projection (INLP), a novel method for removing information from neural representations, based on repeated training of linear classifiers that predict a property the authors aim to remove, followed by projecting the representations onto the classifiers' null-space.
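
The loop is compact enough to sketch directly. This version uses scikit-learn's logistic regression as the linear probe (the paper also experiments with linear SVMs) and removes one direction per iteration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def inlp(X, z, n_iters=10):
    """X: (n, d) representations; z: (n,) protected labels.
    Returns the cleaned X and the accumulated projection matrix."""
    X = X.copy()
    P = np.eye(X.shape[1])
    for _ in range(n_iters):
        probe = LogisticRegression(max_iter=1000).fit(X, z)
        w = probe.coef_[0]
        w = w / np.linalg.norm(w)                 # direction that encodes z
        P_null = np.eye(len(w)) - np.outer(w, w)  # projector onto null(w)
        X = X @ P_null                            # remove that direction
        P = P_null @ P
    return X, P
```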

Contrastive Learning for Fair Representations

This work proposes a method for mitigating bias in classifier training by incorporating contrastive learning, in which instances sharing the same class label are encouraged to have similar representations, while instances sharing a protected attribute are forced further apart.
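
A rough sketch of that pull/push structure with Euclidean distances and a margin; the paper's actual objective is a supervised contrastive loss, so treat this only as an illustration.

```python
import torch
import torch.nn.functional as F

def fair_contrastive_loss(h, y, a, margin=1.0):
    """h: (n, d) representations; y: class labels; a: protected attribute.
    Pull together pairs sharing y, push apart pairs sharing a."""
    d = torch.cdist(h, h)
    same_y = (y[:, None] == y[None, :]).float()
    same_a = (a[:, None] == a[None, :]).float()
    off_diag = 1 - torch.eye(len(h), device=h.device)
    pull = (d ** 2 * same_y * off_diag).mean()
    push = (F.relu(margin - d) ** 2 * same_a * off_diag).mean()
    return pull + push
```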

A Comprehensive Survey on Trustworthy Recommender Systems


References


A differential privacy framework for matrix factorization recommender systems

It is shown that, of all the algorithms considered, input perturbation results in the best recommendation accuracy, while guaranteeing a solid level of privacy protection against attacks that aim to gain knowledge about either specific user ratings or even the existence of these ratings.
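
Input perturbation, the approach this TLDR (and the one for "Applying Differential Privacy to Matrix Factorization" below) singles out, is simple to sketch: noise the ratings once with the Laplace mechanism, then run ordinary matrix factorization on the result. The per-rating sensitivity is assumed here to be the rating range.

```python
import numpy as np

def perturb_ratings(R, epsilon, r_min=1.0, r_max=5.0):
    """Add Laplace noise calibrated to the rating range, then clamp;
    downstream factorization needs no further privacy mechanism."""
    scale = (r_max - r_min) / epsilon         # sensitivity / epsilon
    noisy = R + np.random.laplace(0.0, scale, size=R.shape)
    return np.clip(noisy, r_min, r_max)
```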

Differentially private recommender systems: building privacy into the net

This work considers the problem of producing recommendations from collective user behavior while simultaneously providing guarantees of privacy for these users, and finds that several of the leading approaches in the Netflix Prize competition can be adapted to provide differential privacy, without significantly degrading their accuracy.

Privacy-Preserving Personalized Recommendation: An Instance-Based Approach via Differential Privacy

The first lightweight and provably private solution for personalized recommendation under untrusted server settings; in this novel setting, users' private data is obfuscated before leaving their devices, giving users greater control over their data and service providers less responsibility for privacy protection.

Privacy-preserving matrix factorization

This work shows that a recommender can profile items without ever learning the ratings users provide, or even which items they have rated, by designing a system that performs matrix factorization, a popular method used in a variety of modern recommendation systems, through a cryptographic technique known as garbled circuits.

Applying Differential Privacy to Matrix Factorization

This paper proposes and study several approaches for applying differential privacy to matrix factorization, and evaluates the privacy-accuracy trade-offs offered by each approach, and shows that input perturbation yields the best recommendation accuracy, while guaranteeing a solid level of privacy protection.

Data Decisions and Theoretical Implications when Adversarially Learning Fair Representations

An adversarial training procedure is used to remove information about the sensitive attribute from the latent representation learned by a neural network, and the data distribution empirically drives the adversary's notion of fairness.

Learning Fair Representations

We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the proportion in the population as a whole) and individual fairness (similar individuals are treated similarly).
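
The group-fairness notion in that definition reduces to a one-line check; a sketch with y_pred as binary predictions and group as protected-group membership (both NumPy arrays):

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Gap between the protected group's positive rate and the overall rate;
    the group-fairness criterion above asks for this to be zero."""
    return abs(y_pred[group == 1].mean() - y_pred.mean())
```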

BPR: Bayesian Personalized Ranking from Implicit Feedback

This paper presents a generic optimization criterion, BPR-Opt, for personalized ranking that is the maximum posterior estimator derived from a Bayesian analysis of the problem, and provides a generic learning algorithm for optimizing models with respect to BPR-Opt.
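
The criterion itself is short: for sampled (user, observed item, unobserved item) triples, maximize ln σ(x̂_uij) with L2 regularization, where x̂_uij is the difference of the two predicted scores. A minimal PyTorch sketch for matrix factorization (class and variable names assumed):

```python
import torch
import torch.nn as nn

class BPRMF(nn.Module):
    def __init__(self, n_users, n_items, dim=32):
        super().__init__()
        self.U = nn.Embedding(n_users, dim)   # user factors
        self.V = nn.Embedding(n_items, dim)   # item factors

    def loss(self, u, i, j, reg=1e-4):
        pu, qi, qj = self.U(u), self.V(i), self.V(j)
        x_uij = (pu * (qi - qj)).sum(dim=-1)            # score gap
        nll = -nn.functional.logsigmoid(x_uij).mean()   # -ln sigma(x_uij)
        l2 = (pu.pow(2) + qi.pow(2) + qj.pow(2)).sum(dim=-1).mean()
        return nll + reg * l2
```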

Factorization meets the neighborhood: a multifaceted collaborative filtering model

The factor and neighborhood models can now be smoothly merged, thereby building a more accurate combined model and a new evaluation metric is suggested, which highlights the differences among methods, based on their performance at a top-K recommendation task.

BlurMe: inferring and obfuscating user gender based on ratings

This work shows that a recommender system can infer the gender of a user with high accuracy based solely on the ratings users provide (without additional metadata), given a relatively small number of users who share their demographics.
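
The attack amounts to fitting a linear classifier on rating indicators for the users whose gender is known. A self-contained toy sketch (random data, so the scores are meaningless; the point is the pipeline):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 200))  # which items each user rated
g = rng.integers(0, 2, size=500)         # gender, known for a small subset

# Train on the minority of users who share demographics, infer the rest.
clf = LogisticRegression(max_iter=1000).fit(X[:50], g[:50])
inferred = clf.predict(X[50:])
```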