Privacy and Fairness in Recommender Systems via Adversarial Training of User Representations
@inproceedings{Resheff2018PrivacyAF,
  title={Privacy and Fairness in Recommender Systems via Adversarial Training of User Representations},
  author={Yehezkel S. Resheff and Yanai Elazar and Shimon Shahar and Oren Sar Shalom},
  booktitle={International Conference on Pattern Recognition Applications and Methods},
  year={2018}
}
Latent factor models for recommender systems represent users and items as low-dimensional vectors. Privacy risks of such systems have previously been studied mostly in the context of recovering personal information, in the form of usage records, from the training data. However, the user representations themselves may be used together with external data to recover private user information such as gender and age. In this paper, we show that user vectors calculated by a common recommender system can…
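The abstract describes scrubbing private attributes from user vectors via adversarial training. Below is a minimal sketch of that idea, not the authors' exact model: a matrix-factorization scorer is trained jointly with an adversary that tries to predict a private attribute (e.g. gender) from the user embedding, and a gradient-reversal layer pushes the embedding to hide that attribute. Class and parameter names (`AdversarialMF`, `GradReverse`, `lam`) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass, negated gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x
    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

class AdversarialMF(nn.Module):
    """Illustrative sketch: MF recommender + adversary on the user embedding."""
    def __init__(self, n_users, n_items, dim=32):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        self.adversary = nn.Linear(dim, 1)   # tries to predict a private attribute

    def forward(self, users, items):
        u, v = self.user_emb(users), self.item_emb(items)
        score = (u * v).sum(-1)               # recommendation score
        # gradient reversal: the adversary learns to predict the attribute,
        # while the user embedding is pushed to make that prediction fail
        attr_logit = self.adversary(GradReverse.apply(u)).squeeze(-1)
        return score, attr_logit

def loss_fn(score, rating, attr_logit, attr, lam=1.0):
    """Recommendation loss plus (reversed) adversarial attribute loss."""
    rec = F.mse_loss(score, rating)                               # rating: float tensor
    adv = F.binary_cross_entropy_with_logits(attr_logit, attr)    # attr: float in {0, 1}
    return rec + lam * adv
```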
11 Citations
PrivNet: Safeguarding Private Attributes in Transfer Learning for Recommendation
- Computer Science, FINDINGS
- 2020
The key idea is to simulate attacks during training, modeled as an adversarial game, so that the transfer learning model becomes robust to attacks and protects the privacy of unseen users in the future.
A Survey on Adversarial Recommender Systems
- Computer Science, ACM Comput. Surv.
- 2022
The goal of this survey is to present recent advances in adversarial machine learning (AML) for the security of RS and to show another successful application of AML in generative adversarial networks (GANs) for generative tasks, thanks to their ability to learn (high-dimensional) data distributions.
Federated User Representation Learning
- Computer Science, ArXiv
- 2019
This work proposes Federated User Representation Learning (FURL), a simple, scalable, privacy-preserving and resource-efficient way to utilize existing neural personalization techniques in the Federated Learning (FL) setting, and demonstrates that FURL can learn collaboratively through the shared parameters while preserving user privacy.
Collaborative Image Understanding
- Computer Science, CIKM
- 2022
This work proposes a multitask learning framework, where the auxiliary task is to reconstruct collaborative latent item representations, which helps to significantly improve the performance of the main task of image classification by up to 9.1%.
Paired-Consistency: An Example-Based Model-Agnostic Approach to Fairness Regularization in Machine Learning
- Computer Science, PKDD/ECML Workshops
- 2019
This work assumes the existence of a fair domain expert capable of generating an extension to the labeled dataset - a small set of example pairs, each having a different value on a subset of protected variables, but judged to warrant a similar model response.
Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection
- Computer Science, ACL
- 2020
This work presents Iterative Null-space Projection (INLP), a novel method for removing information from neural representations based on repeated training of linear classifiers that predict a certain property the authors aim to remove, followed by projection of the representations on their null-space.
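A compressed sketch of the iterative nullspace-projection procedure described above, assuming a binary protected attribute; function and variable names are mine, and the full method includes details (multiclass directions, stopping criteria) omitted here.

```python
import numpy as np
from scipy.linalg import null_space
from sklearn.linear_model import LogisticRegression

def inlp(X, z, n_iters=10):
    """Iteratively remove linearly decodable information about attribute z
    from representations X (rows are examples)."""
    d = X.shape[1]
    P = np.eye(d)                         # accumulated projection
    for _ in range(n_iters):
        # 1) train a linear classifier to predict the protected attribute
        clf = LogisticRegression(max_iter=1000).fit(X @ P.T, z)
        w = clf.coef_                     # (1, d): direction that predicts z
        # 2) project onto the classifier's nullspace, removing that direction
        basis = null_space(w)             # (d, d-1) orthonormal basis
        P = (basis @ basis.T) @ P
    return X @ P.T, P                     # "guarded" representations + projection
```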
Contrastive Learning for Fair Representations
- Computer Science, ArXiv
- 2021
This work proposes a method for mitigating bias in classifier training by incorporating contrastive learning, in which instances sharing the same class label are encouraged to have similar representations, while instances sharing a protected attribute are pushed further apart.
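A simplified sketch of such an objective, not the paper's exact loss: same-label pairs are attracted, same-protected-attribute pairs are repelled. The temperature, masking, and averaging choices here are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def fair_contrastive_loss(h, y, a, temp=0.1):
    """h: (n, d) representations; y: task labels; a: protected attribute.
    Encourage same-label pairs to be similar and same-attribute pairs
    to be dissimilar (a simplified version of the cited objective)."""
    h = F.normalize(h, dim=-1)
    sim = h @ h.t() / temp                                  # pairwise cosine similarities
    off_diag = 1 - torch.eye(len(h), device=h.device)
    same_y = (y[:, None] == y[None, :]).float() * off_diag
    same_a = (a[:, None] == a[None, :]).float() * off_diag
    pull = -(sim * same_y).sum() / same_y.sum().clamp(min=1)   # attract same class
    push = (sim * same_a).sum() / same_a.sum().clamp(min=1)    # repel same attribute
    return pull + push
```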
A Comprehensive Survey on Trustworthy Recommender Systems
- Education, ArXiv
- 2022
Wenqi Fan (The Hong Kong Polytechnic University), Xiangyu Zhao (City University of Hong Kong), Xiao Chen (The Hong Kong Polytechnic University), Jingran Su (The Hong Kong…
A Survey on Trustworthy Recommender Systems
- Computer Science, ArXiv
- 2022
This survey introduces techniques related to trustworthy and responsible recommendation, including but not limited to explainable recommendation, fairness in recommendation, privacy-aware recommendation, and robustness in recommendation, as well as the relationships between these different perspectives of trustworthy and responsible recommendation.
Fairness in Recommendation: A Survey
- Computer Science, ArXiv
- 2022
This survey covers the foundations of fairness in the recommendation literature, focusing on taxonomies of current fairness definitions, typical techniques for improving fairness, and datasets for fairness studies in recommendation.
References
Showing 1-10 of 26 references
A differential privacy framework for matrix factorization recommender systems
- Computer Science, User Modeling and User-Adapted Interaction
- 2016
It is shown that, of all the algorithms considered, input perturbation results in the best recommendation accuracy, while guaranteeing a solid level of privacy protection against attacks that aim to gain knowledge about either specific user ratings or even the existence of these ratings.
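The input-perturbation approach singled out above can be illustrated with a short sketch: calibrated Laplace noise is added to each observed rating before a standard matrix factorization is trained on the noisy data. Parameter names and the clamping step are illustrative assumptions; the cited framework also covers rating-existence privacy and budget allocation, which this sketch omits.

```python
import numpy as np

def perturb_ratings(R, epsilon, r_min=1.0, r_max=5.0):
    """Input perturbation for differentially private matrix factorization:
    add Laplace noise, scaled to the rating range, to each observed rating,
    then clamp back to the valid range before training as usual."""
    sensitivity = r_max - r_min
    noise = np.random.laplace(0.0, sensitivity / epsilon, size=R.shape)
    return np.clip(R + noise, r_min, r_max)
```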
Differentially private recommender systems: building privacy into the net
- Computer Science, KDD
- 2009
This work considers the problem of producing recommendations from collective user behavior while simultaneously providing guarantees of privacy for these users, and finds that several of the leading approaches in the Netflix Prize competition can be adapted to provide differential privacy, without significantly degrading their accuracy.
Privacy-Preserving Personalized Recommendation: An Instance-Based Approach via Differential Privacy
- Computer Science, 2014 IEEE International Conference on Data Mining
- 2014
The first lightweight and provably private solution for personalized recommendation under an untrusted-server setting: users' private data is obfuscated before leaving their devices, giving users greater control over their data and service providers less responsibility for privacy protection.
Privacy-preserving matrix factorization
- Computer Science, CCS
- 2013
This work shows that a recommender can profile items without ever learning the ratings users provide, or even which items they have rated, by designing a system that performs matrix factorization, a popular method used in a variety of modern recommendation systems, through a cryptographic technique known as garbled circuits.
Applying Differential Privacy to Matrix Factorization
- Computer Science, RecSys
- 2015
This paper proposes and study several approaches for applying differential privacy to matrix factorization, and evaluates the privacy-accuracy trade-offs offered by each approach, and shows that input perturbation yields the best recommendation accuracy, while guaranteeing a solid level of privacy protection.
Data Decisions and Theoretical Implications when Adversarially Learning Fair Representations
- Computer Science, ArXiv
- 2017
An adversarial training procedure is used to remove information about the sensitive attribute from the latent representation learned by a neural network, and the data distribution empirically drives the adversary's notion of fairness.
Learning Fair Representations
- Computer Science, ICML
- 2013
We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the…
BPR: Bayesian Personalized Ranking from Implicit Feedback
- Computer Science, UAI
- 2009
This paper presents a generic optimization criterion, BPR-Opt, for personalized ranking that is the maximum posterior estimator derived from a Bayesian analysis of the problem, and provides a generic learning algorithm for optimizing models with respect to BPR-Opt.
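A minimal sketch of the BPR-Opt criterion for the usual matrix-factorization instantiation: for each sampled triple (user, observed item, unobserved item), the loss is the negative log-sigmoid of the score difference. Function and argument names are illustrative, and the regularization terms of the full criterion are omitted.

```python
import torch
import torch.nn.functional as F

def bpr_loss(user_emb, pos_item_emb, neg_item_emb):
    """BPR-Opt for a batch of (user, observed item, unobserved item) triples:
    maximize the log-probability that the observed item outscores the other."""
    pos_score = (user_emb * pos_item_emb).sum(-1)
    neg_score = (user_emb * neg_item_emb).sum(-1)
    return -F.logsigmoid(pos_score - neg_score).mean()
```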
Factorization meets the neighborhood: a multifaceted collaborative filtering model
- Computer Science, KDD
- 2008
The factor and neighborhood models can now be smoothly merged, thereby building a more accurate combined model and a new evaluation metric is suggested, which highlights the differences among methods, based on their performance at a top-K recommendation task.
BlurMe: inferring and obfuscating user gender based on ratings
- Computer Science, RecSys '12
- 2012
This work shows that a recommender system can infer the gender of a user with high accuracy based solely on the ratings users provide (without additional metadata), together with the demographics shared by a relatively small number of users.
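The inference attack summarized above can be sketched in a few lines: a linear classifier is trained on the raw rating vectors of the (relatively few) users who disclose their gender and then applied to everyone else. The choice of logistic regression and the names below are assumptions for illustration; the obfuscation defenses BlurMe proposes against such attacks are not shown.

```python
from scipy.sparse import csr_matrix
from sklearn.linear_model import LogisticRegression

def infer_gender(ratings: csr_matrix, gender_known, idx_known, idx_unknown):
    """ratings: sparse user-by-item rating matrix.
    Fit a linear classifier on the rating rows of users who disclosed their
    gender, then predict gender for the remaining users."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(ratings[idx_known], gender_known)
    return clf.predict(ratings[idx_unknown])
```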