Corpus ID: 237490883

Efficient-FedRec: Efficient Federated Learning Framework for Privacy-Preserving News Recommendation

@inproceedings{Yi2021EfficientFedRecEF,
  title={Efficient-FedRec: Efficient Federated Learning Framework for Privacy-Preserving News Recommendation},
  author={Jingwei Yi and Fangzhao Wu and Chuhan Wu and Ruixuan Liu and Guangzhong Sun and Xing Xie},
  booktitle={EMNLP},
  year={2021}
}
  • Jingwei Yi, Fangzhao Wu, Chuhan Wu, Ruixuan Liu, Guangzhong Sun, Xing Xie
  • Published in EMNLP, 12 September 2021
  • Computer Science
News recommendation is critical for personalized news access. Most existing news recommendation methods rely on centralized storage of users’ historical news click behavior data, which may lead to privacy concerns and hazards. Federated Learning is a privacy-preserving framework for multiple clients to collaboratively train models without sharing their private data. However, the computation and communication cost of directly learning many existing news recommendation models in a federated way…
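The abstract describes training news recommendation models in a federated way, so that click behavior stays on users' devices and only model updates reach the server. Below is a minimal sketch of that general pattern, assuming a FedAvg-style loop; the toy logistic scorer, the synthetic client data, and all function names are illustrative and are not the paper's actual Efficient-FedRec design.

```python
import numpy as np

# Toy "news recommendation" model: a logistic scorer over news features, trained
# with a FedAvg-style loop. Each client's click data never leaves the client;
# the server only averages model parameters.
DIM = 16
rng = np.random.default_rng(0)

def local_update(global_weights, clicks, lr=0.1, epochs=1):
    """One client's update: SGD steps on its own private click history."""
    w = global_weights.copy()
    for _ in range(epochs):
        for x, y in clicks:                       # x: news features, y: 1 click / 0 skip
            pred = 1.0 / (1.0 + np.exp(-(w @ x)))
            w -= lr * (pred - y) * x
    return w, len(clicks)

def federated_round(global_weights, client_datasets):
    """Server step: aggregate locally updated parameters, never raw behavior data."""
    results = [local_update(global_weights, d) for d in client_datasets]
    updates, sizes = zip(*results)
    return np.average(updates, axis=0, weights=np.asarray(sizes, dtype=float))

# Synthetic clients, each holding private (news_features, clicked) pairs.
clients = [[(rng.normal(size=DIM), float(rng.integers(0, 2))) for _ in range(20)]
           for _ in range(5)]

weights = np.zeros(DIM)
for _ in range(10):
    weights = federated_round(weights, clients)
```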

References

Showing 1–10 of 36 references
Federated Collaborative Filtering for Privacy-Preserving Personalized Recommendation System
TLDR
Empirical validation confirms a collaborative filter can be federated without a loss of accuracy compared to a standard implementation, hence enhancing the user's privacy in a widely used recommender application while maintaining recommender performance.
Adaptive Federated Learning in Resource Constrained Edge Computing Systems
TLDR
This paper analyzes the convergence bound of distributed gradient descent from a theoretical point of view, and proposes a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget.
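A toy illustration of the tradeoff that work controls: under a fixed resource budget, doing more local update steps between aggregations leaves room for fewer global rounds. The cost constants and function below are invented purely to make the tradeoff concrete and are not taken from the paper.

```python
# Illustration only: with a fixed resource budget, choosing tau (local update
# steps between two global aggregations) trades local computation against
# communication. The cost constants are hypothetical.
LOCAL_STEP_COST = 1.0     # cost of one local gradient step (made up)
AGGREGATION_COST = 20.0   # cost of one global aggregation round (made up)
BUDGET = 500.0

def affordable_rounds(tau):
    """Number of global rounds that fit in the budget for a given tau."""
    return int(BUDGET // (tau * LOCAL_STEP_COST + AGGREGATION_COST))

for tau in (1, 5, 10, 50):
    rounds = affordable_rounds(tau)
    print(f"tau={tau:3d}: {rounds:3d} global rounds, {rounds * tau:4d} local steps in total")
```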
Secure Federated Matrix Factorization
TLDR
It is first proved that uploading plain gradient information can still leak users’ raw data, and a secure matrix factorization framework under the federated learning setting, called FedMF, is then proposed, in which the model is learned while each user uploads only encrypted gradient information to the server.
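A minimal sketch of the gradient-upload pattern that summary refers to: the server keeps the item factors, each user keeps its ratings and user factor locally, and only gradients with respect to the item factors are sent up. FedMF additionally encrypts those gradients; the encryption step is omitted here, and all names and sizes below are illustrative.

```python
import numpy as np

# Gradient-upload matrix factorization sketch: the server holds the item factors,
# each user keeps its ratings and user factor locally and uploads only the
# gradient w.r.t. the item factors. (FedMF additionally encrypts this gradient;
# that step is omitted here.)
rng = np.random.default_rng(1)
N_ITEMS, K = 6, 4
item_factors = rng.normal(scale=0.1, size=(N_ITEMS, K))    # held by the server

def user_round(item_factors, ratings, lr=0.05):
    """ratings: {item_id: value}, never leaves the user's device."""
    user_vec = rng.normal(scale=0.1, size=K)                # local user factor
    grad_items = np.zeros_like(item_factors)
    for item, r in ratings.items():
        err = user_vec @ item_factors[item] - r
        grad_items[item] += err * user_vec                  # uploaded to the server
        user_vec -= lr * err * item_factors[item]           # updated locally
    return grad_items

grad = user_round(item_factors, {0: 4.0, 3: 5.0})
# The leakage risk the paper analyses: the gradient is non-zero exactly at the
# items this user interacted with.
print(np.nonzero(grad.any(axis=1))[0])                      # -> [0 3]
```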
Practical Secure Aggregation for Privacy-Preserving Machine Learning
TLDR
This protocol allows a server to compute the sum of large, user-held data vectors from mobile devices in a secure manner, and can be used, for example, in a federated learning setting, to aggregate user-provided model updates for a deep neural network.
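The core trick behind secure aggregation can be sketched in a few lines: pairs of clients add and subtract a shared random mask, so individual uploads look random to the server while the masks cancel in the sum. The real protocol also handles client dropouts, key agreement, and finite-field arithmetic, none of which is shown in this sketch.

```python
import numpy as np

# Heavily simplified core of secure aggregation: each pair of clients agrees on
# a random mask that one adds and the other subtracts, so individual uploads are
# unreadable but the masks cancel in the server's sum.
rng = np.random.default_rng(2)
DIM, N_CLIENTS = 4, 3
updates = [rng.normal(size=DIM) for _ in range(N_CLIENTS)]  # private per-client vectors

masked = [u.copy() for u in updates]
for i in range(N_CLIENTS):
    for j in range(i + 1, N_CLIENTS):
        mask = rng.normal(size=DIM)     # stand-in for a PRG seeded by a shared key
        masked[i] += mask
        masked[j] -= mask

server_sum = sum(masked)                # all the server ever sees are masked vectors
assert np.allclose(server_sum, sum(updates))
```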
Neural News Recommendation with Attentive Multi-View Learning
TLDR
A neural news recommendation approach is proposed that learns informative representations of users and news by exploiting different kinds of news information, effectively improving the performance of news recommendation.
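A rough sketch of the multi-view idea behind that approach: separate encodings of a news item's title, body, and category are combined with attention into a single news vector. The random "encodings" and attention query below are stand-ins for what the real model learns end to end.

```python
import numpy as np

# Attentive multi-view pooling sketch: combine per-view encodings of one news
# item into a single news vector using softmax attention weights.
rng = np.random.default_rng(4)
DIM = 8
views = {
    "title": rng.normal(size=DIM),
    "body": rng.normal(size=DIM),
    "category": rng.normal(size=DIM),
}
query = rng.normal(size=DIM)                      # learned attention query in the real model

scores = np.array([query @ v for v in views.values()])
weights = np.exp(scores - scores.max())
weights /= weights.sum()                          # softmax over the views
news_vector = sum(w * v for w, v in zip(weights, views.values()))
```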
Empowering News Recommendation with Pre-trained Language Models
Personalized news recommendation is an essential technique for online news services. News articles usually contain rich textual content, and accurate news modeling is important for personalized news recommendation.
Communication-Efficient Learning of Deep Networks from Decentralized Data
TLDR
This work presents a practical method for the federated learning of deep networks based on iterative model averaging, and conducts an extensive empirical evaluation, considering five different model architectures and four datasets.
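The iterative model averaging at the heart of that method reduces, each round, to a data-weighted average of the clients' locally updated weights. In the usual notation (K clients, client k holding n_k of the n training examples and returning local weights w_{t+1}^k):

w_{t+1} = \sum_{k=1}^{K} \frac{n_k}{n} \, w_{t+1}^{k}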
Neural News Recommendation with Long- and Short-term User Representations
TLDR
A neural news recommendation approach is proposed that learns both long- and short-term user representations and effectively improves the performance of news recommendation.
Privacy Enhanced Matrix Factorization for Recommendation with Local Differential Privacy
TLDR
This paper develops novel matrix factorization algorithms under local differential privacy (LDP), introduces a factor that stabilizes the perturbed gradients, and evaluates the recommendation accuracy of the proposed recommender system.
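A minimal sketch of the local-differential-privacy step described there: a gradient is clipped and randomly perturbed on the device before upload, so the server only ever sees a noised value. The Laplace mechanism, epsilon, and clip bound below are generic illustrations; the paper's stabilizing factor for the perturbed gradients is not reproduced.

```python
import numpy as np

# LDP perturbation sketch: clip the gradient to bound sensitivity, then add
# Laplace noise calibrated to that bound before it leaves the device.
rng = np.random.default_rng(3)

def ldp_perturb(grad, clip=1.0, epsilon=1.0):
    g = np.clip(grad, -clip, clip)            # bound each coordinate's sensitivity
    scale = 2.0 * clip / epsilon              # Laplace scale for that sensitivity
    return g + rng.laplace(scale=scale, size=g.shape)

raw_grad = np.array([0.3, -2.0, 0.7])         # computed locally from private data
print(ldp_perturb(raw_grad))                  # what actually gets uploaded
```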
Exploiting Unintended Feature Leakage in Collaborative Learning
TLDR
This work shows that an adversarial participant can infer the presence of exact data points -- for example, specific locations -- in others' training data and develops passive and active inference attacks to exploit this leakage.