Corpus ID: 238634824

Private Federated Learning Without a Trusted Server: Optimal Algorithms for Convex Losses

Andrew Lowy and Meisam Razaviyayn
This paper studies federated learning (FL) in the absence of a trustworthy server or trustworthy clients. In this setting, each client must ensure the privacy of its own data without relying on the server or on other clients. We study local differential privacy (LDP) at the client level and provide tight upper and lower bounds that establish the minimax optimal rates (up to logarithmic factors) for LDP convex and strongly convex federated stochastic optimization. Our rates match the optimal statistical…
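As a rough illustration of the client-side privatization this trust model requires (not the paper's actual algorithm), a client can clip its gradient to a fixed ℓ2 norm and add Gaussian noise before anything leaves the device. The function name `ldp_gradient` and the values of `clip_norm` and `noise_mult` below are hypothetical choices for the sketch:

```python
import numpy as np

def ldp_gradient(grad, clip_norm=1.0, noise_mult=1.0, rng=None):
    """Privatize one client gradient: clip to L2 norm clip_norm,
    then add Gaussian noise scaled by noise_mult * clip_norm.

    Illustrative sketch only; parameter values are assumptions,
    not calibrated to a target privacy level.
    """
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_mult * clip_norm, size=grad.shape)
    return clipped + noise

# Each client perturbs locally, so the server never sees a raw gradient.
g = np.array([3.0, 4.0])  # raw gradient, L2 norm 5
private_g = ldp_gradient(g, clip_norm=1.0, noise_mult=0.5)
```

Because the noise is added on-device, the guarantee holds even against a curious server, which is the defining constraint of the setting above.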

Advances and Open Problems in Federated Learning
Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.
FLAME: Differentially Private Federated Learning in the Shuffle Model
By leveraging the privacy amplification effect in the recently proposed shuffle model of differential privacy, this work achieves the best of both worlds: accuracy comparable to the curator model and strong privacy without relying on any trusted party.
Federated Learning for Internet of Things: A Comprehensive Survey
This article explores the potential of FL for enabling a wide range of IoT services, including IoT data sharing, data offloading and caching, attack detection, localization, mobile crowdsensing, and IoT privacy and security.
Learning with User-Level Privacy
User-level DP protects a user's entire contribution, a more stringent but more realistic guarantee against information leaks; this work shows that for high-dimensional mean estimation, empirical risk minimization with smooth losses, stochastic convex optimization, and learning a hypothesis class with finite metric entropy, the privacy cost decreases as O(1/√m) as users provide more samples.
Local Differential Privacy-Based Federated Learning for Internet of Things
This article proposes to integrate federated learning and local differential privacy (LDP) so that crowdsourcing applications can train machine learning models, and proposes four LDP mechanisms to perturb the gradients generated by vehicles.
Output Perturbation for Differentially Private Convex Optimization with Improved Population Loss Bounds, Runtimes and Applications to Private Adversarial Training
This work studies a completely general family of convex, Lipschitz loss functions and establishes the first known DP excess risk and runtime bounds for this broad class; the theory quantifies tradeoffs between adversarial robustness, privacy, and runtime.
Shuffle Private Stochastic Convex Optimization
This work presents interactive shuffle protocols for stochastic convex optimization, which rely on a new noninteractive protocol for summing vectors of bounded ℓ2 norm, and obtains loss guarantees for a variety of convex loss functions that significantly improve on those of the local model and sometimes match those of the central model.
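The bounded-ℓ2 summation primitive behind such shuffle protocols can be pictured with a toy sketch. The real protocols split each vector into many randomized messages; here, as a simplifying assumption, each client clips and noises its whole vector locally, and the shuffler merely permutes messages, which hides who sent what while leaving the sum unchanged. The names `clip_l2` and `shuffled_sum` and the parameter values are illustrative:

```python
import numpy as np

def clip_l2(v, bound=1.0):
    """Scale v so its L2 norm is at most `bound`."""
    n = np.linalg.norm(v)
    return v if n <= bound else v * (bound / n)

def shuffled_sum(vectors, bound=1.0, sigma=0.1, rng=None):
    """Toy analyzer view of a shuffle protocol: each client clips its
    vector to L2 norm <= bound and adds Gaussian noise locally; the
    shuffler randomly permutes the messages; the analyzer sums them.
    The permutation does not change the sum, only unlinks messages
    from senders, which is what amplifies privacy."""
    rng = np.random.default_rng() if rng is None else rng
    msgs = np.stack([clip_l2(np.asarray(v, dtype=float), bound)
                     + rng.normal(0.0, sigma, size=np.shape(v))
                     for v in vectors])
    rng.shuffle(msgs)        # the shuffler's only job: permute rows
    return msgs.sum(axis=0)  # permutation-invariant aggregate
```

The point of the sketch is that summation is permutation-invariant, so the analyzer loses nothing useful while each individual message becomes harder to attribute.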
Shuffled Model of Differential Privacy in Federated Learning
For convex loss functions, it is proved that the proposed CLDP-SGD algorithm matches the known lower bounds on the centralized private ERM while using a finite number of bits per iteration for each client, i.e., effectively getting communication efficiency for “free”.
A Unified Theory of Decentralized SGD with Changing Topology and Local Updates
This paper introduces a unified convergence analysis that covers a large variety of decentralized SGD methods which so far have required different intuitions, have different applications, and which have been developed separately in various communities.
Analyzing User-Level Privacy Attack Against Federated Learning
This paper makes the first attempt to explore user-level privacy leakage via an attack from a malicious server, and proposes a framework incorporating a GAN with a multi-task discriminator, called multi-task GAN with Auxiliary Identification (mGAN-AI), which simultaneously discriminates the category, reality, and client identity of input samples.