Publications
Advances and Open Problems in Federated Learning
TLDR
Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.
Practical Secure Aggregation for Privacy-Preserving Machine Learning
TLDR
This protocol allows a server to compute the sum of large, user-held data vectors from mobile devices in a secure manner, and can be used, for example, in a federated learning setting, to aggregate user-provided model updates for a deep neural network.
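To make the idea concrete, the following is a minimal sketch of the pairwise-masking principle behind such a protocol: every pair of users shares a random mask that one adds and the other subtracts, so the masks cancel in the server's sum and only the aggregate is revealed. The full protocol's key agreement, secret sharing, dropout handling, and double masking are omitted, and all names, sizes, and constants here are illustrative assumptions rather than the paper's construction.

```python
# Minimal sketch of pairwise additive masking for secure aggregation.
# Real protocols add Diffie-Hellman key agreement, secret sharing for
# dropouts, and double masking; everything here is illustrative.
import numpy as np

MOD = 2 ** 32          # arithmetic is over a finite group, so masks wrap around
DIM = 4                # length of each user's vector
rng = np.random.default_rng(0)

users = [0, 1, 2]
inputs = {u: rng.integers(0, 100, size=DIM, dtype=np.uint64) for u in users}

# Every ordered pair (u, v) with u < v shares a random mask; u adds it, v subtracts it.
pair_masks = {
    (u, v): rng.integers(0, MOD, size=DIM, dtype=np.uint64)
    for u in users for v in users if u < v
}

def masked_update(u):
    """Return user u's vector with all pairwise masks applied (mod 2^32)."""
    y = inputs[u].copy()
    for (a, b), m in pair_masks.items():
        if a == u:
            y = (y + m) % MOD
        elif b == u:
            y = (y - m) % MOD
    return y

# The server only ever sees masked vectors; their sum reveals just the aggregate.
server_sum = sum(masked_update(u) for u in users) % MOD
assert np.array_equal(server_sum, sum(inputs.values()) % MOD)
print(server_sum)
```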
Towards Federated Learning at Scale: System Design
TLDR
A scalable production system for Federated Learning on mobile devices, built on TensorFlow, is described; the paper presents the resulting high-level design and sketches some of the challenges and their solutions.
Church: a language for generative models
TLDR
This work introduces Church, a universal language for describing stochastic generative processes, based on the Lisp model of lambda calculus, containing a pure Lisp as its deterministic subset.
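As a rough illustration of the underlying idea, a model in this style is just a stochastic procedure, and conditioning can be expressed by filtering its runs. Church itself is a Lisp dialect, so the Python rendering below and its function names are assumptions for illustration, not the paper's code.

```python
# Illustrative sketch of a Church-style probabilistic program: the model is a
# stochastic procedure, and inference (here, naive rejection sampling)
# conditions its random choices on an observation.
import random

def flip(p=0.5):
    return random.random() < p

def model():
    """Generative process: two coin flips and a derived observation."""
    a, b = flip(), flip()
    return {"a": a, "b": b, "a_or_b": a or b}

def rejection_query(model, condition, n=10_000):
    """Sample the model repeatedly, keep runs consistent with the condition."""
    kept = [t for t in (model() for _ in range(n)) if condition(t)]
    return sum(t["a"] for t in kept) / len(kept)

# Estimate P(a | a or b), which should be close to 2/3.
print(rejection_query(model, lambda t: t["a_or_b"]))
```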
Discrete Distribution Estimation under Local Privacy
TLDR
New mechanisms are presented, including hashed K-ary Randomized Response (KRR), that empirically meet or exceed the utility of existing mechanisms at all privacy levels and demonstrate the order-optimality of KRR and the existing RAPPOR mechanism at different privacy regimes.
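The following is a minimal sketch of plain k-ary Randomized Response and the corresponding unbiased histogram estimator, assuming categorical values in {0, ..., k-1}. The hashing refinement from the paper is omitted, and the constants and function names are illustrative assumptions.

```python
# k-ary Randomized Response (k-RR): report the true category with probability
# e^eps / (e^eps + k - 1), otherwise a uniformly random other category, then
# debias the observed counts to estimate the true histogram.
import math
import random
from collections import Counter

def krr(value: int, k: int, eps: float) -> int:
    """Privatize one categorical value in {0, ..., k-1} under eps-local DP."""
    p_truth = math.exp(eps) / (math.exp(eps) + k - 1)
    if random.random() < p_truth:
        return value
    return random.choice([v for v in range(k) if v != value])

def estimate_counts(reports, k, eps, n):
    """Unbiased estimate of the true counts from the noisy reports."""
    p = math.exp(eps) / (math.exp(eps) + k - 1)   # prob. of reporting the truth
    q = 1.0 / (math.exp(eps) + k - 1)             # prob. of any fixed wrong report
    obs = Counter(reports)
    return {v: (obs[v] - n * q) / (p - q) for v in range(k)}

# Example: 100k users, true distribution skewed toward category 0.
k, eps, n = 4, 1.0, 100_000
truth = [random.choices(range(k), weights=[5, 2, 2, 1])[0] for _ in range(n)]
reports = [krr(v, k, eps) for v in truth]
print(estimate_counts(reports, k, eps, n))
```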
Secure Single-Server Aggregation with (Poly)Logarithmic Overhead
TLDR
The first constructions for secure aggregation that achieve polylogarithmic communication and computation per client are presented, along with an application of secure aggregation to secure shuffling, which enables the first cryptographically secure instantiation of the shuffle model of differential privacy.
Practical Secure Aggregation for Federated Learning on User-Held Data
TLDR
This work considers training a deep neural network in the Federated Learning model, using distributed stochastic gradient descent across user-held training data on mobile devices, wherein Secure Aggregation protects each user's model gradient.
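Below is a minimal sketch of one such training round for a linear model, in which the server receives only the sum of client updates (standing in for the output of Secure Aggregation) and never an individual gradient. The data, model, and learning rate are illustrative assumptions.

```python
# One round of federated SGD for a linear least-squares model. Only the sum of
# the per-client gradients reaches the server, mimicking what Secure Aggregation
# would reveal; everything else here is illustrative.
import numpy as np

rng = np.random.default_rng(1)
dim, n_clients, lr = 5, 10, 0.1
w_global = np.zeros(dim)

# Each client holds a small private dataset (X_i, y_i).
client_data = [
    (rng.normal(size=(20, dim)), rng.normal(size=20)) for _ in range(n_clients)
]

def local_gradient(w, X, y):
    """Gradient of mean squared error on one client's local data."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

# Clients compute updates locally; only their sum reaches the server.
updates = [local_gradient(w_global, X, y) for X, y in client_data]
aggregate = np.sum(updates, axis=0)          # what secure aggregation would reveal
w_global -= lr * aggregate / n_clients       # server applies the averaged update
print(w_global)
```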
Federated Learning with Autotuned Communication-Efficient Secure Aggregation
TLDR
A recipe for auto-tuning communication-efficient secure aggregation is developed, based on two specific properties: the predictable distribution of vector entries after a random rotation, and the modular wrapping inherent in secure aggregation.
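A small sketch of those two properties follows, using a dense random orthogonal rotation in place of the structured randomized Hadamard rotation used in practice; the constants and the quantization scheme below are illustrative assumptions.

```python
# Illustrative sketch: (1) after a shared random rotation the entries of a
# client vector follow a predictable (roughly Gaussian) distribution, so
# (2) the fixed-point range used for secure aggregation's modular arithmetic
# can be sized tightly around that prediction, with rare outliers wrapping.
import numpy as np

rng = np.random.default_rng(2)
dim = 256

# Shared rotation, reproducible from a common seed on clients and server.
Q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))

x = rng.uniform(-1.0, 1.0, size=dim)   # a client update with uneven entries
x_rot = Q @ x                          # entries now concentrate near zero

# Predicted per-entry scale after rotation: ||x|| / sqrt(dim).
pred_scale = np.linalg.norm(x) / np.sqrt(dim)
print("predicted scale:", pred_scale, "empirical std:", x_rot.std())

# Quantize into a small modular range sized from the predicted scale; rare
# outliers wrap around modulo `modulus`, matching secure aggregation's arithmetic.
clip = 4.0 * pred_scale
modulus = 2 ** 10
q = np.round((x_rot + clip) / (2 * clip) * (modulus - 1)).astype(np.int64) % modulus
print(q[:8])
```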
Composable probabilistic inference with BLAISE
TLDR
This thesis presents BLAISE, a novel framework for composable probabilistic modeling and inference, designed to address limitations in the ability to programmatically manipulate models and to effectively implement inference, and describes each of the components of the BLAISE modeling framework.
Portable Reputations with EgoSphere
TLDR
The design and proof-of-concept implementation of EgoSphere, a system for portable Internet reputations, is outlined, targeted specifically at bulletin-board-style Internet services.
...
...