Publications
Federated Learning: Strategies for Improving Communication Efficiency
TLDR: Federated Learning is a machine learning setting where the goal is to train a high-quality centralized model while training data remains distributed over a large number of clients, each with unreliable and relatively slow network connections.
Advances and Open Problems in Federated Learning
TLDR: Federated learning (FL) is a machine learning setting where many clients (e.g., mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server, while keeping the training data decentralized.
Orthogonal Random Features
TLDR: We present an intriguing discovery related to Random Fourier Features: replacing multiplication by a random Gaussian matrix with a properly scaled random orthogonal matrix significantly decreases kernel approximation error.
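The construction behind that summary can be sketched in a few lines. This is a minimal NumPy illustration of the idea (a single square block, Gaussian RBF kernel), not the paper's implementation; the function name and parameters are illustrative.

```python
import numpy as np

def orthogonal_random_features(X, gamma=1.0, seed=0):
    """Sketch of Orthogonal Random Features for the Gaussian kernel
    exp(-gamma * ||x - y||^2 / 2).

    Classic Random Fourier Features project with an i.i.d. Gaussian
    matrix; here that matrix is replaced by a uniformly random
    orthogonal matrix whose rows are rescaled to chi-distributed norms.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # QR decomposition of a Gaussian matrix yields a random orthogonal Q.
    G = rng.standard_normal((d, d))
    Q, _ = np.linalg.qr(G)
    # Rescale rows so their norms are distributed like Gaussian rows.
    S = np.sqrt(rng.chisquare(d, size=d))
    W = np.sqrt(gamma) * (S[:, None] * Q)
    Z = X @ W.T
    # Fourier feature map [cos(Wx), sin(Wx)] / sqrt(d).
    return np.concatenate([np.cos(Z), np.sin(Z)], axis=1) / np.sqrt(d)
```

The inner product of two such feature vectors approximates the Gaussian kernel, with lower error than the i.i.d. Gaussian projection of the same dimension.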
SCAFFOLD: Stochastic Controlled Averaging for Federated Learning
TLDR: We propose a new Stochastic Controlled Averaging algorithm (SCAFFOLD) which uses control variates (variance reduction) to correct for the "client-drift" in its local updates.
SCAFFOLD: Stochastic Controlled Averaging for On-Device Federated Learning
TLDR: This paper presents a new Stochastic Controlled Averaging algorithm (SCAFFOLD) which uses control variates to reduce the drift between different clients.
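The control-variate correction described in both SCAFFOLD entries can be sketched as a small single-machine simulation. This is a simplified round with full client participation; the `(x - y) / (local_steps * lr)` control-variate update corresponds to what the paper calls "Option II", but the function name and parameters here are illustrative, not the authors' code.

```python
import numpy as np

def scaffold_round(x, client_grads, c_global, c_locals, lr=0.1, local_steps=5):
    """One simplified SCAFFOLD round with full client participation.

    client_grads[i] is client i's gradient oracle; c_global / c_locals
    are the server and client control variates that correct client drift.
    """
    dx, dc = [], []
    for i, grad in enumerate(client_grads):
        y = x.copy()
        for _ in range(local_steps):
            # Drift-corrected local step: g_i(y) - c_i + c.
            y -= lr * (grad(y) - c_locals[i] + c_global)
        # Control-variate update ("Option II" form).
        c_new = c_locals[i] - c_global + (x - y) / (local_steps * lr)
        dx.append(y - x)
        dc.append(c_new - c_locals[i])
        c_locals[i] = c_new
    # Server averages the model deltas and control-variate deltas.
    return x + np.mean(dx, axis=0), c_global + np.mean(dc, axis=0)
```

On two quadratic clients with different optima, many local steps of plain averaging drift toward each client's own optimum; the control variates cancel that drift, so the iterates converge to the optimum of the average objective.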
cpSGD: Communication-efficient and differentially-private distributed SGD
TLDR: Distributed stochastic gradient descent is an important subroutine in distributed learning.
Distributed Mean Estimation with Limited Communication
TLDR: We study communication-efficient algorithms for distributed mean estimation using a stochastic rounding approach and a structured random rotation.
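The stochastic rounding step can be sketched as follows. This is a minimal illustration of the quantizer alone, without the structured random rotation the paper applies beforehand to reduce quantization variance; the function name is illustrative.

```python
import numpy as np

def stochastic_round(x, levels=2, rng=None):
    """Unbiased stochastic rounding of a vector onto `levels` uniform
    levels spanning [x.min(), x.max()]; a client would transmit the
    level indices plus the two endpoints.

    E[output] = x, which is what makes averaging quantized vectors an
    unbiased estimate of the true mean.
    """
    if rng is None:
        rng = np.random.default_rng()
    lo, hi = x.min(), x.max()
    if hi == lo:
        return x.copy()
    s = (x - lo) / (hi - lo) * (levels - 1)        # map to [0, levels-1]
    base = np.floor(s)
    q = base + (rng.random(x.shape) < (s - base))  # round up w.p. frac(s)
    return lo + q * (hi - lo) / (levels - 1)
```

With `levels=2` each coordinate costs one bit (plus the shared endpoints), and averaging many quantized copies recovers the input in expectation.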
Agnostic Federated Learning
TLDR: We propose a new framework of agnostic federated learning, where the centralized model is optimized for any target distribution formed by a mixture of the client distributions.
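The agnostic objective is a minimax problem: minimize over the model w the worst case over mixture weights lam in the probability simplex of sum_k lam_k * L_k(w). A small sketch of that objective follows, assuming plain gradient descent on w and multiplicative-weights ascent on lam; the paper's actual optimization procedure and guarantees may differ.

```python
import numpy as np

def agnostic_fl(loss_fns, grad_fns, w0, rounds=500, lr_w=0.1, lr_lam=0.5):
    """Minimax sketch of agnostic federated learning:
        min_w  max_{lam in simplex}  sum_k lam_k * L_k(w).

    Gradient descent on the model w; a multiplicative-weights (mirror
    ascent) step keeps the mixture weights lam on the simplex.
    """
    K = len(loss_fns)
    w = np.asarray(w0, dtype=float).copy()
    lam = np.full(K, 1.0 / K)
    for _ in range(rounds):
        grads = np.array([g(w) for g in grad_fns])
        losses = np.array([l(w) for l in loss_fns])
        w -= lr_w * (lam @ grads)       # descend on the lam-weighted loss
        lam *= np.exp(lr_lam * losses)  # upweight the worst-off clients
        lam /= lam.sum()
    return w, lam
```

On two symmetric quadratic clients the solution equalizes the client losses, landing at the midpoint with lam near uniform, rather than at the minimizer of any single client's mixture.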
A Unified Maximum Likelihood Approach for Estimating Symmetric Properties of Discrete Distributions
TLDR: We show that a single, simple, plug-in estimator, profile maximum likelihood (PML), is sample-competitive for all symmetric properties, and in particular is asymptotically sample-optimal for all the above properties.
Near-Optimal-Sample Estimators for Spherical Gaussian Mixtures
TLDR: We derive the first sample-efficient polynomial-time estimator for high-dimensional spherical Gaussian mixtures in d dimensions using spectral reasoning.