Federated Learning is a machine learning setting where the goal is to train a high-quality centralized model while training data remains distributed over a large number of clients, each with unreliable and relatively slow network connections.

Federated learning (FL) is a machine learning setting where many clients (e.g. mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server, while keeping the training data decentralized.
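The two abstracts above describe the standard federated setup, whose canonical aggregation rule is Federated Averaging: clients run local SGD from the current global model, and the server averages the results weighted by dataset size. A minimal sketch on toy noiseless linear-regression clients (the function name `fed_avg` and all hyperparameters are illustrative, not taken from either paper):

```python
import numpy as np

def fed_avg(global_w, client_datasets, lr=0.1, local_steps=5):
    """One round of Federated Averaging on toy linear-regression clients:
    each client runs local SGD from the global weights, then the server
    averages the resulting models weighted by client dataset size."""
    updates, sizes = [], []
    for X, y in client_datasets:
        w = global_w.copy()
        for _ in range(local_steps):
            grad = X.T @ (X @ w - y) / len(y)  # squared-loss gradient
            w -= lr * grad
        updates.append(w)
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Four synthetic clients sharing one true model (noiseless for simplicity).
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
clients = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(30):
    w = fed_avg(w, clients)
# w converges toward true_w
```

In this homogeneous, noiseless setting every client's local optimum coincides with the global one, so plain averaging converges; the heterogeneous case is exactly where the drift corrections discussed below become necessary.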

We present an intriguing discovery related to Random Fourier Features: replacing multiplication by a random Gaussian matrix with a properly scaled random orthogonal matrix significantly decreases kernel approximation error.
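The construction described above (Orthogonal Random Features) can be sketched as follows: take the orthonormal factor Q from the QR decomposition of a Gaussian matrix, then rescale each row by an independent chi-distributed norm so that each row is still marginally Gaussian. This is a simplified sketch assuming the feature dimension equals the input dimension; the helper names are illustrative.

```python
import numpy as np

def orthogonal_random_matrix(d, rng):
    """Scaled random orthogonal matrix for orthogonal random features.
    Rows are mutually orthogonal, and each row is rescaled by an
    independent chi(d)-distributed norm so its marginal law matches a
    standard Gaussian row."""
    G = rng.normal(size=(d, d))
    Q, _ = np.linalg.qr(G)
    # Row norms of an independent Gaussian matrix are chi(d) distributed.
    S = np.linalg.norm(rng.normal(size=(d, d)), axis=1)
    return S[:, None] * Q

def rff(X, W, b):
    """Random Fourier feature map z(x) = sqrt(2/D) * cos(Wx + b)."""
    D = W.shape[0]
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

rng = np.random.default_rng(1)
d = 8
x, y = rng.normal(size=d), rng.normal(size=d)
W = orthogonal_random_matrix(d, rng)
b = rng.uniform(0.0, 2.0 * np.pi, size=d)

# Monte Carlo estimate of the Gaussian kernel exp(-||x - y||^2 / 2);
# with D = d = 8 features the estimate is noisy, but swapping the
# Gaussian matrix for W reduces its variance.
approx = float(rff(x[None], W, b) @ rff(y[None], W, b).T)
exact = np.exp(-np.linalg.norm(x - y) ** 2 / 2)
```

Because the rows of `W` are orthogonal but chi-scaled, `z(x)` remains an unbiased estimator of the Gaussian kernel while the correlations induced by orthogonality shrink its variance.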

We propose a new Stochastic Controlled Averaging algorithm (SCAFFOLD) which uses control variates (variance reduction) to correct for the "client-drift" in its local updates.

This paper presents a new Stochastic Controlled Averaging algorithm (SCAFFOLD) which uses control variates to reduce the drift between different clients.
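The control-variate correction described in the two SCAFFOLD abstracts can be sketched on toy quadratic clients f_i(w) = ||w - a_i||^2 / 2 with full participation. Each local step subtracts the client's control variate c_i and adds the server's c, so local trajectories are steered toward the average gradient direction; the control variates are then refreshed with the paper's "Option II" rule. The function name and hyperparameters are illustrative.

```python
import numpy as np

def scaffold_round(w, c, client_targets, c_locals, lr=0.1, K=5):
    """One round of a SCAFFOLD-style scheme on toy quadratic clients
    f_i(w) = ||w - a_i||^2 / 2, with all clients participating.
    Each local SGD step is corrected by (c - c_i), the estimated drift
    between the average gradient and this client's gradient."""
    new_ws, new_cs = [], []
    for a_i, c_i in zip(client_targets, c_locals):
        w_i = w.copy()
        for _ in range(K):
            grad = w_i - a_i                  # gradient of the toy quadratic
            w_i -= lr * (grad - c_i + c)      # drift-corrected local step
        # "Option II" control-variate update from the paper.
        new_cs.append(c_i - c + (w - w_i) / (K * lr))
        new_ws.append(w_i)
    return np.mean(new_ws, axis=0), np.mean(new_cs, axis=0), new_cs

# Two heterogeneous clients; the global optimum is the mean of their targets.
targets = [np.array([1.0, 3.0]), np.array([3.0, -1.0])]
w, c = np.zeros(2), np.zeros(2)
c_locals = [np.zeros(2), np.zeros(2)]
for _ in range(100):
    w, c, c_locals = scaffold_round(w, c, targets, c_locals)
# w converges to the mean of the client targets
```

Without the correction terms, multiple local steps pull each client toward its own optimum a_i; the control variates cancel this per-client bias so the averaged iterate heads to the global optimum.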

We study communication efficient algorithms for distributed mean estimation using a stochastic rounding approach and a structured random rotation.
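The stochastic rounding step mentioned above can be sketched as unbiased quantization to a uniform grid: each coordinate is rounded up or down to a neighboring level with probabilities chosen so the expected value equals the input. This sketch covers only the rounding, not the structured random rotation; the function name is illustrative.

```python
import numpy as np

def stochastic_round(x, x_min, x_max, levels, rng):
    """Unbiased stochastic quantization of x to `levels` evenly spaced
    values in [x_min, x_max]. Each coordinate rounds up with probability
    equal to its fractional position between levels, so E[q(x)] = x."""
    step = (x_max - x_min) / (levels - 1)
    pos = (x - x_min) / step           # position in units of grid steps
    low = np.floor(pos)
    p_up = pos - low                   # rounding-up probability keeps the mean
    q = low + (rng.uniform(size=x.shape) < p_up)
    return x_min + q * step

rng = np.random.default_rng(0)
x = np.clip(rng.normal(size=1000), -4.0, 4.0)
samples = np.stack([stochastic_round(x, -4.0, 4.0, 16, rng)
                    for _ in range(2000)])
# averaging many independent quantizations recovers x (unbiasedness),
# which is what makes the quantized messages usable for mean estimation
```

Each quantized vector needs only 4 bits per coordinate here (16 levels), and the server's average of unbiased quantizations is itself an unbiased estimate of the true mean.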

We propose a new framework of agnostic federated learning, where the centralized model is optimized for any target distribution formed by a mixture of the client distributions.
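Optimizing for the worst mixture of client distributions is a minimax problem: minimize over model weights the maximum over mixture weights of the mixture loss. A toy sketch of one simultaneous update, with gradient descent on the model and exponentiated-gradient ascent on the mixture weights (this is an illustrative scheme in the spirit of the framework, not the paper's exact algorithm):

```python
import numpy as np

def agnostic_fl_step(w, lam, targets, lr_w=0.1, lr_lam=0.1):
    """One simultaneous update for the toy minimax objective
    min_w max_lam sum_k lam_k * ||w - a_k||^2 / 2:
    descend on w under the current mixture lam, then shift lam
    toward the clients with larger loss via exponentiated gradients."""
    losses = np.array([0.5 * np.sum((w - a) ** 2) for a in targets])
    grad = sum(l_k * (w - a) for l_k, a in zip(lam, targets))
    w = w - lr_w * grad
    lam = lam * np.exp(lr_lam * losses)    # multiplicative-weights ascent
    return w, lam / lam.sum()

# Two symmetric clients: the minimax-optimal model is the midpoint w = 0.
targets = [np.array([1.0]), np.array([-1.0])]
w, lam = np.array([0.5]), np.array([0.5, 0.5])
for _ in range(1000):
    w, lam = agnostic_fl_step(w, lam, targets)
# w approaches 0 and lam approaches the uniform mixture
```

The exponentiated-gradient step keeps `lam` on the probability simplex automatically, which is why multiplicative weights is a natural choice for the adversarial mixture player.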

We show that a single, simple, plug-in estimator, profile maximum likelihood (PML), is sample competitive for all symmetric properties, and in particular is asymptotically sample-optimal for all the above properties.

We derive the first sample-efficient polynomial-time estimator for high-dimensional spherical Gaussian mixtures in d dimensions using spectral reasoning.