Corpus ID: 239616179

Guess what? You can boost Federated Learning for free

@article{Dhasade2021GuessWY,
  title={Guess what? You can boost Federated Learning for free},
  author={Akash Balasaheb Dhasade and Anne-Marie Kermarrec and Rafael Pires},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.11486}
}
Federated learning (FL) exploits the computation power of edge devices, typically mobile phones, while addressing privacy by letting data stay where it is produced. FL has been used by major service providers to improve item recommendations, virtual keyboards and text auto-completion services. While appealing, FL is hampered by multiple factors: (i) differing capabilities of participating clients (e.g., computing power, memory and network connectivity); (ii) strict training…
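To make the FL setting concrete, here is a minimal sketch of one synchronous training round in the FedAvg style (McMahan et al., referenced below): clients train locally on data that never leaves the device, and the server only aggregates model updates. The function names and the linear-regression local loss are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fedavg_round(global_w, clients, lr=0.1, local_epochs=1):
    """One synchronous FL round: each client trains on its private
    shard; the server sees only model weights, never raw data."""
    updates, sizes = [], []
    for X, y in clients:                      # (X, y) stays on-device
        w = global_w.copy()
        for _ in range(local_epochs):         # local SGD, linear model
            w -= lr * X.T @ (X @ w - y) / len(y)
        updates.append(w)
        sizes.append(len(y))
    total = sum(sizes)                        # weight by local data size
    return sum(n / total * w for w, n in zip(updates, sizes))
```

Client heterogeneity hits exactly this loop: slow or poorly connected clients delay the synchronous aggregation step, which is the bottleneck the abstract points to.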


References

Showing 1-10 of 25 references
Characterizing Impacts of Heterogeneity in Federated Learning upon Large-Scale Smartphone Data
TLDR
The first empirical study to characterize the impacts of heterogeneity in federated learning; the authors build a heterogeneity-aware FL platform that complies with the standard FL protocol while taking heterogeneity into account, and the results suggest that FL algorithm designers should account for heterogeneity during evaluation.
On the Impact of Device and Behavioral Heterogeneity in Federated Learning
TLDR
An extensive empirical study spanning close to 1.5K unique configurations on five popular FL benchmarks shows that device and behavioral heterogeneity have a major impact on both model performance and fairness, shedding light on the importance of considering heterogeneity in FL system design.
Applied Federated Learning: Improving Google Keyboard Query Suggestions
TLDR
This paper uses federated learning in a commercial, global-scale setting to train, evaluate and deploy a model to improve virtual keyboard search suggestion quality without direct access to the underlying user data.
Communication-Efficient Learning of Deep Networks from Decentralized Data
TLDR
This work presents a practical method for the federated learning of deep networks based on iterative model averaging, and conducts an extensive empirical evaluation, considering five different model architectures and four datasets.
Practical Secure Aggregation for Privacy-Preserving Machine Learning
TLDR
This protocol allows a server to compute the sum of large, user-held data vectors from mobile devices in a secure manner, and can be used, for example, in a federated learning setting, to aggregate user-provided model updates for a deep neural network.
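The heart of that protocol is easiest to see in a toy form: pairs of clients agree on a shared random mask, one adds it and the other subtracts it, so all masks cancel when the server sums the vectors. The sketch below shows only this cancellation idea, assuming pre-shared seeds per client pair; the actual Bonawitz et al. protocol additionally handles dropouts and key agreement via secret sharing.

```python
import numpy as np

def masked_update(update, client_id, peer_ids, dim, seeds):
    """Each pair (i, j) derives a common mask from a shared seed;
    the lower id adds it, the higher id subtracts it, so all masks
    cancel when the server sums the masked vectors."""
    masked = update.copy()
    for peer in peer_ids:
        rng = np.random.default_rng(seeds[frozenset((client_id, peer))])
        mask = rng.standard_normal(dim)
        masked += mask if client_id < peer else -mask
    return masked

# Example: three clients; the server learns only the sum of updates.
dim = 4
updates = {i: np.random.default_rng(i).standard_normal(dim) for i in range(3)}
seeds = {frozenset(p): hash(p) % 2**32 for p in [(0, 1), (0, 2), (1, 2)]}
masked = [masked_update(updates[i], i, [j for j in range(3) if j != i],
                        dim, seeds) for i in range(3)]
assert np.allclose(sum(masked), sum(updates.values()))
```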
LEAF: A Benchmark for Federated Settings
TLDR
LEAF is proposed, a modular benchmarking framework for learning in federated settings that includes a suite of open-source federated datasets, a rigorous evaluation framework, and a set of reference implementations, all geared towards capturing the obstacles and intricacies of practical federated environments.
A Field Guide to Federated Optimization
TLDR
This paper provides recommendations and guidelines on formulating, designing, evaluating and analyzing federated optimization algorithms through concrete examples and practical implementation, with a focus on conducting effective simulations to infer real-world performance.
Federated Learning: Challenges, Methods, and Future Directions
TLDR
The unique characteristics and challenges of federated learning are discussed, a broad overview of current approaches is provided, and several directions of future work relevant to a wide range of research communities are outlined.
Federated Optimization in Heterogeneous Networks
TLDR
This work introduces a framework, FedProx, to tackle heterogeneity in federated networks, and provides convergence guarantees for this framework when learning over data from non-identical distributions (statistical heterogeneity), and while adhering to device-level systems constraints by allowing each participating device to perform a variable amount of work.
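Concretely, FedProx changes only the local objective, to F_k(w) + (mu/2) * ||w - w_t||^2, so clients that perform variable amounts of local work stay anchored to the global model w_t. A minimal sketch of the modified local step, with an illustrative linear-regression loss:

```python
import numpy as np

def fedprox_local_train(w_global, X, y, mu=0.01, lr=0.1, steps=10):
    """Local SGD on F_k(w) + (mu/2) * ||w - w_global||^2.
    The extra gradient term mu * (w - w_global) pulls the local
    model back toward the current global model."""
    w = w_global.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)   # local loss gradient
        grad += mu * (w - w_global)         # proximal term gradient
        w -= lr * grad
    return w
```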
Privacy-Preserving Deep Learning via Additively Homomorphic Encryption
TLDR
This work revisits the previous work by Shokri and Shmatikov (ACM CCS 2015) and builds an enhanced system with the following properties: no information is leaked to the server, and accuracy is kept intact compared with an ordinary deep learning system trained over the combined dataset.
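The property this relies on is additive homomorphism: the server can sum ciphertexts without decrypting them. A small sketch using the third-party `phe` (python-paillier) package, whose Paillier ciphertexts support addition; encrypting individual scalars as done here is illustrative and far more costly than an optimized scheme:

```python
from phe import paillier  # pip install phe

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# Each client encrypts its (scalar) gradient with the shared public key.
client_grads = [0.12, -0.05, 0.31]
ciphertexts = [public_key.encrypt(g) for g in client_grads]

# The server sums ciphertexts without ever seeing plaintext gradients.
encrypted_sum = ciphertexts[0] + ciphertexts[1] + ciphertexts[2]

# Only the key holder can decrypt the aggregate.
assert abs(private_key.decrypt(encrypted_sum) - sum(client_grads)) < 1e-9
```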