Corpus ID: 220831008

Flower: A Friendly Federated Learning Research Framework

@article{Beutel2020FlowerAF,
  title={Flower: A Friendly Federated Learning Research Framework},
  author={Daniel J. Beutel and Taner Topal and Akhil Mathur and Xinchi Qiu and Titouan Parcollet and N. Lane},
  journal={ArXiv},
  year={2020},
  volume={abs/2007.14390}
}

Federated Learning (FL) has emerged as a promising technique for edge devices to collaboratively learn a shared prediction model, while keeping their training data on the device, thereby decoupling the ability to do machine learning from the need to store the data in the cloud. However, FL is difficult to implement and deploy in practice, considering the heterogeneity in mobile devices, e.g., different programming languages, frameworks, and hardware accelerators. Although there are a few…
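
To make the framework described above concrete, the sketch below shows what a minimal Flower client can look like in Python. It is an illustrative sketch only: the model object and the train()/test() helpers are hypothetical placeholders, the server address is an assumption, and the exact method signatures have shifted across Flower releases.

    import flwr as fl

    class FlowerClient(fl.client.NumPyClient):
        """Minimal client sketch; model, train(), test() are hypothetical."""

        def __init__(self, model, train_data, test_data):
            self.model = model              # e.g., a Keras model
            self.train_data = train_data    # local data never leaves the device
            self.test_data = test_data

        def get_parameters(self, config):
            # Send the current local weights (a list of NumPy ndarrays).
            return self.model.get_weights()

        def fit(self, parameters, config):
            # Receive global weights, train locally, return the updated weights.
            self.model.set_weights(parameters)
            num_examples = train(self.model, self.train_data)  # hypothetical helper
            return self.model.get_weights(), num_examples, {}

        def evaluate(self, parameters, config):
            self.model.set_weights(parameters)
            loss, acc, n = test(self.model, self.test_data)    # hypothetical helper
            return loss, n, {"accuracy": acc}

    # model, train_data, test_data are assumed to be defined elsewhere; the
    # server side runs fl.server.start_server() on the address given here.
    fl.client.start_numpy_client(server_address="127.0.0.1:8080",
                                 client=FlowerClient(model, train_data, test_data))
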
Citations

On-device Federated Learning with Flower
TLDR: This paper presents an exploration of on-device FL on various smartphones and embedded devices using the Flower framework, evaluates the associated system costs, and discusses how this quantification could be used to design more efficient FL algorithms.

FjORD: Fair and Accurate Federated Learning under heterogeneous targets with Ordered Dropout
TLDR: This work introduces Ordered Dropout, a mechanism that achieves an ordered, nested representation of knowledge in neural networks and enables the extraction of lower-footprint submodels without the need for retraining, and applies it to FL in a framework called FjORD.

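To illustrate the nesting idea, the sketch below extracts a width-p prefix submodel from a stack of dense layers by plain slicing; this is an illustrative reconstruction under stated assumptions, not the FjORD reference implementation (biases and convolutional layers are omitted for brevity).

    import numpy as np

    def extract_submodel(weights, p):
        """Return the nested width-p submodel from a list of dense weight matrices.

        With ordered (nested) training, the first ceil(p * n) units of each hidden
        layer form a self-contained model, so extraction needs no retraining.
        """
        sub, in_dim = [], weights[0].shape[0]          # keep the full input width
        for i, W in enumerate(weights):
            last = (i == len(weights) - 1)
            out_dim = W.shape[1] if last else max(1, int(np.ceil(p * W.shape[1])))
            sub.append(W[:in_dim, :out_dim])           # keep the leading units only
            in_dim = out_dim                           # next layer sees pruned width
        return sub

    # A 784-64-10 model at p = 0.5 yields a 784-32-10 prefix submodel.
    full = [np.random.randn(784, 64), np.random.randn(64, 10)]
    print([w.shape for w in extract_submodel(full, p=0.5)])  # [(784, 32), (32, 10)]
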
FedNLP: A Research Platform for Federated Learning in Natural Language Processing
TLDR: Preliminary experiments with FedNLP reveal a large performance gap between learning on decentralized and centralized datasets, opening intriguing and exciting future research directions aimed at developing FL methods suited to NLP tasks.

Towards General-purpose Infrastructure for Protecting Scientific Data Under Study
The scientific method presents a key challenge to privacy because it requires many samples to support a claim. When samples are commercially valuable or privacy-sensitive enough, their owners have…

Syft 0.5: A Platform for Universally Deployable Structured Transparency
TLDR: This paper presents Syft, a general-purpose framework that combines a core group of privacy-enhancing technologies to facilitate a universal set of structured transparency systems, and evaluates the proposed flow with respect to its provision of the core structured transparency principles.

End-to-End Speech Recognition from Federated Acoustic Models
TLDR: This paper presents the first empirical study of an attention-based sequence-to-sequence end-to-end ASR model under three aggregation weighting strategies, standard FedAvg, loss-based aggregation, and a novel word error rate (WER)-based aggregation, compared in two realistic FL scenarios.

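For intuition, here is a sketch of what such weighting strategies can look like; the inverse-loss and inverse-WER forms below are plausible illustrative choices, not necessarily the exact formulas used in the paper.

    def aggregation_weights(clients, strategy="fedavg"):
        """Normalized per-client aggregation weights under three strategies.

        clients: list of dicts with keys "n" (local example count), "loss",
        and "wer"; the field names and inverse forms are assumptions.
        """
        if strategy == "fedavg":
            raw = [c["n"] for c in clients]            # weight by dataset size
        elif strategy == "loss":
            raw = [1.0 / c["loss"] for c in clients]   # lower loss -> more weight
        elif strategy == "wer":
            raw = [1.0 / c["wer"] for c in clients]    # lower WER -> more weight
        else:
            raise ValueError(f"unknown strategy: {strategy}")
        total = sum(raw)
        return [r / total for r in raw]
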
A Survey on Federated Learning and its Applications for Accelerating Industrial Internet of Things
TLDR: An FL-transformed manufacturing paradigm is presented, future research directions for FL are given, and possible immediate applications in the Industry 4.0 domain are discussed.

Advances and Open Problems in Federated Learning
TLDR: Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.

FedBN: Federated Learning on Non-IID Features via Local Batch Normalization
TLDR: This work proposes FedBN, an effective method that uses local batch normalization to alleviate feature shift before averaging models; in extensive experiments it outperforms both classical FedAvg and the state-of-the-art method for non-IID data (FedProx).

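The mechanism reduces to one change in server-side aggregation: batch-normalization parameters stay local and everything else is averaged. The PyTorch sketch below illustrates this; matching BN layers by the substring "bn" is an assumption that depends on how the model names its modules.

    import torch

    def fedbn_average(client_states):
        """Average client state_dicts while skipping BatchNorm entries (FedBN).

        BN weights, biases, and running statistics stay on each client, where
        they absorb the local feature shift; only the remaining parameters are
        averaged into the global model.
        """
        global_state = {}
        for key in client_states[0]:
            if "bn" in key:                   # naming convention is an assumption
                continue
            stacked = torch.stack([s[key].float() for s in client_states])
            global_state[key] = stacked.mean(dim=0)
        return global_state

    # Clients then reload with model.load_state_dict(global_state, strict=False),
    # which leaves their local BN entries untouched.
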
FedLab: A Flexible Federated Learning Framework
TLDR: This paper presents FedLab, a flexible and modular FL framework based on PyTorch that provides functional interfaces and a series of baseline implementations, enabling researchers to implement their ideas quickly.


References

Showing 1-10 of 41 references.
LEAF: A Benchmark for Federated Settings
TLDR: LEAF is proposed, a modular benchmarking framework for learning in federated settings that includes a suite of open-source federated datasets, a rigorous evaluation framework, and a set of reference implementations, all geared towards capturing the obstacles and intricacies of practical federated environments.

Communication-Efficient Learning of Deep Networks from Decentralized Data
TLDR: This work presents a practical method for the federated learning of deep networks based on iterative model averaging, and conducts an extensive empirical evaluation considering five different model architectures and four datasets.

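The averaging step at the heart of this method (FedAvg) is compact enough to state directly; the sketch below is a minimal NumPy rendering of one aggregation round, with updates weighted by local dataset size.

    import numpy as np

    def fed_avg(client_weights, client_num_examples):
        """One round of Federated Averaging.

        client_weights: one entry per client, each a list of per-layer ndarrays.
        Each layer is the example-count-weighted mean of the client layers.
        """
        total = sum(client_num_examples)
        return [
            sum(w[layer] * (n / total)
                for w, n in zip(client_weights, client_num_examples))
            for layer in range(len(client_weights[0]))
        ]

    # Two clients holding 100 and 300 examples: the second contributes 3x more.
    a = [np.ones((2, 2))]
    b = [np.zeros((2, 2))]
    print(fed_avg([a, b], [100, 300])[0])  # entries equal 0.25
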
TensorFlow: A system for large-scale machine learning
TLDR: The TensorFlow dataflow model is described, and the compelling performance that TensorFlow achieves for several real-world applications is demonstrated.

Federated Optimization in Heterogeneous Networks
TLDR: This work introduces FedProx, a framework to tackle heterogeneity in federated networks, providing convergence guarantees when learning over data from non-identical distributions (statistical heterogeneity) while adhering to device-level systems constraints by allowing each participating device to perform a variable amount of work.

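FedProx's key change to local training is a proximal term, (mu / 2) * ||w - w_global||^2, added to each client's loss so that variable amounts of local work cannot drift too far from the global model. Below is a minimal PyTorch sketch of one local step under a standard training setup; setting mu = 0 recovers plain FedAvg-style local training.

    import torch

    def fedprox_local_step(model, global_params, loss_fn, batch, optimizer, mu=0.01):
        """One local optimization step with the FedProx proximal term.

        global_params: detached copies of the current global weights, in the
        same order as model.parameters().
        """
        x, y = batch
        optimizer.zero_grad()
        task_loss = loss_fn(model(x), y)
        # Proximal penalty pulls local weights toward the global model.
        prox = sum((w - wg).pow(2).sum()
                   for w, wg in zip(model.parameters(), global_params))
        (task_loss + 0.5 * mu * prox).backward()
        optimizer.step()
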
Towards Federated Learning at Scale: System Design
TLDR: This paper describes a scalable production system for federated learning on mobile devices, built on TensorFlow; it presents the resulting high-level design and sketches some of the challenges and their solutions.

Occlumency: Privacy-preserving Remote Deep-learning Inference Using SGX
TLDR: This paper presents Occlumency, a novel cloud-driven solution implemented on Caffe and designed to protect user privacy without compromising the benefit of using powerful cloud resources, along with a suite of novel techniques that accelerate DL inference inside an enclave with a limited memory size.

Horovod: fast and easy distributed deep learning in TensorFlow
TLDR: Horovod is an open-source library that addresses both obstacles to scaling: it employs efficient inter-GPU communication via ring reduction and requires only a few lines of modification to user code, enabling faster, easier distributed training in TensorFlow.

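The "few lines of modification" claim is easiest to see in code. The sketch below shows typical Horovod usage with the Keras-style API (newer than the API in the original paper); the model choice and hyperparameters are placeholders.

    import tensorflow as tf
    import horovod.tensorflow.keras as hvd

    hvd.init()  # one process per GPU, launched e.g. via: horovodrun -np 4 train.py

    model = tf.keras.applications.ResNet50(weights=None)

    # Wrap the optimizer so gradients are averaged across workers with ring
    # all-reduce; scale the learning rate by the number of workers.
    opt = hvd.DistributedOptimizer(tf.keras.optimizers.SGD(0.01 * hvd.size()))
    model.compile(loss="sparse_categorical_crossentropy", optimizer=opt)

    # Broadcast rank 0's initial weights so all workers start in sync.
    callbacks = [hvd.callbacks.BroadcastGlobalVariablesCallback(0)]
    # model.fit(train_dataset, callbacks=callbacks, epochs=...)
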
Highly Scalable Deep Learning Training System with Mixed-Precision: Training ImageNet in Four Minutes
TLDR: This work builds a highly scalable deep-learning training system for dense GPU clusters with three main contributions: a mixed-precision training method that significantly improves the training throughput of a single GPU without losing accuracy, an optimization approach for extremely large mini-batch sizes that can train CNN models on the ImageNet dataset without loss of accuracy, and highly optimized all-reduce algorithms.

Analyzing Federated Learning through an Adversarial Lens
TLDR: This work explores the threat of model poisoning attacks on federated learning initiated by a single, non-colluding malicious agent whose adversarial objective is to cause the model to misclassify a set of chosen inputs with high confidence.

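For intuition on why a single agent can matter, consider explicit boosting: if the server averages updates, a malicious client can scale its update so the averaging no longer dilutes it. The sketch below is a generic illustration of that idea; the uniform-averaging assumption is ours, and the paper's attack adds further stealth constraints.

    import numpy as np

    def boosted_malicious_update(malicious_update, num_clients):
        """Scale a poisoned model update to survive uniform averaging.

        With num_clients updates averaged uniformly, multiplying the
        adversarial update by num_clients roughly cancels the 1/num_clients
        dilution, letting one agent steer the global model.
        """
        return [num_clients * layer for layer in malicious_update]

    # e.g., with 10 clients the poisoned delta is boosted 10x before upload
    delta = [0.01 * np.random.randn(784, 10)]
    upload = boosted_malicious_update(delta, num_clients=10)
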
DeepEye: Resource Efficient Local Execution of Multiple Deep Vision Models using Wearable Commodity Hardware
TLDR: This paper presents a novel inference software pipeline that targets the local execution of multiple deep vision models (specifically, CNNs) by interleaving the execution of computation-heavy convolutional layers with the loading of memory-heavy fully-connected layers.