Corpus ID: 236912639

FedJAX: Federated learning simulation with JAX

@article{Ro2021FedJAXFL,
  title={FedJAX: Federated learning simulation with JAX},
  author={Jae Ro and Ananda Theertha Suresh and Ke Wu},
  journal={ArXiv},
  year={2021},
  volume={abs/2108.02117}
}
Federated learning is a machine learning technique that enables training across decentralized data. Recently, federated learning has become an active area of research due to an increased focus on privacy and security. In light of this, a variety of open source federated learning libraries have been developed and released. We introduce FedJAX, a JAX-based open source library for federated learning simulations that emphasizes ease of use in research. With its simple primitives for implementing…
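To make concrete what a federated learning simulation computes, the sketch below implements one round of federated averaging (FedAvg) in plain JAX. It is a minimal illustration only: the function names and data layout are hypothetical, and it does not use or reflect FedJAX's actual API.

```python
# Minimal FedAvg round in plain JAX -- a sketch of what a federated
# simulation computes, NOT FedJAX's actual API. All names are hypothetical.
import jax
import jax.numpy as jnp

def loss(params, batch):
    # Linear model stands in for an arbitrary network.
    x, y = batch
    pred = x @ params["w"] + params["b"]
    return jnp.mean((pred - y) ** 2)

grad_fn = jax.grad(loss)

def client_update(params, batches, lr=0.1):
    # Each client runs local SGD starting from the broadcast server model.
    for batch in batches:
        grads = grad_fn(params, batch)
        params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)
    return params

def fed_avg_round(server_params, clients):
    # One round: broadcast, local training, then averaging of client models
    # (real FedAvg weights the average by each client's example count).
    client_params = [client_update(server_params, batches) for batches in clients]
    return jax.tree_util.tree_map(
        lambda *ps: jnp.mean(jnp.stack(ps), axis=0), *client_params)
```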

Citations

Breaking the centralized barrier for cross-device federated learning
TLDR
This work proposes a general algorithmic framework, MIME, which mitigates client drift and adapts an arbitrary centralized optimization algorithm such as momentum and Adam to the cross-device federated learning setting and proves that MIME is provably faster than any centralized method.
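A minimal sketch of Mime's central idea, assuming a momentum-based server optimizer: clients run local steps that reuse the server's optimizer statistics, held fixed for the round. The full algorithm also applies an SVRG-style gradient correction, omitted here; all names are illustrative.

```python
# Sketch of MIME's core idea: clients take local steps with SERVER-computed
# momentum held fixed during the round. Names are hypothetical; the full
# algorithm also applies an SVRG-style control-variate correction.
import jax

def mime_client_update(params, server_momentum, batches, grad_fn,
                       lr=0.1, beta=0.9):
    for batch in batches:
        g = grad_fn(params, batch)
        # Centralized momentum update rule, but the momentum state is the
        # server's and is NOT updated locally (this is what mitigates drift).
        step = jax.tree_util.tree_map(
            lambda m, gi: beta * m + (1.0 - beta) * gi, server_momentum, g)
        params = jax.tree_util.tree_map(lambda p, s: p - lr * s, params, step)
    return params
```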
FedScale: Benchmarking Model and System Performance of Federated Learning
TLDR
FedScale is a federated learning benchmarking suite with realistic datasets and a scalable runtime to enable reproducible FL research and highlight potential opportunities for heterogeneity-aware co-optimizations in FL.
Communication-Efficient Agnostic Federated Averaging
TLDR
A communication-efficient distributed algorithm called AGNOSTIC FEDERATED AVERAGING (AGNOSTICFEDAVG) is proposed to minimize the domain-agnostic objective proposed in [1], which is amenable to other private mechanisms such as secure aggregation.
FLIX: A Simple and Communication-Efficient Alternative to Local Methods in Federated Learning
TLDR
A new framework, FLIX, is introduced that takes into account the unique challenges brought by federated learning and enables practitioners to tap into the immense wealth of existing (potentially non-local) methods for distributed optimization.
A Field Guide to Federated Optimization
TLDR
This paper provides recommendations and guidelines on formulating, designing, evaluating and analyzing federated optimization algorithms through concrete examples and practical implementation, with a focus on conducting effective simulations to infer real-world performance.
FedLite: A Scalable Approach for Federated Learning on Resource-constrained Clients
TLDR
Extensive empirical evaluations on image and text benchmarks show that the proposed method can achieve up to 490× communication cost reduction with minimal drop in accuracy, and enables a desirable performance vs. communication trade-off.
ns3-fl: Simulating Federated Learning with ns-3
In recent years, there has been a spike in interest in the field of federated learning (FL). As a result, an increasing number of federated learning algorithms have been developed. Large-scale…
Towards Fair Federated Recommendation Learning: Characterizing the Inter-Dependence of System and Data Heterogeneity
TLDR
A data-driven approach is taken to show the inter-dependence of data and system heterogeneity in real-world data and its impact on the overall model quality and fairness, and shows that modeling realistic system-induced data heterogeneity is essential to achieving fair federated recommendation learning.
Scaling Language Model Size in Cross-Device Federated Learning
TLDR
This work is able to train a 21M parameter Transformer that achieves the same perplexity as a similarly sized LSTM with ~10× smaller client-to-server communication cost and 11% lower perplexity than smaller LSTMs commonly studied in the literature.
Improving Generalization in Federated Learning by Seeking Flat Minima
TLDR
This work investigates this behavior through the lens of the geometry of the loss landscape and the Hessian eigenspectrum, linking the model's lack of generalization capacity to the sharpness of the solution, and shows that training clients locally with Sharpness-Aware Minimization or its adaptive variant can substantially improve generalization in federated learning and help bridge the gap with centralized models.
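For reference, a generic Sharpness-Aware Minimization (SAM) step of the kind the paper runs on each client, sketched in JAX. This is the standard SAM update, not the authors' code, and the names are illustrative.

```python
# One Sharpness-Aware Minimization (SAM) step: ascend within a rho-ball to a
# worst-case perturbation of the weights, then descend using the gradient
# taken at that perturbed point. Generic sketch; names are hypothetical.
import jax
import jax.numpy as jnp

def sam_step(params, batch, loss_fn, lr=0.1, rho=0.05):
    g = jax.grad(loss_fn)(params, batch)
    # Global l2 norm of the gradient across the whole parameter tree.
    norm = jnp.sqrt(sum(jnp.sum(x ** 2) for x in jax.tree_util.tree_leaves(g)))
    eps = jax.tree_util.tree_map(lambda x: rho * x / (norm + 1e-12), g)
    g_sharp = jax.grad(loss_fn)(
        jax.tree_util.tree_map(jnp.add, params, eps), batch)
    return jax.tree_util.tree_map(lambda p, gs: p - lr * gs, params, g_sharp)
```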
…

References

Showing 1-10 of 51 references
FedML: A Research Library and Benchmark for Federated Machine Learning
TLDR
FedML is introduced, an open research library and benchmark that facilitates the development of new federated learning algorithms and fair performance comparisons, and can provide an efficient and reproducible means of developing and evaluating algorithms for the federated learning research community.
Adaptive Federated Optimization
TLDR
This work proposes federated versions of adaptive optimizers, including Adagrad, Adam, and Yogi, and analyzes their convergence in the presence of heterogeneous data for general nonconvex settings to highlight the interplay between client heterogeneity and communication efficiency.
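A minimal sketch of the server-side update in this scheme, assuming the FedAdam variant: the averaged client delta is treated as a pseudo-gradient and fed to a server-side Adam step (bias correction omitted). Names are illustrative, not the paper's code.

```python
# Sketch of the server update in adaptive federated optimization (FedAdam
# variant): the negated average client delta acts as a pseudo-gradient for a
# server-side Adam step. Hypothetical names; bias correction omitted.
import jax
import jax.numpy as jnp

def fed_adam_server_step(params, avg_client_delta, m, v,
                         lr=0.01, b1=0.9, b2=0.99, eps=1e-3):
    tmap = jax.tree_util.tree_map
    g = tmap(lambda d: -d, avg_client_delta)  # pseudo-gradient
    m = tmap(lambda mi, gi: b1 * mi + (1 - b1) * gi, m, g)
    v = tmap(lambda vi, gi: b2 * vi + (1 - b2) * gi ** 2, v, g)
    params = tmap(lambda p, mi, vi: p - lr * mi / (jnp.sqrt(vi) + eps),
                  params, m, v)
    return params, m, v
```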
LEAF: A Benchmark for Federated Settings
TLDR
LEAF is proposed, a modular benchmarking framework for learning in federated settings that includes a suite of open-source federated datasets, a rigorous evaluation framework, and a set of reference implementations, all geared towards capturing the obstacles and intricacies of practical federated environments.
IBM Federated Learning: an Enterprise Framework White Paper V0.1
TLDR
IBM Federated Learning enables data scientists to expand their scope from centralized to federated machine learning, minimizing the learning curve at the outset while also providing the flexibility to deploy to different compute environments and design custom fusion algorithms.
Agnostic Federated Learning
TLDR
This work proposes a new framework of agnostic federated learning, where the centralized model is optimized for any target distribution formed by a mixture of the client distributions, and shows that this framework naturally yields a notion of fairness.
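The agnostic objective can be stated compactly; the formulation below is the standard one from the literature, with notation assumed rather than taken from this page.

```latex
% Agnostic federated learning objective: optimize the model w against the
% worst-case mixture weights lambda over the p client distributions D_k,
% rather than against a fixed (e.g., uniform) mixture.
\min_{w} \; \max_{\lambda \in \Delta_p} \; \sum_{k=1}^{p} \lambda_k \, \mathcal{L}_{D_k}(w)
```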
Federated Machine Learning: Concept and Applications
TLDR
This work proposes building data networks among organizations based on federated mechanisms as an effective solution to allow knowledge to be shared without compromising user privacy.
On the Convergence of Federated Optimization in Heterogeneous Networks
TLDR
This work proposes and introduces FedProx, which is similar in spirit to FedAvg but more amenable to theoretical analysis, and describes the convergence of FedProx under a novel device similarity assumption.
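For concreteness, the proximal local objective that distinguishes FedProx from FedAvg, in standard notation (assumed, not quoted from this page):

```latex
% FedProx local step: client k minimizes its local loss F_k plus a proximal
% term that keeps the local iterate w close to the current server model w^t;
% mu controls how far local training may drift.
\min_{w} \; h_k(w; w^{t}) = F_k(w) + \frac{\mu}{2} \lVert w - w^{t} \rVert^{2}
```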
Applied Federated Learning: Improving Google Keyboard Query Suggestions
TLDR
This paper uses federated learning in a commercial, global-scale setting to train, evaluate and deploy a model to improve virtual keyboard search suggestion quality without direct access to the underlying user data.
SCAFFOLD: Stochastic Controlled Averaging for On-Device Federated Learning
TLDR
A new Stochastic Controlled Averaging algorithm (SCAFFOLD) which uses control variates to reduce the drift between different clients and it is proved that the algorithm requires significantly fewer rounds of communication and benefits from favorable convergence guarantees.
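A minimal sketch of SCAFFOLD's corrected client step in JAX, assuming control variates c (server) and c_k (client) are maintained as parameter-shaped trees; names are illustrative and the control-variate update rules are omitted.

```python
# Sketch of SCAFFOLD's corrected client step: the local gradient is adjusted
# by control variates (c_server = server, c_client = this client) so local
# updates stay aligned with the global direction despite heterogeneous data.
import jax

def scaffold_client_step(params, batch, c_server, c_client, grad_fn, lr=0.1):
    g = grad_fn(params, batch)
    # Corrected direction: g - c_k + c  (drift correction).
    return jax.tree_util.tree_map(
        lambda p, gi, ck, cs: p - lr * (gi - ck + cs),
        params, g, c_client, c_server)
```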
Communication-Efficient Agnostic Federated Averaging
TLDR
A communication-efficient distributed algorithm called AGNOSTIC FEDERATED AVERAGING (AGNOSTICFEDAVG) is proposed to minimize the domain-agnostic objective proposed in [1], which is amenable to other private mechanisms such as secure aggregation.
…