Corpus ID: 239024723

FedHe: Heterogeneous Models and Communication-Efficient Federated Learning

@article{Hin2021FedHeHM,
  title={FedHe: Heterogeneous Models and Communication-Efficient Federated Learning},
  author={Chan Yun Hin and Edith Ngai},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.09910}
}
Federated learning (FL) enables edge devices to cooperatively train a model while keeping the training data local and private. A common assumption in FL is that all edge devices share the same machine learning model during training, for example, an identical neural network architecture. However, the computation and storage capabilities of different devices may not be the same. Moreover, reducing communication overhead can improve training efficiency, though it is still a challenging…
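
The abstract contrasts FedHe with the conventional FL setup in which every client trains the same architecture and the server averages weights. For reference, the following is a minimal FedAvg-style sketch of that conventional round (not FedHe itself); the `local_train` helper and its toy least-squares objective are illustrative assumptions.

```python
# Minimal sketch of a conventional, same-architecture FL round (FedAvg-style),
# the setup the abstract says is commonly assumed. Not the FedHe algorithm.
import numpy as np

def local_train(weights, local_data, lr=0.1, steps=10):
    """Placeholder for a client's local SGD; here a toy least-squares problem."""
    X, y = local_data
    w = weights.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of 0.5*||Xw - y||^2 / n
        w -= lr * grad
    return w

def federated_round(global_w, client_datasets):
    """One synchronous round: every client shares the same model shape."""
    client_weights = [local_train(global_w, d) for d in client_datasets]
    return np.mean(client_weights, axis=0)  # server-side weight averaging

# Toy usage: three clients with private data and an identical model shape.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(3)]
w = np.zeros(5)
for _ in range(5):
    w = federated_round(w, clients)
```
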

References

Showing 1-10 of 22 references
FedAT: A Communication-Efficient Federated Learning Method with Asynchronous Tiers under Non-IID Data
TLDR: FedAT synergistically combines synchronous intra-tier training and asynchronous cross-tier training through tiering, which minimizes the straggler effect while improving convergence speed and test accuracy, and compresses uplink and downlink communication using an efficient polyline-encoding-based compression algorithm.
TiFL: A Tier-based Federated Learning System
TLDR: This work proposes TiFL, a Tier-based Federated Learning System, which divides clients into tiers based on their training performance and selects clients from the same tier in each training round to mitigate the straggler problem caused by heterogeneity in resource and data quantity.
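
A rough sketch of the tier-based selection idea described for TiFL: profile clients by observed training latency, group them into tiers, and draw each round's participants from a single tier so fast clients never wait on slow ones. The tier count and the uniform tier-selection policy below are illustrative assumptions, not the paper's exact scheduling rule.

```python
# Tier-based client selection sketch (assumed details, TiFL-style idea only).
import random

def assign_tiers(latencies, num_tiers=3):
    """Sort clients by measured latency and split them into roughly equal tiers."""
    order = sorted(latencies, key=latencies.get)
    size = max(1, len(order) // num_tiers)
    return [order[i:i + size] for i in range(0, len(order), size)]

def select_round_clients(tiers, clients_per_round=2):
    """Pick one tier (uniformly here; TiFL adapts this), then sample within it."""
    tier = random.choice([t for t in tiers if t])
    return random.sample(tier, min(clients_per_round, len(tier)))

latencies = {"c1": 0.8, "c2": 1.1, "c3": 3.5, "c4": 3.9, "c5": 9.0, "c6": 8.2}
tiers = assign_tiers(latencies)
print(select_round_clients(tiers))
```
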
Federated Multi-Task Learning
TLDR: This work shows that multi-task learning is naturally suited to handle the statistical challenges of this setting, and proposes a novel systems-aware optimization method, MOCHA, that is robust to practical systems issues.
Robust and Communication-Efficient Federated Learning From Non-i.i.d. Data
TLDR: Proposes sparse ternary compression (STC), a new compression framework specifically designed to meet the requirements of the federated learning environment, and advocates a paradigm shift in federated optimization toward high-frequency, low-bitwidth communication, in particular in bandwidth-constrained learning environments.
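
A sketch of the sparse ternary compression idea: keep only the top-k entries of an update by magnitude and replace each kept entry with a shared magnitude mu times its sign, so the payload reduces to k indices, k signs, and one float. The sparsity level, the absence of error accumulation, and the encoding details below are simplifying assumptions.

```python
# Sparse ternary compression sketch (simplified; no error feedback or Golomb coding).
import numpy as np

def stc_compress(update, sparsity=0.01):
    k = max(1, int(sparsity * update.size))
    idx = np.argpartition(np.abs(update), -k)[-k:]   # indices of top-k magnitudes
    mu = np.abs(update[idx]).mean()                  # one shared magnitude
    signs = np.sign(update[idx]).astype(np.int8)     # ternary values: -1, 0, +1
    return idx, signs, mu

def stc_decompress(idx, signs, mu, size):
    out = np.zeros(size, dtype=np.float32)
    out[idx] = signs * mu
    return out

g = np.random.default_rng(1).normal(size=10_000).astype(np.float32)
idx, signs, mu = stc_compress(g, sparsity=0.01)
g_hat = stc_decompress(idx, signs, mu, g.size)
```
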
Federated Optimization in Heterogeneous Networks
TLDR: This work introduces a framework, FedProx, to tackle heterogeneity in federated networks, and provides convergence guarantees for this framework when learning over data from non-identical distributions (statistical heterogeneity) while adhering to device-level systems constraints by allowing each participating device to perform a variable amount of work.
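
FedProx's local step augments each client's own loss with a proximal term (mu/2)*||w - w_global||^2 that keeps local updates close to the current global model. The sketch below shows that objective on a toy quadratic loss; the learning rate, step count, and loss itself are illustrative assumptions.

```python
# FedProx-style local update sketch: local loss plus a proximal regularizer.
import numpy as np

def fedprox_local_update(w_global, X, y, mu=0.1, lr=0.1, steps=10):
    w = w_global.copy()
    for _ in range(steps):
        grad_loss = X.T @ (X @ w - y) / len(y)  # gradient of the toy local loss
        grad_prox = mu * (w - w_global)         # gradient of (mu/2)*||w - w_global||^2
        w -= lr * (grad_loss + grad_prox)
    return w
```
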
FedH2L: Federated Learning with Model and Statistical Heterogeneity
TLDR: FedH2L relies on mutual distillation, exchanging only posteriors on a shared seed set between participants in a decentralized manner, which makes it extremely bandwidth efficient, model agnostic, and crucially produces models capable of performing well on the whole data distribution when learning from heterogeneous silos.
Federated Learning: Strategies for Improving Communication Efficiency
TLDR: Two ways to reduce uplink communication costs are proposed: structured updates, where the user directly learns an update from a restricted space parametrized using a smaller number of variables (e.g., either low-rank or a random mask); and sketched updates, which learn a full model update and then compress it using a combination of quantization, random rotations, and subsampling.
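
The two uplink-reduction ideas summarized above can be illustrated as follows: a "structured" update restricted to a random mask of coordinates, and a "sketched" update computed in full and then compressed, here by subsampling plus 1-bit sign quantization. The ratios and the particular quantizer are illustrative assumptions rather than the paper's exact construction.

```python
# Sketch of structured (random-mask) vs. sketched (subsample + quantize) updates.
import numpy as np

rng = np.random.default_rng(0)

def structured_masked_update(full_grad, mask_ratio=0.1):
    """Keep only a random subset of coordinates; the rest stay zero."""
    mask = rng.random(full_grad.shape) < mask_ratio
    return np.where(mask, full_grad, 0.0)

def sketched_update(full_grad, sample_ratio=0.1):
    """Compute the full update, then subsample and sign-quantize it for upload."""
    keep = rng.random(full_grad.shape) < sample_ratio
    scale = np.abs(full_grad[keep]).mean() if keep.any() else 0.0
    sketch = np.zeros_like(full_grad)
    sketch[keep] = np.sign(full_grad[keep]) * scale
    return sketch / max(sample_ratio, 1e-8)  # rescale to account for subsampling
```
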
FedMD: Heterogenous Federated Learning via Model Distillation
TLDR: This work uses transfer learning and knowledge distillation to develop a universal framework that enables federated learning when each agent owns not only their private data but also uniquely designed models.
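
A sketch of the model-distillation style of communication described for FedMD: clients with different architectures evaluate a shared public dataset, upload only their class scores, and then train toward the averaged consensus scores instead of exchanging model weights. The helper names and the MSE distillation loss below are illustrative assumptions.

```python
# Distillation-based communication sketch: share class scores, not weights.
import numpy as np

def aggregate_logits(per_client_logits):
    """Server step: average class scores on the public set across clients."""
    return np.mean(np.stack(per_client_logits), axis=0)

def distillation_loss(own_logits, consensus_logits):
    """Client step: regress toward the consensus (assumed loss choice)."""
    return np.mean((own_logits - consensus_logits) ** 2)

# Toy usage: 3 heterogeneous clients, 100 public samples, 10 classes.
rng = np.random.default_rng(2)
client_logits = [rng.normal(size=(100, 10)) for _ in range(3)]
consensus = aggregate_logits(client_logits)
print(distillation_loss(client_logits[0], consensus))
```
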
Federated Knowledge Distillation
TLDR: This chapter aims to demystify the operational principle of federated distillation (FD) and to provide a deep understanding of FD while demonstrating its communication efficiency and applicability to a variety of tasks.
Asynchronous Federated Optimization
TLDR: It is proved that the proposed asynchronous federated optimization algorithm has near-linear convergence to a global optimum, for both strongly and non-strongly convex problems, as well as a restricted family of non-convex problems.
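
In asynchronous federated optimization the server merges each client's result as soon as it arrives, mixing it into the global model with a weight that can be reduced for stale updates. The sketch below shows that server-side rule; the specific staleness decay used here is an illustrative assumption, not the function analyzed in the paper.

```python
# Asynchronous server update sketch: merge updates on arrival with staleness decay.
import numpy as np

def async_server_update(w_global, w_client, staleness, alpha=0.6):
    """Mix one client's model into the global model, down-weighting stale results."""
    alpha_t = alpha / (1.0 + staleness)  # assumed polynomial staleness decay
    return (1.0 - alpha_t) * w_global + alpha_t * w_client

w = np.zeros(5)
w = async_server_update(w, np.ones(5), staleness=3)
```
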