Broadband Analog Aggregation for Low-Latency Federated Edge Learning

@article{Zhu2020BroadbandAA,
  title={Broadband Analog Aggregation for Low-Latency Federated Edge Learning},
  author={Guangxu Zhu and Yong Wang and Kaibin Huang},
  journal={IEEE Transactions on Wireless Communications},
  year={2020},
  volume={19},
  pages={491--506}
}
To leverage rich data distributed at the network edge, a new machine-learning paradigm, called edge learning, has emerged where learning algorithms are deployed at the edge for providing intelligent services to mobile users. [...] It is proposed that the updates simultaneously transmitted by devices over broadband channels should be analog aggregated "over-the-air" by exploiting the waveform-superposition property of a multi-access channel. Such broadband analog aggregation (BAA) results in [...]
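The waveform-superposition idea above can be sketched numerically: if each device pre-equalizes its transmission against its channel gain, the multi-access channel itself computes the sum of the local updates. A minimal sketch, assuming simple real-valued channel inversion; all variable names and parameter values are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
num_devices, dim = 10, 8

# Hypothetical local updates (e.g., gradients) held at each edge device.
local_updates = rng.normal(size=(num_devices, dim))
channel_gains = rng.uniform(0.5, 1.5, size=num_devices)

# Each device pre-equalizes its analog waveform by its channel gain
# (channel inversion): x_k = w_k / h_k.
tx = local_updates / channel_gains[:, None]

# The multi-access channel superposes h_k * x_k from all devices; the
# access point receives their sum plus noise in a single transmission.
noise = rng.normal(scale=0.01, size=dim)
received = (channel_gains[:, None] * tx).sum(axis=0) + noise

# Scaling by the number of devices recovers the (noisy) averaged update.
global_update = received / num_devices
```

Because every device transmits in the same time-frequency resource, the aggregation latency is independent of the number of devices, which is the source of the BAA latency gain.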
Federated Learning via Over-the-Air Computation
TLDR
A novel over-the-air computation based approach for fast global model aggregation via exploring the superposition property of a wireless multiple-access channel and providing a difference-of-convex-functions (DC) representation for the sparse and low-rank function to enhance sparsity and accurately detect the fixed-rank constraint in the procedure of device selection.
Reconfigurable Intelligent Surface Enabled Federated Learning: A Unified Communication-Learning Design Approach
TLDR
A learning analysis framework is developed to quantitatively characterize the impact of device selection and model aggregation error on the convergence of over-the-air FL, and a unified communication-learning optimization problem is formulated to jointly optimize device selection, over-the-air transceiver design, and RIS configuration.
An Introduction to Communication Efficient Edge Machine Learning
TLDR
An overview of the emerging area of communication efficient edge learning is provided by introducing new design principles, discussing promising research opportunities, and providing design examples based on recent work.
One-Bit Over-the-Air Aggregation for Communication-Efficient Federated Edge Learning
TLDR
A comprehensive analysis framework for quantifying the effects of wireless channel hostilities (channel noise and fading) on the convergence rate is developed, showing that the hostilities slow down the convergence of the learning process by introducing a scaling factor and a bias term into the gradient norm.
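The one-bit scheme summarized above can be illustrated with a signSGD-style majority vote over the air: devices send only gradient signs, superposition delivers their sum, and noise perturbs the vote. A hedged sketch; the bias, noise scale, and learning rate below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
num_devices, dim = 15, 6

# Hypothetical local gradients with a common positive bias, so devices
# tend to agree on the sign of each coordinate.
grads = rng.normal(size=(num_devices, dim)) + 0.5

# One-bit quantization: each device transmits only sign(g_k).
signs = np.sign(grads)

# Over-the-air superposition delivers the sum of the signs; channel
# noise perturbs the aggregate (the hostility effect analyzed above).
noise = rng.normal(scale=0.1, size=dim)
aggregate = signs.sum(axis=0) + noise

# The server takes a majority vote and applies a signSGD-style update.
vote = np.sign(aggregate)
model = np.zeros(dim) - 0.01 * vote
```

Channel noise and fading can flip some votes, which appears in the convergence analysis as the scaling factor and bias term mentioned in the summary.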
EdgeML: Towards Network-Accelerated Federated Learning over Wireless Edge
TLDR
This paper aims to accelerate FL convergence over wireless edge by optimizing the multi-hop federated networking performance, and develops and implements FedEdge, which is the first experimental framework in the literature for FL over multihop wireless edge computing networks.
Federated Learning over Wireless Networks: A Band-limited Coordinated Descent Approach
TLDR
This work considers a many-to-one wireless architecture for federated learning at the network edge, where multiple edge devices collaboratively train a model using local data, and proposes SGD-based bandlimited coordinate descent algorithms for such settings.
Blind Federated Edge Learning
TLDR
An analog 'over-the-air' aggregation scheme is introduced, in which the devices transmit their local updates in an uncoded fashion; the proposed algorithm becomes deterministic despite the lack of perfect CSI when the PS has a sufficiently large number of antennas.
Edge-Native Intelligence for 6G Communications Driven by Federated Learning: A Survey of Trends and Challenges
TLDR
An overview of the state-of-the-art of FL applications in key wireless technologies that will serve as a foundation to establish a firm understanding of the topic and offer a road forward for future research directions.
Data-Aware Device Scheduling for Federated Edge Learning
TLDR
This work proposes a new scheduling scheme for non-independent-and-identically-distributed (non-IID) and unbalanced datasets in FEEL and defines a general framework for data-aware scheduling, including the main axes and requirements for diversity evaluation.
Accelerating DNN Training in Wireless Federated Edge Learning Systems
TLDR
This work considers a newly emerged framework, namely federated edge learning, that aggregates local learning updates at the network edge in lieu of users' raw data to accelerate the training process, and suggests that the proposed algorithm is applicable in more general systems.

References

Showing 1-10 of 54 references
Federated Learning via Over-the-Air Computation
TLDR
A novel over-the-air computation based approach for fast global model aggregation via exploring the superposition property of a wireless multiple-access channel and providing a difference-of-convex-functions (DC) representation for the sparse and low-rank function to enhance sparsity and accurately detect the fixed-rank constraint in the procedure of device selection.
When Edge Meets Learning: Adaptive Control for Resource-Constrained Distributed Machine Learning
TLDR
This paper analyzes the convergence rate of distributed gradient descent from a theoretical point of view, and proposes a control algorithm that determines the best trade-off between local update and global parameter aggregation to minimize the loss function under a given resource budget.
Federated Learning: Strategies for Improving Communication Efficiency
TLDR
Two ways to reduce the uplink communication costs are proposed: structured updates, where the user directly learns an update from a restricted space parametrized using a smaller number of variables, e.g. either low-rank or a random mask; and sketched updates, which learn a full model update and then compress it using a combination of quantization, random rotations, and subsampling.
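The "sketched updates" idea summarized above can be illustrated with its subsampling variant: transmit a random fraction of coordinates, rescaled so the sketch is unbiased. A minimal sketch under assumed dimensions (the 10% sampling rate is an illustrative choice, not a value from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 1000
update = rng.normal(size=d)            # full local model update

# Subsampling sketch: transmit a random 10% of the coordinates,
# rescaled by d/k so the sketch is unbiased: E[sketch] = update.
k = d // 10
idx = rng.choice(d, size=k, replace=False)
sketch = np.zeros(d)
sketch[idx] = update[idx] * (d / k)

# Uplink cost falls from d values to k (index, value) pairs.
compression_ratio = d / k
```

Averaging many such unbiased sketches at the server approximates the average of the full updates, which is why the rescaling matters.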
Toward an Intelligent Edge: Wireless Communication Meets Machine Learning
TLDR
A new set of design guidelines for wireless communication in edge learning, collectively called learning-driven communication, is advocated, which crosses and revolutionizes two disciplines: wireless communication and machine learning.
Communication-Efficient Learning of Deep Networks from Decentralized Data
TLDR
This work presents a practical method for the federated learning of deep networks based on iterative model averaging, and conducts an extensive empirical evaluation, considering five different model architectures and four datasets.
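The iterative model averaging summarized above (federated averaging) can be sketched in a few lines: each client runs local gradient steps from the current global model, and the server averages the results. A toy sketch using quadratic local objectives; all objectives, step counts, and rates are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
num_clients, dim, rounds, local_steps, lr = 5, 4, 3, 2, 0.1

# Toy local objectives f_k(w) = ||w - c_k||^2 / 2 with optima at c_k.
centers = rng.normal(size=(num_clients, dim))
global_model = np.zeros(dim)

for _ in range(rounds):
    client_models = []
    for c in centers:
        w = global_model.copy()
        for _ in range(local_steps):   # local gradient steps
            w -= lr * (w - c)          # gradient of f_k at w
        client_models.append(w)
    # Server averages client models (equal weights for equal data sizes).
    global_model = np.mean(client_models, axis=0)
```

Each round moves the global model toward the mean of the client optima, trading extra local computation for fewer communication rounds.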
Machine Learning at the Wireless Edge: Distributed Stochastic Gradient Descent Over-the-Air
TLDR
This work introduces a novel analog scheme, called A-DSGD, which exploits the additive nature of the wireless MAC for over-the-air gradient computation, and provides convergence analysis for this approach.
A Survey on Mobile Edge Computing: The Communication Perspective
TLDR
A comprehensive survey of the state-of-the-art MEC research with a focus on joint radio-and-computational resource management and recent standardization efforts on MEC are introduced.
Reduced-Dimension Design of MIMO Over-the-Air Computing for Data Aggregation in Clustered IoT Networks
TLDR
A multiple-input-multiple-output (MIMO) AirComp framework for an IoT network with clustered multi-antenna sensors and an AP with large receive arrays is proposed, shown to substantially reduce AirComp error compared with the existing design without considering channel structures.
Wirelessly Powered Data Aggregation for IoT via Over-the-Air Function Computation: Beamforming and Power Control
TLDR
The results reveal that the optimal energy beams point to the dominant eigen-directions of the WPT channels, and the optimal power allocation tends to equalize the closed-loop (downlink WPT and uplink AirComp) effective channels of different sensors.
Efficient Decentralized Deep Learning by Dynamic Model Averaging
TLDR
An extensive empirical evaluation validates major improvement of the trade-off between model performance and communication which could be beneficial for numerous decentralized learning applications, such as autonomous driving, or voice recognition and image classification on mobile phones.