Corpus ID: 246240536

Communication-Efficient Stochastic Zeroth-Order Optimization for Federated Learning

@article{Fang2022CommunicationEfficientSZ,
  title={Communication-Efficient Stochastic Zeroth-Order Optimization for Federated Learning},
  author={Wenzhi Fang and Ziyi Yu and Yuning Jiang and Yuanming Shi and Colin Neil Jones and Yong Zhou},
  journal={ArXiv},
  year={2022},
  volume={abs/2201.09531}
}
Federated learning (FL), as an emerging edge artificial intelligence paradigm, enables many edge devices to collaboratively train a global model without sharing their private data. To enhance the training efficiency of FL, various algorithms have been proposed, ranging from first-order to second-order methods. However, these algorithms cannot be applied in scenarios where gradient information is unavailable, e.g., federated black-box attacks and federated hyperparameter tuning. To address… 
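For readers unfamiliar with the zeroth-order setting: the core primitive is a gradient estimate built purely from loss evaluations, which can then stand in for the true gradient inside standard FL updates. Below is a minimal sketch of a two-point stochastic zeroth-order estimator; the Gaussian sampling directions, smoothing parameter `mu`, and toy quadratic are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def zo_gradient(loss, x, mu=1e-4, num_dirs=10, rng=None):
    """Two-point stochastic zeroth-order gradient estimate.

    Approximates grad loss(x) using only function evaluations, by
    averaging (loss(x + mu*u) - loss(x - mu*u)) / (2*mu) * u over
    random Gaussian directions u. All defaults are illustrative.
    """
    rng = rng or np.random.default_rng()
    d = x.shape[0]
    g = np.zeros(d)
    for _ in range(num_dirs):
        u = rng.standard_normal(d)
        g += (loss(x + mu * u) - loss(x - mu * u)) / (2 * mu) * u
    return g / num_dirs

# Usage: descend a toy quadratic without ever calling its gradient.
A = np.diag([1.0, 10.0])
loss = lambda x: 0.5 * x @ A @ x
x = np.array([5.0, 5.0])
for _ in range(200):
    x -= 0.02 * zo_gradient(loss, x)
print(x)  # should approach the minimizer [0, 0]
```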
Interference Management for Over-the-Air Federated Learning in Multi-Cell Wireless Networks
TLDR
This paper investigates FL over a multi-cell wireless network, where each cell performs a different FL task and over-the-air computation (AirComp) is adopted to enable fast uplink gradient aggregation, and formulates an optimization problem to minimize the sum of error-induced gaps across all cells.
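As background for the AirComp-based entries below, over-the-air computation lets the wireless channel itself perform the sum: devices transmit simultaneously and the receiver observes their superposition plus noise. A toy sketch of the idea, assuming ideal channel-inversion precoding and additive Gaussian receiver noise (all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
num_devices, dim, noise_std = 10, 4, 0.1

# Each device holds a local model update (e.g., a gradient).
updates = [rng.standard_normal(dim) for _ in range(num_devices)]

# Devices transmit simultaneously; the channel sums their signals and
# adds receiver noise, so aggregation costs one channel use instead of
# num_devices separate uplink transmissions.
received = sum(updates) + noise_std * rng.standard_normal(dim)

aircomp_avg = received / num_devices     # noisy over-the-air average
exact_avg = np.mean(updates, axis=0)     # error-free baseline
print(np.linalg.norm(aircomp_avg - exact_avg))  # aggregation error
```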

References

SHOWING 1-10 OF 44 REFERENCES
FedPD: A Federated Learning Framework With Adaptivity to Non-IID Data
TLDR
This paper characterizes the behavior of the FedAvg algorithm, showing that it can behave erratically unless strong and unrealistic assumptions are imposed on the problem structure, and proposes a new algorithm design strategy from the primal-dual optimization perspective that achieves the best possible optimization and communication complexity.
Reconfigurable Intelligent Surface Enabled Federated Learning: A Unified Communication-Learning Design Approach
TLDR
A learning analysis framework is developed to quantitatively characterize the impact of device selection and model aggregation error on the convergence of over-the-air FL, and a unified communication-learning optimization problem is formulated to jointly optimize device selection, over-the-air transceiver design, and RIS configuration.
Federated Learning via Over-the-Air Computation
TLDR
A novel over-the-air computation based approach for fast global model aggregation that exploits the superposition property of a wireless multiple-access channel, together with a difference-of-convex-functions (DC) representation of the sparse and low-rank functions to enhance sparsity and accurately handle the fixed-rank constraint in the device selection procedure.
Communication-Efficient Decentralized Zeroth-order Method on Heterogeneous Data
  • Zan Li, Li Chen
  • 2021 13th International Conference on Wireless Communications and Signal Processing (WCSP)
  • 2021
TLDR
This paper considers a local zeroth-order algorithm based on a biased stochastic zeroth-order update in the decentralized federated learning setting, proves concise convergence rates on strongly convex problems, and shows that the rate matches that of decentralized local stochastic gradient descent (local SGD) up to a factor proportional to the parameter dimension.
Federated Learning via Intelligent Reflecting Surface
TLDR
This paper proposes to leverage an intelligent reflecting surface (IRS) to achieve fast yet reliable model aggregation for AirComp-based FL and develops an alternating optimization framework, supported by difference-of-convex programming for low-rank optimization, to efficiently design the aggregation beamformers at the BS and the phase shifts at the IRS.
Optimized Power Control Design for Over-the-Air Federated Edge Learning
TLDR
This paper investigates transmission power control to combat aggregation errors in Air-FEEL and proposes a new power control design that directly maximizes the convergence speed, using the Lagrangian duality method.
Over-the-Air Federated Learning From Heterogeneous Data
TLDR
A Convergent OTA FL (COTAF) algorithm is developed that enhances the common local stochastic gradient descent (SGD) FL algorithm by introducing precoding at the users and scaling at the server; this gradually mitigates the effect of noise and achieves a convergence rate similar to that achievable over error-free channels.
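The precoding-and-scaling idea can be sketched in a few lines: users amplify their updates by a time-varying factor before transmission, and the server divides it back out, so the effective noise added to the aggregated model shrinks as training proceeds. A toy version, assuming a precoding factor that grows with the round index; the schedule below is illustrative, not the one derived in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
num_users, dim, noise_std, rounds = 5, 3, 0.5, 4

model = np.zeros(dim)
for t in range(1, rounds + 1):
    # Local updates (stand-ins for the results of local SGD).
    updates = [rng.standard_normal(dim) * 0.1 for _ in range(num_users)]

    alpha = np.sqrt(t)  # illustrative growing precoding factor
    # Users transmit alpha * update; the channel sums them and adds noise.
    received = alpha * sum(updates) + noise_std * rng.standard_normal(dim)
    # Server undoes the precoding; effective noise std is noise_std / alpha.
    model += received / (alpha * num_users)
print(model)
```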
On the Convergence of FedAvg on Non-IID Data
TLDR
This paper analyzes the convergence of Federated Averaging on non-iid data and establishes a convergence rate of $\mathcal{O}(\frac{1}{T})$ for strongly convex and smooth problems, where $T$ is the number of SGD iterations.
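For reference, FedAvg alternates a few local SGD steps on each client with periodic model averaging at the server. A minimal sketch on a toy least-squares problem, where the client data, step size, and step counts are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
num_clients, dim, local_steps, rounds, lr = 4, 3, 5, 20, 0.05

# Heterogeneous (non-iid) client objectives: f_i(x) = 0.5*||A_i x - b_i||^2.
A = [rng.standard_normal((8, dim)) for _ in range(num_clients)]
b = [rng.standard_normal(8) for _ in range(num_clients)]

x = np.zeros(dim)
for _ in range(rounds):
    local_models = []
    for i in range(num_clients):
        xi = x.copy()
        for _ in range(local_steps):            # local SGD (full gradient here)
            xi -= lr * A[i].T @ (A[i] @ xi - b[i])
        local_models.append(xi)
    x = np.mean(local_models, axis=0)           # server averages local models
print(x)
```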
A Novel Framework for the Analysis and Design of Heterogeneous Federated Learning
TLDR
This paper provides a general framework to analyze the convergence of federated optimization algorithms with heterogeneous local training progress at clients and proposes FedNova, a normalized averaging method that eliminates objective inconsistency while preserving fast error convergence.
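The normalized-averaging step can be shown compactly: each client's cumulative update is divided by its own number of local steps before aggregation, so clients that run more steps do not drag the global model toward their local minimizers. A sketch assuming plain local-SGD clients and uniform weights; the helper `fednova_round` and all parameter values are illustrative, not the paper's exact formulation:

```python
import numpy as np

def fednova_round(x, client_grads, local_steps, lr, tau_eff=None):
    """One FedNova-style server round with plain local-SGD clients.

    client_grads[i](x) returns client i's gradient; local_steps[i] is the
    (heterogeneous) number of local steps client i performs.
    """
    normalized = []
    for grad, tau in zip(client_grads, local_steps):
        xi = x.copy()
        for _ in range(tau):
            xi -= lr * grad(xi)
        normalized.append((x - xi) / tau)   # normalize by local progress
    if tau_eff is None:
        tau_eff = np.mean(local_steps)      # effective number of steps
    return x - tau_eff * np.mean(normalized, axis=0)

# Usage: two clients with different objectives and step counts.
g1 = lambda x: x - 1.0      # client 1's minimizer at +1
g2 = lambda x: x + 1.0      # client 2's minimizer at -1
x = np.array([0.5])
for _ in range(50):
    x = fednova_round(x, [g1, g2], [2, 10], lr=0.1)
print(x)  # close to 0; unnormalized averaging would be biased toward client 2
```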
FedSplit: An algorithmic framework for fast federated optimization
TLDR
FedSplit is introduced, a class of algorithms based on operator-splitting procedures for solving distributed convex minimization with additive structure; the theory shows that these methods are provably robust to inexact computation of intermediate local quantities.
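For quadratic client losses, the operator-splitting template behind FedSplit has a closed-form local prox step (a linear solve), which makes it easy to sketch. The snippet below follows the splitting pattern of a local prox step plus a local centering step; the step size `s` and problem data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
num_clients, dim, s = 3, 2, 0.5   # s: prox step size (illustrative)

# Quadratic client losses f_i(x) = 0.5 * ||A_i x - b_i||^2.
A = [rng.standard_normal((5, dim)) for _ in range(num_clients)]
b = [rng.standard_normal(5) for _ in range(num_clients)]

def prox(i, v):
    # prox_{s f_i}(v) = argmin_x f_i(x) + ||x - v||^2 / (2s): a linear solve.
    return np.linalg.solve(np.eye(dim) + s * A[i].T @ A[i],
                           v + s * A[i].T @ b[i])

z = [np.zeros(dim) for _ in range(num_clients)]
for _ in range(50):
    zbar = np.mean(z, axis=0)
    half = [prox(i, 2 * zbar - z[i]) for i in range(num_clients)]  # local prox
    z = [z[i] + 2 * (half[i] - zbar) for i in range(num_clients)]  # centering

# Compare against the exact minimizer of the summed objective.
x_star = np.linalg.solve(sum(A[i].T @ A[i] for i in range(num_clients)),
                         sum(A[i].T @ b[i] for i in range(num_clients)))
print(np.linalg.norm(np.mean(half, axis=0) - x_star))  # should be near zero
```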
...