Corpus ID: 244714222

Dynamic Network-Assisted D2D-Aided Coded Distributed Learning

@inproceedings{Zeulin2021DynamicND,
  title={Dynamic Network-Assisted D2D-Aided Coded Distributed Learning},
  author={Nikita Zeulin and Olga Galinina and Nageen Himayat and Sergey D. Andreev and Robert W. Heath},
  year={2021}
}
Today, various machine learning (ML) applications offer continuous data processing and real-time analytics at the edge of a wireless network. Distributed real-time ML solutions are highly sensitive to the so-called straggler effect, which is caused by resource heterogeneity and can be alleviated by various computation offloading mechanisms; these, in turn, seriously challenge communication efficiency, especially in large-scale scenarios. To reduce the communication overhead, we rely on device-to-device (D2D… 
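The abstract is cut off above, but the coded-computing idea it invokes can be illustrated concretely. Below is a minimal sketch, assuming a linear least-squares model and a dense random linear code over gradient partitions; the model, the code matrix `A`, and the helpers `partial_gradient` and `worker_reply` are illustrative assumptions, not the paper's actual construction. The point is that any k of the n worker replies suffice to decode the exact full gradient, so the slowest workers can simply be ignored.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy workload: the full gradient of 0.5 * ||X w - y||^2 decomposes into a
# sum of per-partition gradients, which is what makes linear coding work.
n_samples, dim = 1200, 8
X = rng.normal(size=(n_samples, dim))
y = X @ rng.normal(size=dim) + 0.1 * rng.normal(size=n_samples)
w = np.zeros(dim)

k = 4          # number of data partitions = recovery threshold
n_workers = 6  # any k of the n_workers replies suffice to decode

parts = np.array_split(np.arange(n_samples), k)

def partial_gradient(idx):
    Xi, yi = X[idx], y[idx]
    return Xi.T @ (Xi @ w - yi)

# Dense random code matrix: worker j returns sum_i A[j, i] * g_i. Dense A
# implies full data replication; practical schemes use sparse or structured
# codes to bound per-worker computation and data-exchange traffic.
A = rng.normal(size=(n_workers, k))

def worker_reply(j):
    # Runs locally at worker j on its (coded) share of the data.
    return sum(A[j, i] * partial_gradient(parts[i]) for i in range(k))

# Suppose workers 1 and 4 straggle; the first k fast replies are enough.
fast = [0, 2, 3, 5]
replies = np.stack([worker_reply(j) for j in fast])   # shape (k, dim)

# Decode: solve A[fast] @ G = replies for the k partial gradients, then sum.
G_hat = np.linalg.solve(A[fast], replies)
g_coded = G_hat.sum(axis=0)

g_exact = X.T @ (X @ w - y)
assert np.allclose(g_coded, g_exact)
```

In this toy scheme, decoding is exact as soon as any k replies arrive, so the n_workers − k slowest devices contribute nothing and can be dropped; that is the straggler tolerance the abstract refers to, while the choice of code controls the trade-off between redundancy and per-device load.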
