Computation Scheduling for Distributed Machine Learning with Straggling Workers

  title={Computation Scheduling for Distributed Machine Learning with Straggling Workers},
  author={Mohammad Mohammadi Amiri and Deniz Gunduz},
  • M. Amiri, D. Gunduz
  • Published 23 October 2018
  • Computer Science, Mathematics
We study the scheduling of computation tasks across n workers in a large-scale distributed learning problem with the help of a master. Computation and communication delays are assumed to be random, and redundant computations are assigned to workers in order to tolerate stragglers. We consider sequential computation of the tasks assigned to a worker, where the result of each computation is sent to the master right after its completion. Each computation round, which can model an iteration of the…
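The scheduling idea described above can be sketched in simulation. The snippet below is a minimal, hypothetical illustration (the function names, the cyclic-shift assignment, and the exponential delay model are assumptions for the sketch, not the paper's actual scheme): each task is redundantly assigned to r workers, each worker computes its tasks sequentially under random delays, and the master takes each task's result from whichever assigned worker finishes it first, thereby tolerating stragglers.

```python
import random

def schedule_redundant(n_tasks, n_workers, r):
    """Assign each task to r distinct workers via cyclic shifts (illustrative)."""
    return {t: [(t + s) % n_workers for s in range(r)] for t in range(n_tasks)}

def simulate(n_tasks=6, n_workers=6, r=2, seed=0):
    rng = random.Random(seed)
    assignment = schedule_redundant(n_tasks, n_workers, r)

    # Build each worker's sequential task queue from the redundant assignment.
    worker_tasks = {w: [] for w in range(n_workers)}
    for t, workers in assignment.items():
        for w in workers:
            worker_tasks[w].append(t)

    # Workers process their queues in order; each task's result is forwarded
    # to the master as soon as it completes. The master keeps, per task, the
    # earliest completion time across all workers assigned that task.
    finish = {}
    for w, tasks in worker_tasks.items():
        clock = 0.0
        for t in tasks:
            clock += rng.expovariate(1.0)  # random per-task computation delay
            finish[t] = min(finish.get(t, float("inf")), clock)
    return finish

if __name__ == "__main__":
    print(sorted(simulate().items()))
```

With redundancy r > 1, a slow worker delays a task only if every other worker holding a copy of that task is also slow, which is the intuition behind straggler-tolerant assignment.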
A framework for scheduling IoT application jobs on fog computing infrastructure based on QoS parameters
The purpose of this study is to examine how effective scheduling can be when application categories are segmented and resources are allocated based on each application's specific category.
Slow and Stale Gradients Can Win the Race
This work presents a novel theoretical characterization of the speed-up offered by asynchronous SGD methods by analyzing the trade-off between the error in the trained model and the actual training runtime (wallclock time).
Distributed regularized stochastic configuration networks via the elastic net
The experiment results show that the proposed distributed regularized stochastic configuration network has relative advantages in terms of accuracy and stability compared with the distributed random vector functional link network.
Machine Learning at the Wireless Edge: Distributed Stochastic Gradient Descent Over-the-Air
This work introduces a novel analog scheme, called A-DSGD, which exploits the additive nature of the wireless MAC for over-the-air gradient computation, and provides convergence analysis for this approach.