Corpus ID: 195750777

Asymptotic Network Independence in Distributed Optimization for Machine Learning

@article{Olshevsky2019AsymptoticNI,
  title={Asymptotic Network Independence in Distributed Optimization for Machine Learning},
  author={A. Olshevsky and I. Paschalidis and Shi Pu},
  journal={ArXiv},
  year={2019},
  volume={abs/1906.12345}
}
We provide a discussion of several recent results which have overcome a key barrier in distributed optimization for machine learning. Our focus is the so-called network independence property, which is achieved whenever a distributed method executed over a network of $n$ nodes achieves performance comparable to a centralized method with the same computational power as the entire network. We explain this property through an example involving the training of ML models and sketch a short mathematical…
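The network independence property described in the abstract can be illustrated with a toy experiment: decentralized SGD, where each node gossips with its neighbors and takes a local noisy gradient step, asymptotically matches centralized SGD that aggregates all $n$ stochastic gradients per step. The sketch below is illustrative only and is not the paper's own algorithm or analysis; the quadratic objective, ring topology, Metropolis weights, and all parameter choices are assumptions made for the example.

```python
import numpy as np

# Toy objective: f(x) = (1/n) * sum_i (x - a_i)^2 / 2, minimized at x* = mean(a).
# Each node i only sees noisy gradients of its local term (x - a_i)^2 / 2.
rng = np.random.default_rng(0)
n = 10                       # number of nodes
a = rng.normal(size=n)       # local optima; the global optimum is a.mean()
x_star = a.mean()

# Doubly stochastic mixing matrix for a ring graph (Metropolis-style weights).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

T = 2000
sigma = 1.0                  # gradient-noise level

x_dist = np.zeros(n)         # one iterate per node (decentralized)
x_cent = 0.0                 # single iterate (centralized baseline)
for t in range(1, T + 1):
    step = 1.0 / t           # diminishing step size
    # Decentralized: gossip-average with neighbors, then local noisy gradient step.
    grads = (x_dist - a) + sigma * rng.normal(size=n)
    x_dist = W @ x_dist - step * grads
    # Centralized: average all n stochastic gradients in a single update.
    g = np.mean((x_cent - a) + sigma * rng.normal(size=n))
    x_cent -= step * g

err_dist = (x_dist.mean() - x_star) ** 2
err_cent = (x_cent - x_star) ** 2
print(f"decentralized error: {err_dist:.2e}, centralized error: {err_cent:.2e}")
```

After a transient phase whose length depends on the network, both errors decay at the same noise-dominated rate, which is the asymptotic network independence phenomenon the survey discusses.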
