Adding vs. Averaging in Distributed Primal-Dual Optimization

Abstract

Distributed optimization methods for large-scale machine learning suffer from a communication bottleneck. It is difficult to reduce this bottleneck while still efficiently and accurately aggregating partial work from different machines. In this paper, we present a novel generalization of the recent communication-efficient primal-dual framework (CoCoA) for distributed optimization. Our framework, CoCoA+, allows for additive combination of local updates to the global parameters at each iteration, whereas previous schemes only allow conservative averaging. We give stronger (primal-dual) convergence rate guarantees for both CoCoA and our new variants, and generalize the theory for both methods to cover non-smooth convex loss functions. We provide an extensive experimental comparison that shows the markedly improved performance of CoCoA+ on several real-world distributed datasets, especially when scaling up the number of machines.
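
The key algorithmic difference is how the K machines' local updates are combined into the shared parameter vector at each outer iteration: CoCoA averages them, while CoCoA+ adds them. The following is a minimal Python sketch of that aggregation step only, assuming each machine returns an update restricted to its own coordinate block; the function names are hypothetical, and the local subproblem solvers and the scaling that makes additive combination safe are omitted.

import numpy as np

# Illustrative sketch only: the helper names below are hypothetical, not the
# authors' implementation. K workers each return an update delta_k that is
# nonzero only on their own block of coordinates of the shared vector alpha.

def aggregate_averaging(alpha, local_updates):
    # CoCoA-style conservative combination: average the K local updates.
    K = len(local_updates)
    for delta in local_updates:
        alpha = alpha + delta / K
    return alpha

def aggregate_adding(alpha, local_updates):
    # CoCoA+-style combination: add the K local updates directly. This is
    # safe only because each local subproblem is made more conservative via
    # a scaling parameter (sigma' in the paper), which is not modeled here.
    for delta in local_updates:
        alpha = alpha + delta
    return alpha

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    alpha = np.zeros(8)
    # Toy example: 4 workers, each owning a disjoint block of 2 coordinates.
    updates = [rng.normal(size=8) * (np.arange(8) // 2 == k) for k in range(4)]
    print("averaged:", aggregate_averaging(alpha, updates))
    print("added:   ", aggregate_adding(alpha, updates))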


Cite this paper

@inproceedings{Ma2015AddingVA,
  title     = {Adding vs. Averaging in Distributed Primal-Dual Optimization},
  author    = {Chenxin Ma and Virginia Smith and Martin Jaggi and Michael I. Jordan and Peter Richt{\'a}rik and Martin Tak{\'a}{\v{c}}},
  booktitle = {ICML},
  year      = {2015}
}