Distributed Coordinate Descent Method for Learning with Big Data


In this paper we develop and analyze Hydra: HYbriD cooRdinAte descent method for solving loss minimization problems with big data. We initially partition the coordinates (features) and assign each partition to a different node of a cluster. At every iteration, each node picks a random subset of the coordinates from those it owns, independently of the other nodes, and in parallel computes and applies updates to the selected coordinates based on a simple closed-form formula. We give bounds on the number of iterations sufficient to approximately solve the problem with high probability, and show how this number depends on the data and on the partitioning. We perform numerical experiments with a LASSO instance described by a 3TB matrix.
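The scheme described above can be sketched in a serial simulation. The code below is an illustrative sketch, not the paper's implementation: coordinates are split across `c` virtual nodes, each "node" updates `tau` randomly chosen coordinates it owns using the 1-D closed-form (soft-thresholding) step for LASSO, and the per-coordinate step sizes are taken as simple column-norm Lipschitz constants rather than the paper's partition-dependent stepsize parameter.

```python
import numpy as np

def soft_threshold(v, t):
    # Closed-form proximal step for t*|.| (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def hydra_sketch(A, b, lam, c=4, tau=2, iters=200, seed=0):
    """Serial simulation of the distributed scheme for LASSO:
    minimize 0.5*||Ax - b||^2 + lam*||x||_1.

    Coordinates are partitioned across `c` virtual nodes; in each
    iteration every node updates `tau` randomly chosen coordinates
    it owns via the 1-D closed-form update. Illustrative only: the
    step scaling here is the plain per-coordinate Lipschitz constant,
    not the partition-aware stepsize analyzed in the paper.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    x = np.zeros(n)
    L = (A ** 2).sum(axis=0) + 1e-12          # per-coordinate curvature
    parts = np.array_split(rng.permutation(n), c)  # coordinate partition
    r = A @ x - b                             # residual, kept up to date
    for _ in range(iters):
        for part in parts:                    # in the real method: in parallel
            picked = rng.choice(part, size=min(tau, len(part)), replace=False)
            for i in picked:
                g = A[:, i] @ r               # partial derivative w.r.t. x_i
                xi_new = soft_threshold(x[i] - g / L[i], lam / L[i])
                r += A[:, i] * (xi_new - x[i])
                x[i] = xi_new
    return x
```

On a small synthetic instance, each sweep decreases the LASSO objective, e.g. `x = hydra_sketch(A, b, lam=0.1)` followed by comparing `0.5*np.sum((A@x-b)**2) + 0.1*np.abs(x).sum()` against the objective at zero.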


Cite this paper

@article{Richtarik2016DistributedCD,
  title   = {Distributed Coordinate Descent Method for Learning with Big Data},
  author  = {Peter Richt{\'a}rik and Martin Tak{\'a}{\v c}},
  journal = {Journal of Machine Learning Research},
  year    = {2016},
  volume  = {17},
  pages   = {75:1--75:25}
}