## Sparse Allreduce: Efficient Scalable Communication for Power-Law Data

- Huasha Zhao, John F. Canny
- Published 2013 in arXiv (CoRR abs/1312.3020)

Many large datasets exhibit power-law statistics: the web graph, social networks, text data, clickthrough data, etc. Their adjacency graphs are termed natural graphs, and are known to be difficult to partition. As a consequence, most distributed algorithms on these graphs are communication-intensive. Many algorithms on natural graphs involve an Allreduce: a sum or average of partitioned data which is then shared back to the cluster nodes. Examples include PageRank, spectral partitioning, and many machine learning algorithms including regression, factor (topic) models, and clustering. In this paper we describe an efficient and scalable Allreduce primitive for power-law data. We point out scaling problems with existing butterfly and round-robin networks for Sparse Allreduce, and show that a hybrid approach improves on both. Furthermore, we show that Sparse Allreduce stages should be nested instead of cascaded (as in the dense case), and that the optimum-throughput Allreduce network is a butterfly of heterogeneous degree, where degree decreases with depth into the network. Finally, a simple replication scheme is introduced to deal with node failures. We present experiments showing significant improvements over existing systems such as PowerGraph and Hadoop.

Keywords: Allreduce; butterfly network; fault tolerance; big data
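To make the Allreduce primitive concrete, here is a minimal in-process sketch of a recursive-doubling butterfly Allreduce over sparse vectors. This is an illustrative assumption, not the paper's implementation: the paper's hybrid network, nested stages, and heterogeneous degrees are not modeled, and each "node" is just a dict `{index: value}` that exchanges partial sums with partner `i XOR 2^r` at round `r`.

```python
# Hypothetical sketch of a butterfly (recursive-doubling) Allreduce
# over sparse vectors, simulated in a single process. Not the paper's
# actual system; helper names are illustrative.

def sparse_add(a, b):
    """Sum two sparse vectors represented as {index: value} dicts."""
    out = dict(a)
    for k, v in b.items():
        out[k] = out.get(k, 0) + v
    return out

def butterfly_allreduce(vectors):
    """Allreduce via recursive doubling; len(vectors) must be a power of two.

    At round r, node i combines its partial sum with that of partner
    i XOR 2^r. After log2(n) rounds every node holds the full sum.
    """
    n = len(vectors)
    assert n > 0 and n & (n - 1) == 0, "node count must be a power of two"
    state = list(vectors)
    r = 1
    while r < n:
        state = [sparse_add(state[i], state[i ^ r]) for i in range(n)]
        r <<= 1
    return state  # every node now holds the same total

if __name__ == "__main__":
    nodes = [{0: 1.0, 5: 2.0}, {5: 1.0}, {2: 3.0}, {0: 1.0}]
    result = butterfly_allreduce(nodes)
    print(result[0])  # full sum, e.g. {0: 2.0, 5: 3.0, 2: 3.0}
```

With sparse inputs, the message size at each round grows with the union of the partners' index sets — on power-law data this union grows quickly, which is the scaling problem with plain butterfly networks that the paper addresses.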

```bibtex
@article{Zhao2013SparseAE,
  title   = {Sparse Allreduce: Efficient Scalable Communication for Power-Law Data},
  author  = {Huasha Zhao and John F. Canny},
  journal = {CoRR},
  volume  = {abs/1312.3020},
  year    = {2013}
}
```