DS-MLR: Exploiting Double Separability for Scaling up Distributed Multinomial Logistic Regression

Abstract

Multinomial logistic regression is a popular tool in the arsenal of machine learning algorithms, yet scaling it to datasets with a very large number of data points and classes has not been trivial. This is primarily because one needs to compute the log-partition function on every data point, which makes distributing the computation hard. In this paper, we present a distributed stochastic gradient descent based optimization method (DS-MLR) for scaling up multinomial logistic regression problems to very large data. Our algorithm exploits double separability, an attractive property we observe in the objective functions of several machine learning models, which allows us to distribute both data and model parameters simultaneously across multiple machines. In addition to being easily parallelizable, our algorithm achieves good test accuracy within a short period of time, with a low overall time and memory footprint, as demonstrated by empirical results in both single- and multi-machine settings. For instance, on a dataset with 93,805 training instances and 12,294 classes, we achieve close to optimal f-score in 10,000 seconds using 2 machines, each having 12 cores.
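To make concrete why the log-partition function is the scaling bottleneck, the sketch below computes the multinomial logistic regression negative log-likelihood with NumPy. Note that the log-sum-exp term couples all classes for every data point, so a naive implementation must touch the full weight matrix per example. This is an illustrative sketch, not the DS-MLR algorithm itself; all names (`mlr_loss`, `W`, `X`, `y`) are ours.

```python
import numpy as np

def mlr_loss(W, X, y):
    """Negative log-likelihood of multinomial logistic regression.

    W: (K, D) weight matrix, one row per class (illustrative names).
    X: (N, D) data matrix. y: (N,) integer class labels in [0, K).
    """
    scores = X @ W.T                         # (N, K) class scores
    # The log-partition term log Z_i = log sum_k exp(w_k . x_i)
    # couples every class for each data point; computed here with
    # the max-subtraction trick for numerical stability.
    m = scores.max(axis=1, keepdims=True)
    log_Z = m[:, 0] + np.log(np.exp(scores - m).sum(axis=1))
    correct = scores[np.arange(len(y)), y]   # score of the true class
    return float(np.mean(log_Z - correct))
```

With all-zero weights the scores are uniform and the loss reduces to log K, a quick sanity check on the log-partition computation.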


Cite this paper

@article{Raman2016DSMLRED,
  title   = {DS-MLR: Exploiting Double Separability for Scaling up Distributed Multinomial Logistic Regression},
  author  = {Parameswaran Raman and Shin Matsushima and Xinhua Zhang and Hyokun Yun and S. V. N. Vishwanathan},
  journal = {CoRR},
  year    = {2016},
  volume  = {abs/1604.04706}
}