Corpus ID: 1234305

SCOPE: Scalable Composite Optimization for Learning on Spark

@article{Zhao2017SCOPESC,
  title={SCOPE: Scalable Composite Optimization for Learning on Spark},
  author={Shen-Yi Zhao and R. Xiang and Y. Shi and P. Gao and W. Li},
  journal={ArXiv},
  year={2017},
  volume={abs/1602.00133}
}
Abstract: Many machine learning models, such as logistic regression (LR) and support vector machines (SVM), can be formulated as composite optimization problems. Recently, many distributed stochastic optimization (DSO) methods have been proposed to solve large-scale composite optimization problems, and they have shown better performance than traditional batch methods. However, most of these DSO methods are not scalable enough. In this paper, we propose a novel DSO method, called scalable…
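The abstract notes that models like LR and SVM can be written as composite optimization problems, i.e., a smooth data-fitting term plus a (possibly non-smooth) regularizer. As a minimal illustrative sketch (not the paper's SCOPE algorithm), the composite objective for L2-regularized logistic regression can be evaluated like this; the function names and synthetic data are my own for illustration:

```python
import numpy as np

def logistic_loss(w, X, y):
    # Smooth term f(w): average logistic loss (1/n) * sum log(1 + exp(-y_i * x_i^T w)),
    # with labels y_i in {-1, +1}
    margins = y * (X @ w)
    return np.mean(np.log1p(np.exp(-margins)))

def composite_objective(w, X, y, lam):
    # Composite form P(w) = f(w) + R(w): smooth data term plus L2 regularizer
    return logistic_loss(w, X, y) + 0.5 * lam * np.dot(w, w)

# Tiny synthetic example
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = np.sign(rng.normal(size=100))
w = np.zeros(5)
print(composite_objective(w, X, y, lam=0.01))  # log(2) ~ 0.6931 at w = 0
```

Replacing the L2 term with an L1 penalty `lam * np.abs(w).sum()` gives a non-smooth composite problem of the same form, which is the setting these DSO methods target.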
8 Citations
  • Proximal SCOPE for Distributed Sparse Learning: Better Data Partition Implies Faster Convergence Rate
  • Proximal SCOPE for Distributed Sparse Learning
  • BASGD: Buffered Asynchronous SGD for Byzantine Learning
  • Convergence of Distributed Stochastic Variance Reduced Methods Without Sampling Extra Data
  • Asynchronous Stochastic Gradient Descent and Its Lock-Free Convergence Guarantee
  • Distributed Learning of Non-convex Linear Models with One Round of Communication
  • Distributed Learning of Neural Networks with One Round of Communication
  • Variance Reduction for Distributed Stochastic Gradient Descent
