Corpus ID: 52010607

An Analysis of Asynchronous Stochastic Accelerated Coordinate Descent

@article{Cole2018AnAO,
  title={An Analysis of Asynchronous Stochastic Accelerated Coordinate Descent},
  author={Richard J. Cole and Yixin Tao},
  journal={ArXiv},
  year={2018},
  volume={abs/1808.05156}
}
Gradient descent, and coordinate descent in particular, are core tools in machine learning and elsewhere. Large problem instances are common. To help solve them, two orthogonal approaches are known: acceleration and parallelism. In this work, we ask whether they can be used simultaneously. The answer is "yes". More specifically, we consider an asynchronous parallel version of the accelerated coordinate descent algorithm proposed and analyzed by Lin, Liu and Xiao (SIOPT'15). We give an analysis…
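To make the asynchronous setting concrete, below is a minimal sketch in Python of plain (non-accelerated) asynchronous stochastic coordinate descent on a strongly convex quadratic, with asynchrony simulated by computing each coordinate gradient at a stale iterate of bounded delay. This illustrates only the delayed-update model; it is not the accelerated algorithm of Lin, Liu and Xiao nor the analysis of this paper, and the problem data and constants (A, b, DELAY, STEPS) are illustrative assumptions.

# Minimal sketch (illustrative, not the paper's algorithm): stochastic coordinate
# descent on f(x) = 0.5*x^T A x - b^T x, with asynchrony modeled by letting each
# update read a stale iterate from a bounded-delay history.
import numpy as np

rng = np.random.default_rng(0)
n = 50
M = rng.standard_normal((n, n))
A = M.T @ M + np.eye(n)          # positive definite, so f is strongly convex
b = rng.standard_normal(n)
L = np.diag(A).copy()            # coordinate-wise Lipschitz constants of grad f

DELAY = 5                        # maximum staleness (bounded delay)
STEPS = 20000

x = np.zeros(n)
history = [x.copy()]             # recent iterates, to simulate delayed reads

for _ in range(STEPS):
    stale = history[-1 - rng.integers(0, min(DELAY, len(history)))]
    i = rng.integers(n)          # sample a coordinate uniformly at random
    g_i = A[i] @ stale - b[i]    # coordinate gradient computed at a stale point
    x[i] -= g_i / L[i]           # coordinate step with step size 1/L_i
    history.append(x.copy())
    if len(history) > DELAY + 1:
        history.pop(0)

x_star = np.linalg.solve(A, b)
print("distance to optimum:", np.linalg.norm(x - x_star))

An accelerated variant would additionally maintain momentum-style auxiliary sequences; the interaction between those sequences and stale reads is where the analysis becomes delicate.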
Citations

Optimal Parallelism Bound for Fully Asynchronous Coordinate Descent with Linear Speedup
When solving massive optimization problems in areas such as machine learning, it is a common practice to seek speedup via massive parallelism. However, especially in an asynchronous environment, …
(Near) Optimal Parallelism Bound for Fully Asynchronous Coordinate Descent with Linear Speedup
This work improves Liu and Wright's (SIOPT'15) lower bound on the maximum degree of parallelism almost quadratically, and shows that the new bound is almost optimal.
Fully asynchronous stochastic coordinate descent: a tight lower bound on the parallelism achieving linear speedup
The new lower bound on the maximum degree of parallelism attaining linear speedup is tight and improves the best prior bound almost quadratically.

References

Showing 1–10 of 21 references
Parallel coordinate descent methods for big data optimization
In this work we show that randomized (block) coordinate descent methods can be accelerated by parallelization when applied to the problem of minimizing the sum of a partially separable smooth convex …
Asynchronous Stochastic Coordinate Descent: Parallelism and Convergence Properties
An asynchronous parallel stochastic proximal coordinate descent algorithm for minimizing a composite objective function, which consists of a smooth convex function added to a separable convex function, achieves a linear convergence rate on functions that satisfy an optimal strong convexity property and a sublinear rate on general convex functions.
Accelerated, Parallel, and Proximal Coordinate Descent
A new randomized coordinate descent method for minimizing the sum of convex functions, each of which depends on only a small number of coordinates; the method can be implemented without full-dimensional vector operations, the major bottleneck of accelerated coordinate descent.
Asynchronous Coordinate Descent under More Realistic Assumptions
It is argued that the standard assumptions used to analyze asynchronous-parallel block coordinate descent either fail to hold or imply less efficient implementations, and convergence is proved under more realistic assumptions, always without the independence assumption.
Even Faster Accelerated Coordinate Descent Using Non-Uniform Sampling
This paper improves the best known running time of accelerated coordinate descent by a factor of up to $\sqrt{n}$, based on a clean, novel non-uniform sampling that selects each coordinate with a probability proportional to the square root of its smoothness parameter.
An asynchronous parallel stochastic coordinate descent algorithm
We describe an asynchronous parallel stochastic coordinate descent algorithm for minimizing smooth unconstrained or separably constrained functions. The method achieves a linear convergence rate on …
A2BCD: An Asynchronous Accelerated Block Coordinate Descent Algorithm With Optimal Complexity
This paper proves that A2BCD converges linearly to a solution, with a fast accelerated rate that matches the recently proposed NU_ACDM, and derives and analyzes a second-order ordinary differential equation that is the continuous-time limit of the algorithm.
Coordinate descent algorithms
A problem structure that arises frequently in machine learning applications is highlighted, and it is shown that efficient implementations of accelerated coordinate descent algorithms are possible for problems of this type.
Efficient Accelerated Coordinate Descent Methods and Faster Algorithms for Solving Linear Systems
  • Yin Tat Lee, Aaron Sidford
  • 2013 IEEE 54th Annual Symposium on Foundations of Computer Science
  • 2013
This paper shows how to generalize and efficiently implement a method proposed by Nesterov, giving faster asymptotic running times for various algorithms that use standard coordinate descent as a black box and improving the convergence guarantees for Kaczmarz methods.
Accelerating Asynchronous Algorithms for Convex Optimization by Momentum Compensation
This is the first work to consider accelerated algorithms that allow updates with delayed gradients and to propose truly accelerated asynchronous algorithms; experimental results on a shared-memory system show that the acceleration can lead to significant performance gains on ill-conditioned problems.