We show that when the associated optimization problem is sparse, meaning most gradient updates only modify a small part of the decision variable, then HOGWILD! achieves a nearly optimal rate of convergence.

We present an algorithmic framework for the more general problem of minimizing the sum of a smooth convex function and a nonsmooth, possibly nonconvex regularizer.

The goal of the sparse approximation problem is to approximate a target signal using a linear combination of a few elementary signals drawn from a fixed collection.
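A standard greedy method for this problem is orthogonal matching pursuit (OMP): repeatedly pick the dictionary atom most correlated with the current residual, then re-fit the target on all atoms chosen so far. The sketch below is illustrative only (the dictionary, signal, and `omp` helper are my own assumptions, not from the cited paper), and it uses an easy orthonormal dictionary so that exact recovery is guaranteed.

```python
import numpy as np

def omp(D, y, k):
    """Greedy sparse approximation of y by k columns (atoms) of D."""
    m, n = D.shape
    support = []
    r = y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ r)))  # atom most correlated with residual
        support.append(j)
        # re-fit y by least squares on the atoms selected so far
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        r = y - D[:, support] @ coef
    x = np.zeros(n)
    x[support] = coef
    return x

# Illustrative instance: a dictionary with orthonormal atoms and a
# 3-sparse target; in this easy case OMP recovers the coefficients exactly.
rng = np.random.default_rng(2)
D, _ = np.linalg.qr(rng.standard_normal((30, 20)))  # orthonormal columns
x_true = np.zeros(20)
x_true[[3, 7, 12]] = [1.5, -2.0, 1.0]
y = D @ x_true
x_hat = omp(D, y, k=3)
```

For general (coherent) dictionaries, recovery is not guaranteed; the snippet's point is only the select-then-refit structure of the greedy step.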

We investigate the potential for significant power savings in operational networks by including power-awareness in the design and configuration of networks, and in the implementation of network protocols.

This paper describes the fundamentals of the coordinate descent approach, together with variants and extensions and their convergence properties, mostly with reference to convex objectives.
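The basic idea is simple: minimize the objective along one coordinate at a time while holding the others fixed. A minimal sketch for a smooth convex objective, here least squares with exact per-coordinate minimization (the data and function name are illustrative assumptions, not from the surveyed paper):

```python
import numpy as np

def coordinate_descent_ls(A, b, n_sweeps=100):
    """Cyclic coordinate descent on f(x) = 0.5 * ||Ax - b||^2."""
    m, n = A.shape
    x = np.zeros(n)
    r = A @ x - b                    # residual, updated incrementally
    col_sq = (A ** 2).sum(axis=0)    # squared column norms = diagonal Hessian entries
    for _ in range(n_sweeps):
        for j in range(n):
            # exact minimization along coordinate j:
            # step = -grad_j / H_jj = -(A[:, j] . r) / ||A[:, j]||^2
            step = -(A[:, j] @ r) / col_sq[j]
            x[j] += step
            r += step * A[:, j]      # keep residual consistent with x
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5))
b = rng.standard_normal(50)
x = coordinate_descent_ls(A, b)
```

Maintaining the residual incrementally makes each coordinate update O(m) rather than recomputing Ax - b from scratch, which is the usual reason coordinate methods are cheap per iteration.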

We present a structured interior-point method for the efficient solution of the optimal control problem in model predictive control. The cost of this approach is linear in the horizon length, …

We describe an asynchronous parallel stochastic proximal coordinate descent algorithm for minimizing a composite objective function, which consists of a smooth convex function added to a separable convex function.
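Because the nonsmooth term is separable, each coordinate update reduces to a scalar proximal step. As a concrete instance, here is a serial sketch of proximal coordinate descent on the lasso objective 0.5·||Ax − b||² + λ·||x||₁, where the prox is soft-thresholding; this is only the serial analogue of the update rule, not the paper's asynchronous implementation, and the data and λ are illustrative assumptions:

```python
import numpy as np

def soft_threshold(z, t):
    """Prox of t * |.| at z."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_cd_lasso(A, b, lam, n_sweeps=200):
    """Cyclic proximal coordinate descent on 0.5*||Ax-b||^2 + lam*||x||_1."""
    m, n = A.shape
    x = np.zeros(n)
    r = A @ x - b                    # residual, updated incrementally
    col_sq = (A ** 2).sum(axis=0)
    for _ in range(n_sweeps):
        for j in range(n):
            # smooth-part gradient w.r.t. x_j with x_j's contribution removed
            rho = A[:, j] @ r - col_sq[j] * x[j]
            old = x[j]
            # exact scalar minimization: soft-threshold then rescale
            x[j] = soft_threshold(-rho, lam) / col_sq[j]
            r += (x[j] - old) * A[:, j]
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 8))
b = rng.standard_normal(40)
x = prox_cd_lasso(A, b, lam=1.0)
```

At a solution the KKT conditions hold: every component of the gradient of the smooth part lies in [−λ, λ], with equality to −λ·sign(x_j) on the nonzero coordinates.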