We study the fundamental limits to communication-efficient distributed methods for convex learning and optimization, under different assumptions on the information available to individual…

We develop a novel framework to study smooth and strongly convex optimization algorithms, both deterministic and stochastic. Focusing on quadratic functions, we are able to examine optimization…

In this thesis we develop a novel framework to study smooth and strongly convex optimization algorithms, both deterministic and stochastic. Focusing on quadratic functions, we are able to examine…

Finite-sum optimization problems are ubiquitous in machine learning, and are commonly solved using first-order methods which rely on gradient computations. Recently, there has been growing interest…
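To make the finite-sum setting concrete, here is a minimal sketch (not from the abstract above; the objective and step-size schedule are illustrative choices) of a stochastic first-order method on an average of n component functions, where each iteration touches a single component gradient:

```python
import random

# Illustrative finite-sum objective: f(x) = (1/n) * sum_i (x - a_i)^2,
# whose minimizer is the mean of the a_i (here 2.5).
a = [1.0, 2.0, 3.0, 4.0]
n = len(a)

def grad_i(x, i):
    # Gradient of the i-th component f_i(x) = (x - a_i)^2.
    return 2.0 * (x - a[i])

# Stochastic gradient descent: each step uses one component gradient,
# so the per-iteration cost is independent of n.
random.seed(0)
x = 0.0
for t in range(1, 5001):
    i = random.randrange(n)
    x -= (1.0 / (2.0 * t)) * grad_i(x, i)  # decaying step size
```

With this particular step size the iterate reduces to a running average of the sampled targets, so x approaches the mean of `a`.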

We consider a broad class of first-order optimization algorithms which are oblivious, in the sense that their step sizes are scheduled regardless of the function under consideration, except for…
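A minimal sketch of what "oblivious" means here, under illustrative assumptions: the entire step-size schedule is fixed in advance from a smoothness bound alone, and is never adapted to the iterates observed during the run.

```python
# Illustrative objective f(x) = (x - 3)^2, gradient f'(x) = 2*(x - 3).
def grad(x):
    return 2.0 * (x - 3.0)

# Assumed (conservative) smoothness bound L; the schedule depends only
# on L, not on any quantity computed while optimizing.
L_bound = 4.0
schedule = [1.0 / L_bound] * 100   # decided before any gradient is seen

x = 0.0
for eta in schedule:
    x -= eta * grad(x)             # plain gradient descent with the fixed schedule
```

Each step contracts the error by a constant factor (here 0.5), so the iterate converges to the minimizer x* = 3 despite the schedule ignoring everything observed along the way.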

We develop a novel framework to study smooth and strongly convex optimization algorithms. Focusing on quadratic functions, we are able to examine optimization algorithms as a recursive application of…
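The "recursive application" view is standard for quadratics: for f(x) = ½ xᵀAx − bᵀx, gradient descent with step η is the repeated application of one fixed linear map, x_{k+1} = (I − ηA)x_k + ηb, so the error satisfies x_{k+1} − x* = (I − ηA)(x_k − x*). A small sketch with illustrative A and b:

```python
import numpy as np

# Quadratic f(x) = 0.5 x^T A x - b^T x; A and b are illustrative.
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
b = np.array([2.0, 1.0])
x_star = np.linalg.solve(A, b)     # minimizer, here (1, 1)

eta = 0.5                          # 1/L with L = lambda_max(A) = 2
M = np.eye(2) - eta * A            # the fixed linear map applied each step

x = np.zeros(2)
for _ in range(60):
    x = M @ x + eta * b            # one recursion step == one GD step
```

Since the eigenvalues of M are 0 and 0.5, the error contracts geometrically and x converges to x*; analyzing the spectrum of such maps is exactly what the quadratic viewpoint buys.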

Many canonical machine learning problems boil down to a convex optimization problem with a finite sum structure. However, whereas much progress has been made in developing faster algorithms for this…

We study the conditions under which one is able to efficiently apply variance-reduction and acceleration schemes on finite sum optimization problems. First, we show that, perhaps surprisingly, the…