Publications
Smooth minimization of nonsmooth functions with parallel coordinate descent methods
TLDR: We study the performance of a family of randomized parallel coordinate descent methods for minimizing the sum of a nonsmooth convex function and a separable convex function.
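The entry above describes a parallel (mini-batch) randomized coordinate descent scheme with a separable nonsmooth term. As a rough illustration only, here is a minimal Python sketch of that style of method applied to the Lasso; the names `parallel_cd_lasso` and `tau` are placeholders, and the simple 1/L_j steps ignore the carefully derived step sizes that the paper uses to make simultaneous updates of many coordinates safe.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * |.| (the separable nonsmooth part of the Lasso)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def parallel_cd_lasso(X, y, lam, tau=8, n_iters=500, seed=0):
    """Mini-batch randomized coordinate descent for
       min_w 0.5*||X w - y||^2 + lam*||w||_1.
    At each iteration, `tau` coordinates are sampled uniformly and updated
    with a prox step using per-coordinate Lipschitz constants ||X_j||^2."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    L = (X ** 2).sum(axis=0) + 1e-12           # coordinate-wise Lipschitz constants
    w = np.zeros(d)
    residual = -y                               # residual = X w - y, with w = 0
    for _ in range(n_iters):
        S = rng.choice(d, size=min(tau, d), replace=False)
        grad = X[:, S].T @ residual             # partial gradients for sampled coords
        w_new = soft_threshold(w[S] - grad / L[S], lam / L[S])
        residual += X[:, S] @ (w_new - w[S])    # keep the residual in sync
        w[S] = w_new
    return w
```

With strongly correlated columns and a large `tau`, these naive per-coordinate steps would have to be shrunk, which is precisely the kind of question the paper's step-size analysis addresses.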
Accelerated, Parallel, and Proximal Coordinate Descent
TLDR: We propose a new randomized coordinate descent method for minimizing the sum of convex functions, each of which depends on only a small number of coordinates.
SDNA: Stochastic Dual Newton Ascent for Empirical Risk Minimization
TLDR: We propose a new algorithm for minimizing regularized empirical loss: Stochastic Dual Newton Ascent (SDNA).
Mind the duality gap: safer rules for the Lasso
TLDR: Screening rules make it possible to discard irrelevant variables early during optimization in Lasso problems and their variants, making solvers faster.
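The duality-gap-based screening idea behind the entry above can be illustrated concretely for the Lasso: any primal iterate yields a dual-feasible point, and the duality gap gives a sphere radius certifying that some coefficients are zero at the optimum. A minimal sketch of this style of test, with the helper name `gap_safe_screen` chosen here for illustration:

```python
import numpy as np

def gap_safe_screen(X, y, lam, w):
    """Given any primal iterate w for the Lasso
       min_w 0.5*||y - X w||^2 + lam*||w||_1,
    return a boolean mask of features that are provably zero at the optimum."""
    residual = y - X @ w
    # Dual-feasible point obtained by rescaling the residual so that
    # ||X^T theta||_inf <= 1.
    corr = X.T @ residual
    theta = residual / max(lam, np.max(np.abs(corr)))
    # Primal and dual objectives give the duality gap.
    primal = 0.5 * residual @ residual + lam * np.abs(w).sum()
    dual = 0.5 * (y @ y) - 0.5 * lam ** 2 * ((theta - y / lam) @ (theta - y / lam))
    gap = max(primal - dual, 0.0)
    # Gap-based sphere radius and the screening test.
    radius = np.sqrt(2.0 * gap) / lam
    col_norms = np.sqrt((X ** 2).sum(axis=0))
    return np.abs(X.T @ theta) + radius * col_norms < 1.0
```

As the iterate improves, the gap shrinks, the sphere tightens, and more features can be discarded, which is what makes such rules useful inside a solver.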
Gap Safe screening rules for sparsity enforcing penalties
TLDR: In high-dimensional regression settings, sparsity-enforcing penalties have proved useful for regularizing the data-fitting term.
GAP Safe screening rules for sparse multi-task and multi-class models
TLDR: In this paper we derive new safe rules for generalized linear models regularized with ℓ1 and ℓ1/ℓ2 norms.
Joint quantile regression in vector-valued RKHSs
TLDR: This paper introduces a novel framework for joint quantile regression based on vector-valued RKHSs.
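For context on the objective underlying the entry above, here is a minimal sketch of the pinball (quantile) loss that quantile regression minimizes; the helper name `pinball_loss` is illustrative, and the paper's vector-valued RKHS machinery for fitting several quantile levels jointly is not reproduced here.

```python
import numpy as np

def pinball_loss(residual, tau):
    """Pinball (quantile) loss rho_tau(r) = max(tau*r, (tau-1)*r);
    minimizing its average over constants yields the tau-th quantile."""
    residual = np.asarray(residual, dtype=float)
    return np.maximum(tau * residual, (tau - 1.0) * residual)

# Averaging this loss over several quantile levels is the kind of objective
# a joint quantile regression model fits simultaneously.
r = np.array([-1.0, 0.5, 2.0])
print([pinball_loss(r, t).mean() for t in (0.1, 0.5, 0.9)])
```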
Fast distributed coordinate descent for non-strongly convex losses
TLDR: We propose an efficient distributed randomized coordinate descent method for minimizing regularized non-strongly convex loss functions with an O(1/k²) convergence rate.
A Smooth Primal-Dual Optimization Framework for Nonsmooth Composite Convex Minimization
TLDR: We propose a new first-order primal-dual optimization framework for a convex optimization template with broad applications.
Adaptive restart of accelerated gradient methods under local quadratic growth condition
By analyzing accelerated proximal gradient methods under a local quadratic growth condition, we show that restarting these algorithms at any frequency gives a globally linearly convergent algorithm.
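As a rough illustration of the restart idea in the entry above, here is a minimal FISTA-style accelerated proximal gradient loop with an objective-value restart; the names `fista_restart`, `grad_f`, `prox_g`, and `obj` are placeholders, and this is the common adaptive-restart variant rather than the paper's fixed-frequency scheme or its quadratic-growth analysis.

```python
import numpy as np

def fista_restart(grad_f, prox_g, obj, x0, step, n_iters=500):
    """Accelerated proximal gradient (FISTA) with an objective-value restart.
    grad_f(x): gradient of the smooth part; prox_g(v, s): prox of s*g at v;
    obj(x): full objective f(x) + g(x), used only to trigger the restart."""
    x_prev = np.array(x0, dtype=float)
    y = x_prev.copy()
    t = 1.0
    obj_prev = obj(x_prev)
    for _ in range(n_iters):
        x = prox_g(y - step * grad_f(y), step)       # proximal gradient step at y
        obj_x = obj(x)
        if obj_x > obj_prev:                          # objective went up: reset momentum
            t, y = 1.0, x.copy()
        else:                                         # usual FISTA momentum update
            t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x + ((t - 1.0) / t_next) * (x - x_prev)
            t = t_next
        x_prev, obj_prev = x, obj_x
    return x_prev
```

The point made in the paper is that under local quadratic growth, any such restarting schedule turns the accelerated method into a globally linearly convergent one; the adaptive trigger above is just one convenient way to restart in practice.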