Publications
Inexact Successive Quadratic Approximation for Regularized Optimization
In this work, we present a global analysis of the iteration complexity of inexact successive quadratic approximation methods, showing that an inexact subproblem solution within a fixed multiplicative precision of optimality suffices to guarantee the same order of convergence rate as the exact version, with complexity related in an intuitive way to the measure of inexactness.
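For context, these methods minimize $\phi(x) = f(x) + \psi(x)$, with $f$ smooth and $\psi$ convex (possibly nonsmooth), by approximately solving a quadratic model at each iterate $x_k$. One standard way to formalize "fixed multiplicative precision" (a common criterion in this literature; the paper's exact condition may differ in details) is to define

$$Q_k(d) = \nabla f(x_k)^\top d + \tfrac{1}{2}\, d^\top H_k d + \psi(x_k + d) - \psi(x_k), \qquad Q_k^* = \inf_d Q_k(d),$$

and accept a step $d_k$ when

$$Q_k(d_k) - Q_k^* \le \eta\,\bigl(Q_k(0) - Q_k^*\bigr) = -\eta\, Q_k^*, \qquad \eta \in [0,1),$$

so the subproblem need only be solved to a fixed fraction of its attainable decrease, independently of $k$.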
Random Permutations Fix a Worst Case for Cyclic Coordinate Descent
Variants of the coordinate descent approach for minimizing a nonlinear function are distinguished in part by the order in which coordinates are considered for relaxation. Three common orderings are…
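A minimal sketch (not code from the paper) contrasting a fixed cyclic order with a fresh random permutation per epoch, on a synthetic convex quadratic with exact coordinate-wise line searches; the matrix, seed, and epoch count are invented:

```python
# Toy comparison of coordinate orderings for minimizing f(x) = 1/2 x'Ax - b'x.
# Illustrative only: problem data and iteration counts are made up.
import numpy as np

def coordinate_descent(A, b, ordering, epochs=50, seed=0):
    n = len(b)
    x = np.zeros(n)
    rng = np.random.default_rng(seed)
    for _ in range(epochs):
        order = rng.permutation(n) if ordering == "permuted" else np.arange(n)
        for i in order:
            # Exact line search along e_i (closed form for a quadratic).
            x[i] += (b[i] - A[i] @ x) / A[i, i]
    return x

rng = np.random.default_rng(1)
M = rng.standard_normal((60, 20))
A = M.T @ M / 60 + 0.05 * np.eye(20)   # symmetric positive definite
b = rng.standard_normal(20)
for ordering in ("cyclic", "permuted"):
    x = coordinate_descent(A, b, ordering)
    print(ordering, "f(x) =", 0.5 * x @ A @ x - b @ x)
```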
A Distributed Quasi-Newton Algorithm for Empirical Risk Minimization with Nonsmooth Regularization
We propose a communication- and computation-efficient distributed optimization algorithm using second-order information for solving ERM problems with a nonsmooth regularization term.
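A rough sketch of the distributed structure only; this is not the paper's algorithm, which handles nonsmooth regularizers, while the toy below uses a smooth $\ell_2$-regularized logistic loss. Workers hold data shards and contribute local gradients, summed here in place of an allreduce, and a standard L-BFGS two-loop recursion supplies the second-order direction. All data, sizes, and the unit step are invented:

```python
# Distributed ERM sketch: shard gradients are summed (standing in for an
# allreduce) and fed to an L-BFGS two-loop recursion. Toy data only; the
# paper additionally handles a nonsmooth regularization term.
import numpy as np

def local_grad(w, X, y):
    # Gradient of sum_i log(1 + exp(-y_i x_i'w)) over one shard.
    p = 1.0 / (1.0 + np.exp(-y * (X @ w)))
    return X.T @ ((p - 1.0) * y)

def lbfgs_direction(g, mem):
    # Two-loop recursion: approximates (inverse Hessian) @ g from (s, y) pairs.
    q, coeffs = g.copy(), []
    for s, yv in reversed(mem):
        rho = 1.0 / (yv @ s)
        a = rho * (s @ q)
        q -= a * yv
        coeffs.append((rho, s, yv, a))
    if mem:
        s, yv = mem[-1]
        q *= (s @ yv) / (yv @ yv)      # initial Hessian scaling
    for rho, s, yv, a in reversed(coeffs):
        q += s * (a - rho * (yv @ q))
    return q

rng = np.random.default_rng(0)
n, d, workers, lam = 800, 10, 4, 0.1
X = rng.standard_normal((n, d))
y = np.sign(X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n))
shards = np.array_split(np.arange(n), workers)

w, g, mem = np.zeros(d), None, []
for _ in range(30):
    g_new = sum(local_grad(w, X[s], y[s]) for s in shards) / n + lam * w
    if g is not None and (w - w_prev) @ (g_new - g) > 1e-10:
        mem = (mem + [(w - w_prev, g_new - g)])[-10:]   # keep last 10 pairs
    w_prev, g = w.copy(), g_new
    w = w - lbfgs_direction(g, mem)    # unit step; real code uses a line search
print("final loss:", np.mean(np.log1p(np.exp(-y * (X @ w)))) + 0.5 * lam * w @ w)
```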
Analyzing random permutations for cyclic coordinate descent
We consider coordinate descent methods on convex quadratic problems, in which exact line searches are performed at each iteration.
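This setting is analytically convenient because the exact line search has a closed form: for $f(x) = \tfrac12 x^\top A x - b^\top x$ with $A$ symmetric positive definite, the step along coordinate direction $e_i$ is

$$\alpha^* = \arg\min_\alpha f(x + \alpha e_i) = -\frac{[\nabla f(x)]_i}{A_{ii}} = \frac{b_i - a_i^\top x}{A_{ii}},$$

where $a_i$ denotes the $i$th row of $A$; each coordinate update is therefore a single inner product and a division.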
Using Neural Networks to Detect Line Outages from PMU Data
We propose an approach based on neural networks and the AC power flow equations to identify single- and double-line outages in a power grid using information from phasor measurement unit (PMU) sensors.
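A toy version of the classification framing, not the paper's network or features: a vector of PMU-style measurements is the input and the index of the outaged line is the label. The synthetic data below is invented; in the paper the training data comes from AC power flow solutions:

```python
# Toy framing only: synthetic "PMU" feature vectors labeled by which line is
# out. The paper derives features from AC power flow solutions; these are fake.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_lines, n_feats, n_samples = 8, 30, 2000
centers = rng.standard_normal((n_lines, n_feats))   # one regime per outage
labels = rng.integers(0, n_lines, n_samples)
X = centers[labels] + 0.3 * rng.standard_normal((n_samples, n_feats))

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```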
Predicting kinase inhibitors using bioactivity matrix derived informer sets
We compare different ways of using chemogenomic data to choose a small informer set of compounds based on previously measured bioactivity data.
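As a generic illustration of informer-set selection (plainly not the specific criteria the paper compares): greedily add the compound whose activity profile covers the most kinase targets not yet covered by the set. The bioactivity matrix below is random:

```python
# Generic greedy coverage illustration, not the paper's selection methods.
import numpy as np

rng = np.random.default_rng(0)
n_compounds, n_kinases = 200, 50
bioactivity = rng.random((n_compounds, n_kinases)) < 0.1   # True = active

informers, covered = [], np.zeros(n_kinases, dtype=bool)
for _ in range(10):                       # informer set of size 10
    gains = (bioactivity & ~covered).sum(axis=1)
    best = int(np.argmax(gains))
    informers.append(best)
    covered |= bioactivity[best]
print("informers:", informers, "| kinases covered:", int(covered.sum()))
```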
First-Order Algorithms Converge Faster than $O(1/k)$ on Convex Problems
It is well known that both gradient descent and stochastic coordinate descent achieve a global convergence rate of $O(1/k)$ in the objective value when applied to minimize a Lipschitz-continuously differentiable, unconstrained convex function.
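The baseline being sharpened is the textbook guarantee for gradient descent with step size $1/L$ on an $L$-smooth convex $f$ with minimizer $x^*$:

$$f(x_k) - f(x^*) \;\le\; \frac{L\,\|x_0 - x^*\|^2}{2k}.$$

The title's claim is that the iterates in fact beat this worst-case rate, i.e., the $O(1/k)$ bound is not tight for the sequences these methods actually generate.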
A Distributed Quasi-Newton Algorithm for Primal and Dual Regularized Empirical Risk Minimization
We propose a communication- and computation-efficient distributed optimization algorithm using second-order information for solving empirical risk minimization (ERM) problems with a nonsmooth regularization term.
Inexact Variable Metric Stochastic Block-Coordinate Descent for Regularized Optimization
This work proposes an inexact randomized block-coordinate descent method based on a regularized quadratic subproblem, in which the quadratic term can vary from iteration to iteration: a variable metric.
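For the common special case $\psi = \lambda\|\cdot\|_1$ with a diagonal metric, the block subproblem separates and has a closed form (scaled soft-thresholding), which makes for a compact sketch; the problem data and the fixed metric below are invented, and a genuinely variable metric would update $h$ each iteration:

```python
# Randomized block steps for min 1/2||Ax-b||^2 + lam*||x||_1 with a diagonal
# quadratic term H = diag(h). Illustrative only; data and metric are made up.
import numpy as np

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

rng = np.random.default_rng(0)
m, n, lam = 30, 12, 0.5
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
h = np.full(n, np.linalg.norm(A, 2) ** 2)  # fixed here; variable in the method
x = np.zeros(n)
for _ in range(300):
    block = rng.choice(n, size=4, replace=False)
    g = A.T @ (A @ x - b)
    # Closed-form block subproblem:
    #   min_d  g_B'd + 1/2 sum_i h_i d_i^2 + lam*||x_B + d||_1.
    x[block] = soft(x[block] - g[block] / h[block], lam / h[block])
print("nonzeros in solution:", int(np.count_nonzero(x)))
```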
Successive Quadratic Approximation for Regularized Optimization
Successive quadratic approximations, or second-order proximal methods, are useful for minimizing functions that are a sum of a smooth part and a convex, possibly nonsmooth part that promotes regularization…
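Two standard special cases locate the method in the proximal family: with $H_k = L I$ the step reduces to the proximal gradient update, while $H_k \approx \nabla^2 f(x_k)$ gives a proximal Newton method:

$$x_{k+1} = \arg\min_x\; \nabla f(x_k)^\top (x - x_k) + \tfrac12 (x - x_k)^\top H_k (x - x_k) + \psi(x), \qquad H_k = LI \;\Rightarrow\; x_{k+1} = \operatorname{prox}_{\psi/L}\!\bigl(x_k - \tfrac{1}{L}\nabla f(x_k)\bigr).$$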