Publications
Introduction to Derivative-Free Optimization
TLDR
The first contemporary, comprehensive treatment of optimization without derivatives, this book explains how sampling and modeling techniques are used in derivative-free methods and how these methods are designed to solve optimization problems efficiently and rigorously.
SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient
TLDR
The StochAstic Recursive grAdient algoritHm (SARAH) and its practical variant SARAH+ are proposed as a novel approach to finite-sum minimization problems, and a linear convergence rate is proven under a strong convexity assumption.
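The key mechanism is the recursive gradient estimate, which corrects the previous estimate with a single sampled component instead of recomputing a full gradient at every step. Below is a minimal Python sketch of that inner/outer loop; the callables grad_full and grad_i, the step size, and the loop lengths are illustrative placeholders, not the paper's parameter choices.

```python
import numpy as np

def sarah(grad_full, grad_i, w0, n, lr=0.01, inner_steps=200, outer_epochs=30, seed=0):
    """Minimal sketch of SARAH for minimizing (1/n) * sum_i f_i(w).

    grad_full(w)  -> full gradient at w
    grad_i(w, i)  -> gradient of the i-th component f_i at w
    (User-supplied placeholders, not the paper's notation.)
    """
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, float).copy()
    for _ in range(outer_epochs):
        w_prev = w.copy()
        v = grad_full(w)               # full-gradient anchor at the epoch start
        w = w - lr * v
        for _ in range(inner_steps):
            i = int(rng.integers(n))   # sample one component uniformly
            # Recursive estimate: correct the previous v rather than recompute it.
            v = grad_i(w, i) - grad_i(w_prev, i) + v
            w_prev = w
            w = w - lr * v
    return w

# Example on a least-squares finite sum: f_i(w) = 0.5 * (a_i^T w - b_i)^2.
rng = np.random.default_rng(1)
A, b = rng.standard_normal((200, 10)), rng.standard_normal(200)
w_hat = sarah(grad_full=lambda w: A.T @ (A @ w - b) / len(b),
              grad_i=lambda w, i: (A[i] @ w - b[i]) * A[i],
              w0=np.zeros(10), n=len(b))
print(np.linalg.norm(A.T @ (A @ w_hat - b)) / len(b))   # residual gradient norm
```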
Efficient SVM Training Using Low-Rank Kernel Representations
TLDR
This work shows that for a low-rank kernel matrix it is possible to design an interior point method (IPM) with better storage requirements and computational complexity, and derives an upper bound on the change in the objective function value in terms of the approximation error and the number of active constraints (support vectors).
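To see why a low-rank kernel representation helps an IPM, note that each interior-point iteration requires solves with a diagonal-plus-low-rank matrix, and the Sherman-Morrison-Woodbury identity reduces such a solve to a small r x r system. The NumPy sketch below illustrates this standard trick; the matrix names, sizes, and the exact system solved are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
m, r = 500, 20
V = rng.standard_normal((m, r))          # low-rank kernel factor, K ~= V @ V.T
d = rng.uniform(1.0, 2.0, size=m)        # positive diagonal (e.g. from barrier terms)
b = rng.standard_normal(m)

# Woodbury: (D + V V^T)^{-1} b = D^{-1} b - D^{-1} V (I + V^T D^{-1} V)^{-1} V^T D^{-1} b
Dinv_b = b / d
Dinv_V = V / d[:, None]
small = np.eye(r) + V.T @ Dinv_V          # only an r x r system to factor
x = Dinv_b - Dinv_V @ np.linalg.solve(small, V.T @ Dinv_b)

# Check against the dense O(m^3) solve.
x_dense = np.linalg.solve(np.diag(d) + V @ V.T, b)
print(np.allclose(x, x_dense))
```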
Global Convergence of General Derivative-Free Trust-Region Algorithms to First- and Second-Order Critical Points
TLDR
This paper proves global convergence to first- and second-order stationary points for a class of derivative-free trust-region methods for unconstrained optimization, based on the sequential minimization of quadratic models built by evaluating the objective function at sample sets.
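For reference, the generic ingredients of such a model-based trust-region iteration can be written as follows (standard form, not a restatement of the paper's exact notation or assumptions): a quadratic model determined by function values on a sample set, a trust-region subproblem, and the acceptance ratio that drives the radius update.

```latex
\[
\begin{aligned}
  m_k(x_k + s) &= c_k + g_k^{\top} s + \tfrac{1}{2}\, s^{\top} H_k s,
    &&\text{with } m_k(y) = f(y) \ \text{for all } y \in Y_k,\\
  s_k &= \arg\min_{\|s\| \le \Delta_k} m_k(x_k + s),
    &&\rho_k = \frac{f(x_k) - f(x_k + s_k)}{m_k(x_k) - m_k(x_k + s_k)}.
\end{aligned}
\]
```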
Recent progress in unconstrained nonlinear optimization without derivatives
TLDR
This paper introduces a new class of derivative-free methods for unconstrained optimization within a trust-region framework, focusing on techniques that ensure a suitable “geometric quality” of the models considered.
Sparse Inverse Covariance Selection via Alternating Linearization Methods
TLDR
This paper proposes a first-order method based on an alternating linearization technique that exploits the problem's special structure; in particular, the subproblems solved in each iteration have closed-form solutions.
Fast alternating linearization methods for minimizing the sum of two convex functions
TLDR
The algorithms in this paper are Gauss-Seidel-type methods, in contrast to those proposed by Goldfarb and Ma in "Fast multiple splitting algorithms for convex optimization" (Columbia University, 2009), which are Jacobi-type methods.
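A rough sketch of the alternating pattern on a lasso-type instance is given below: each half-step keeps one of the two functions exact and replaces the other by a linearization plus a proximal term, which reduces to a prox computation, and the second half-step uses the freshly updated iterate, which is what makes the scheme Gauss-Seidel rather than Jacobi. The problem data, step size, and subgradient bookkeeping are illustrative simplifications, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
b = rng.standard_normal(40)
lam = 0.1
n_feat = A.shape[1]
mu = 1.0 / np.linalg.norm(A, 2) ** 2        # step tied to the Lipschitz constant of grad f

f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)          # smooth piece
g = lambda x: lam * np.sum(np.abs(x))                 # nonsmooth piece
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda z: np.sign(z) * np.maximum(np.abs(z) - mu * lam, 0.0)   # prox of mu*g
M = mu * (A.T @ A) + np.eye(n_feat)
prox_f = lambda z: np.linalg.solve(M, z + mu * (A.T @ b))               # prox of mu*f

x = y = np.zeros(n_feat)
gamma = np.zeros(n_feat)                    # subgradient of g carried between half-steps
for _ in range(300):
    # Half-step 1: linearize g (via the carried subgradient gamma), keep f exact.
    x = prox_f(y - mu * gamma)
    # Half-step 2: linearize f at the *fresh* x (Gauss-Seidel), keep g exact.
    z = x - mu * grad_f(x)
    y = prox_g(z)
    gamma = (z - y) / mu                    # valid subgradient of g at the new y
print(f(y) + g(y))
```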
Stochastic optimization using a trust-region method and random models
TLDR
A trust-region model-based algorithm for solving unconstrained stochastic optimization problems is proposed that utilizes random models of an objective function f(x), obtained from stochastic observations of the function or its gradient.
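The sketch below shows a heavily simplified version of such a loop: a quadratic model is fit by least squares to function values at randomly sampled points, a crude search approximates the trust-region subproblem, and the usual ratio test drives acceptance and the radius update. The paper's algorithm additionally imposes probabilistic accuracy requirements on the random models and handles noisy observations; none of that machinery is reproduced here.

```python
import numpy as np

def quad_features(S):
    """Features 1, s_i, s_i*s_j (i <= j) for each row s of S."""
    n = S.shape[1]
    cross = [S[:, i] * S[:, j] for i in range(n) for j in range(i, n)]
    return np.column_stack([np.ones(len(S)), S, *cross])

def trust_region_random_model(f, x0, delta=1.0, iters=50, eta=0.1, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    n = len(x)
    p = 1 + n + n * (n + 1) // 2               # number of model coefficients
    for _ in range(iters):
        # Fit a quadratic model to randomly sampled function values.
        S = rng.uniform(-delta, delta, size=(2 * p, n))
        vals = np.array([f(x + s) for s in S])
        coef, *_ = np.linalg.lstsq(quad_features(S), vals, rcond=None)
        # Crude subproblem solve: best of many candidate steps in the ball.
        C = rng.uniform(-delta, delta, size=(500, n))
        C = np.vstack([np.zeros((1, n)), C[np.linalg.norm(C, axis=1) <= delta]])
        m = quad_features(C) @ coef
        j = int(np.argmin(m))
        s = C[j]
        pred = m[0] - m[j]                     # predicted decrease (s = 0 is a candidate)
        actual = f(x) - f(x + s)
        rho = actual / pred if pred > 0 else 0.0
        if rho >= eta:                         # accept the step, enlarge the region
            x, delta = x + s, 2.0 * delta
        else:                                  # reject and shrink
            delta *= 0.5
    return x

# Example: minimize a smooth test function without using its derivatives.
print(trust_region_random_model(lambda z: (z[0] - 1) ** 2 + 3 * (z[1] + 2) ** 2,
                                x0=[5.0, 5.0]))
```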
Efficient block-coordinate descent algorithms for the Group Lasso
TLDR
A general version of the Block Coordinate Descent (BCD) algorithm for the Group Lasso is proposed that employs an efficient approach for solving each subproblem exactly, and it is more efficient in practice than the algorithm implemented by Tseng and Yun.
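A minimal sketch of block coordinate descent for the group lasso is given below, under the simplifying assumption that each block of columns is orthonormal, so the block subproblem has a closed-form group soft-thresholding solution; the paper's algorithm solves the exact subproblem without this assumption.

```python
import numpy as np

def group_lasso_bcd(X, y, groups, lam, sweeps=100):
    """BCD for min_beta 0.5*||y - X beta||^2 + lam * sum_j ||beta_j||_2.

    groups: list of index arrays, one per block of columns of X.
    Assumes each block X_j has orthonormal columns (X_j^T X_j = I).
    """
    beta = np.zeros(X.shape[1])
    for _ in range(sweeps):
        for idx in groups:
            # Partial residual: leave the current block out.
            r = y - X @ beta + X[:, idx] @ beta[idx]
            z = X[:, idx].T @ r
            nz = np.linalg.norm(z)
            # Group soft-thresholding (closed form under the orthonormality assumption).
            beta[idx] = 0.0 if nz <= lam else (1.0 - lam / nz) * z
    return beta

# Small synthetic example with two orthonormal blocks.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((50, 6)))    # orthonormal columns
y = Q[:, :3] @ np.array([2.0, -1.0, 0.5]) + 0.01 * rng.standard_normal(50)
print(group_lasso_bcd(Q, y, groups=[np.arange(3), np.arange(3, 6)], lam=0.5))
```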
Stochastic Recursive Gradient Algorithm for Nonconvex Optimization
TLDR
This paper studies and analyzes the mini-batch version of the StochAstic Recursive grAdient algoritHm (SARAH), a method employing the stochastic recursive gradient, for empirical loss minimization with nonconvex losses, and provides a sublinear convergence rate, as well as a linear convergence rate for gradient-dominated functions.