A derivative-free trust-region algorithm for composite nonsmooth optimization
The derivative-free trust-region algorithm proposed by Conn et al. (SIAM J Optim 20:387–415, 2009) is adapted to the problem of minimizing a composite function $\varPhi$ …
Tensor Methods for Finding Approximate Stationary Points of Convex Functions.
In this paper we consider the problem of finding $\epsilon$-approximate stationary points of convex functions that are $p$-times differentiable with $\nu$-Hölder continuous $p$th derivatives. We …
Tensor Methods for Minimizing Functions with Hölder Continuous Higher-Order Derivatives
In this paper we study $p$-order methods for unconstrained minimization of convex functions that are $p$-times differentiable with $\nu$-Hölder continuous $p$th derivatives. We propose tensor schemes …
Nonlinear Stepsize Control Algorithms: Complexity Bounds for First- and Second-Order Optimality
A nonlinear stepsize control framework for unconstrained optimization, generalizing several trust-region and regularization algorithms.
On inexact solution of auxiliary problems in tensor methods for convex optimization
We study the auxiliary problems that appear in $p$-order tensor methods for unconstrained minimization of convex functions with $\nu$-Hölder continuous $p$th derivatives.
Regularized Newton Methods for Minimizing Functions with Hölder Continuous Hessians
In this paper, we study the regularized second-order methods for unconstrained minimization of a twice-differentiable (convex) objective function.
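The regularized second-order step behind entries like this one can be illustrated in one dimension. The following is a minimal sketch, not the paper's algorithm: a cubic-regularized Newton iteration (the $\nu = 1$ Hölder case), where the function names and the regularization constant `M` are illustrative assumptions.

```python
import math

def cubic_reg_newton_1d(grad, hess, x0, M=2.0, tol=1e-8, max_iter=200):
    """Sketch of a cubic-regularized Newton method in 1D (nu = 1 case).

    Each step minimizes the local model g*h + 0.5*H*h**2 + (M/3)*|h|**3,
    where M is an assumed constant playing the role of a Hölder/Lipschitz
    bound on the Hessian.  In 1D the model minimizer solves the quadratic
    g + H*h + M*|h|*h = 0 in closed form.
    """
    x = x0
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        if abs(g) < tol:  # approximate stationarity reached
            break
        # closed-form minimizer of the cubic model (assumes H >= 0, i.e. convex f)
        h = -math.copysign((math.sqrt(H * H + 4 * M * abs(g)) - H) / (2 * M), g)
        x += h
    return x

# Example: minimize f(x) = x**4 / 4 (convex, with degenerate Hessian at 0)
x_star = cubic_reg_newton_1d(grad=lambda x: x**3, hess=lambda x: 3 * x**2, x0=2.0)
```

Unlike a plain Newton step, the cubic term keeps the step well defined even where the Hessian degenerates, which is the regime these regularized methods are designed for.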
Accelerated Regularized Newton Methods for Minimizing Composite Convex Functions
We study accelerated Regularized Newton Methods for minimizing objectives formed as a sum of two functions: one is convex and twice differentiable with Hölder-continuous Hessian, and the other is a simple closed convex function.
Globally Convergent Second-order Schemes for Minimizing Twice-differentiable Functions
In this paper, we suggest new universal second-order methods for unconstrained minimization of a twice-differentiable (convex or non-convex) objective function. For the current function, these methods …
On the worst-case complexity of nonlinear stepsize control algorithms for convex unconstrained optimization
A Nonlinear Stepsize Control (NSC) framework has been proposed by Toint [Nonlinear stepsize control, trust regions and regularizations for unconstrained optimization].
On the Complexity of an Augmented Lagrangian Method for Nonconvex Optimization
In this paper we study the worst-case complexity of an inexact Augmented Lagrangian method for nonconvex constrained problems. Assuming that the penalty parameters are bounded, we prove a complexity …