Corpus ID: 15530110

Simple Complexity Analysis of Simplified Direct Search

@article{Konevcny2014SimpleCA,
  title={Simple Complexity Analysis of Simplified Direct Search},
  author={Jakub Kone{\v{c}}n{\'y} and Peter Richt{\'a}rik},
  journal={arXiv: Optimization and Control},
  year={2014}
}
We consider the problem of unconstrained minimization of a smooth function in the derivative-free setting. In particular, we propose and study a simplified variant of the direct search method (of directional type), which we call simplified direct search (SDS). Unlike standard direct search methods, which depend on a large number of parameters that need to be tuned, SDS depends on a single scalar parameter only. Despite relevant research activity in direct search methods spanning several… 
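As a rough illustration of the kind of method analyzed in the paper, the following is a minimal sketch of a directional direct-search iteration driven by a single stepsize parameter: it polls a fixed positive spanning set, accepts a trial point only under a sufficient-decrease test, and halves the stepsize on unsuccessful iterations. The direction set, the decrease threshold c*alpha^2, and the halving factor below are illustrative assumptions, not the exact SDS specification from the paper.

import numpy as np

def simplified_direct_search(f, x0, alpha0=1.0, tol=1e-6, max_evals=10_000):
    """Illustrative directional direct search with a single stepsize parameter.

    Polls the positive and negative coordinate directions, accepts a trial
    point only if it achieves sufficient decrease (c * alpha**2), and halves
    the stepsize otherwise. A generic sketch, not the paper's exact rule.
    """
    x = np.asarray(x0, dtype=float)
    n = x.size
    alpha, c, evals = alpha0, 1e-4, 0
    fx = f(x); evals += 1
    D = np.vstack([np.eye(n), -np.eye(n)])        # positive spanning set
    while alpha > tol and evals < max_evals:
        improved = False
        for d in D:                               # poll step
            trial = x + alpha * d
            ft = f(trial); evals += 1
            if ft < fx - c * alpha**2:            # sufficient decrease test
                x, fx, improved = trial, ft, True
                break
        if not improved:
            alpha *= 0.5                          # unsuccessful iteration: shrink stepsize
    return x, fx

# usage: minimize a simple smooth quadratic
x_star, f_star = simplified_direct_search(lambda z: np.sum((z - 1.0)**2), np.zeros(3))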


Direct-Search for a Class of Stochastic Min-Max Problems
This work designs a novel algorithm in the context of min-max saddle point games, in which the min and the max player are updated sequentially, and proves convergence of this algorithm under mild assumptions; it is the first to address the convergence of a direct-search method for min-max objectives in a stochastic setting.
Worst-Case Complexity Bounds of Directional Direct-Search Methods for Multiobjective Optimization
This work focuses on a particular instance of Direct Multisearch, which considers a more strict criterion for accepting new nondominated points, and establishes a better worst-case complexity bound, simply proportional to the square of the inverse of the threshold, for driving the same criticality measure below the considered threshold.
On the optimal order of worst case complexity of direct search
It is proved that such a factor of n^2 is optimal in these worst case complexity bounds, in the sense that no other positive spanning set will yield a better order of n.
A second-order globally convergent direct-search method and its worst-case complexity
A weak second-order convergence result related to a criticality measure defined along the directions used throughout the iterations of a general class of direct-search methods is proved.
Efficient global unconstrained black box optimization
For the unconstrained optimization of black box functions, this paper presents a new stochastic algorithm called VSBBO. In practice, VSBBO matches the quality of other state-of-the-art algorithms for
Stochastic Three Points Method for Unconstrained Smooth Minimization
This paper designs a novel randomized derivative-free algorithm --- the stochastic three points (STP) method --- and analyzes its iteration complexity, studying non-convex, convex and strongly convex cases.
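The three-point step referred to here is simple enough to sketch: at each iteration one samples a random direction, evaluates the objective at the current point and at two symmetric trial points along that direction, and keeps the best of the three. The sphere-uniform sampling and the decaying stepsize schedule below are illustrative assumptions rather than that paper's exact method.

import numpy as np

def stp(f, x0, n_iters=1000, alpha0=1.0, rng=np.random.default_rng(0)):
    """Sketch of a stochastic three-points step: compare f at x, x + a*s and
    x - a*s for a random direction s and keep the best of the three points."""
    x = np.asarray(x0, dtype=float)
    for k in range(1, n_iters + 1):
        alpha = alpha0 / np.sqrt(k)            # illustrative stepsize schedule
        s = rng.standard_normal(x.size)
        s /= np.linalg.norm(s)                 # direction on the unit sphere
        candidates = [x, x + alpha * s, x - alpha * s]
        x = min(candidates, key=f)             # keep the best of the three points
    return x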
Worst-case complexity bounds of directional direct-search methods for multiobjective derivative-free optimization.
This work analyzes the worst-case complexity of Direct Multisearch in its most general formulation for unconstrained optimization, and focuses on a particular instance of DMS, which considers a more strict criterion for accepting new nondominated points.
Derivative-free optimization methods
This work overviews the primary setting of deterministic methods applied to unconstrained, non-convex optimization problems where the objective function is defined by a deterministic black-box oracle, and discusses developments in randomized methods, methods that assume some additional structure about the objective and methods for handling different types of constraints.
A Stochastic Derivative-Free Optimization Method with Importance Sampling: Theory and Learning to Control
This paper proposes the first derivative-free optimization method with importance sampling, derives new and improved complexity results for non-convex, convex, and strongly convex functions, and tests the method on a collection of continuous control tasks in MuJoCo environments of varying difficulty.
Global Linear Convergence of Evolution Strategies on More Than Smooth Strongly Convex Functions
Almost sure linear convergence and a bound on the expected hitting time are established for an ES, namely the (1+1)-ES with (generalized) one-fifth success rule and an abstract covariance matrix adaptation with bounded condition number, on a broad class of functions.
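For concreteness, a bare-bones (1+1)-ES with a one-fifth-style success rule looks roughly as follows; the update factors are the usual generalized one-fifth choice, and the covariance matrix adaptation mentioned in the summary is omitted, so this is only an illustrative sketch.

import numpy as np

def one_plus_one_es(f, x0, sigma0=1.0, n_iters=2000, rng=np.random.default_rng(0)):
    """Sketch of a (1+1)-ES with a one-fifth-style success rule: the offspring
    replaces the parent only if it is at least as good, and the step size grows
    on success and shrinks on failure (balanced at a 1/5 success rate)."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    sigma = sigma0
    for _ in range(n_iters):
        y = x + sigma * rng.standard_normal(x.size)   # isotropic Gaussian offspring
        fy = f(y)
        if fy <= fx:                                   # success: accept and enlarge step
            x, fx = y, fy
            sigma *= np.exp(0.8)
        else:                                          # failure: shrink step
            sigma *= np.exp(-0.2)
    return x, fx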

References

Showing 1–10 of 25 references
Smoothing and worst-case complexity for direct-search methods in nonsmooth optimization
Some preliminary numerical experience indicates that the proposed class of smoothing direct-search methods leads to better values of the objective function, pushing in some cases the optimization further, apparently without an additional cost in the number of function evaluations.
Worst case complexity of direct search
  • L. Vicente • EURO J. Comput. Optim. • 2013
In this paper, we prove that the broad class of direct-search methods of directional type based on imposing sufficient decrease to accept new iterates shares the worst case complexity bound of
Worst case complexity of direct search under convexity
In this paper we prove that the broad class of direct-search methods of directional type, based on imposing sufficient decrease to accept new iterates, exhibits the same worst case complexity bound
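For orientation, the worst-case bounds established in these two papers for directional direct search based on sufficient decrease take roughly the following form (summarized here as a paraphrase, with constants depending on the Lipschitz constant of the gradient and on the choice of positive spanning set absorbed into the O-notation):

$$
\#\text{function evaluations} \;=\;
\begin{cases}
\mathcal{O}\!\left(n^{2}\,\epsilon^{-2}\right) & \text{smooth nonconvex, to drive } \|\nabla f(x_k)\| \le \epsilon,\\
\mathcal{O}\!\left(n^{2}\,\epsilon^{-1}\right) & \text{smooth convex, to drive } f(x_k)-f^{*} \le \epsilon,\\
\mathcal{O}\!\left(n^{2}\,\log \epsilon^{-1}\right) & \text{smooth strongly convex.}
\end{cases}
$$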
Optimization by Direct Search: New Perspectives on Some Classical and Modern Methods
This review begins by briefly summarizing the history of direct search methods and considering the special properties of problems for which they are well suited, then turns to a broad class of methods for which the underlying principles allow generalization to handle bound constraints and linear constraints.
On the Oracle Complexity of First-Order and Derivative-Free Algorithms for Smooth Nonconvex Minimization
The (optimal) function/gradient evaluations worst-case complexity analysis available for the adaptive regularization algorithms with cubics (ARC) for nonconvex smooth unconstrained optimization is
Optimal Rates for Zero-Order Convex Optimization: The Power of Two Function Evaluations
Focusing on nonasymptotic bounds on convergence rates, it is shown that if pairs of function values are available, algorithms for d-dimensional optimization that use gradient estimates based on random perturbations suffer a factor of at most √d in convergence rate over traditional stochastic gradient methods.
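The two-evaluation gradient estimate underlying this kind of result can be sketched as follows; the sphere-uniform direction and the scaling by the dimension d are one common convention (conventions differ across papers), so treat this as an illustrative sketch rather than that paper's exact estimator.

import numpy as np

def two_point_gradient_estimate(f, x, delta=1e-4, rng=np.random.default_rng(0)):
    """Sketch of a two-point gradient estimator along a random direction u.

    Uses a central difference (f(x + delta*u) - f(x - delta*u)) / (2*delta);
    the factor d makes the estimator approximately unbiased when u is drawn
    uniformly from the unit sphere."""
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                      # random direction on the unit sphere
    return d * (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u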
Direct Search Based on Probabilistic Descent
This paper analyzes direct-search algorithms when the polling directions are probabilistic descent, meaning that with a certain probability at least one of them is of descent type, and shows a global decaying rate of $1/\sqrt{k}$ for the gradient size, with overwhelmingly high probability, matching the corresponding rate for the deterministic versions of the gradient method or of direct search.
Query Complexity of Derivative-Free Optimization
This paper provides lower bounds on the convergence rate of Derivative Free Optimization (DFO) with noisy function evaluations, exposing a fundamental and unavoidable gap between the performance of
Finding Optimal Algorithmic Parameters Using Derivative-Free Optimization
A general framework for identifying locally optimal algorithmic parameters in unconstrained optimization is devised and the derivative-free method chosen to guide the process is the mesh adaptive direct search, a generalization of pattern search methods.
Mesh Adaptive Direct Search Algorithms for Constrained Optimization
The main result of this paper is that the general MADS framework is flexible enough to allow the generation of an asymptotically dense set of refining directions along which the Clarke derivatives are nonnegative.