Corpus ID: 236428551

Full-low evaluation methods for derivative-free optimization

@inproceedings{Berahas2021FulllowEM,
  title={Full-low evaluation methods for derivative-free optimization},
  author={Albert S. Berahas and Oumaima Sohab and Lu{\'i}s Nunes Vicente},
  year={2021}
}
We propose a new class of rigorous methods for derivative-free optimization with the aim of delivering efficient and robust numerical performance for functions of all types, from smooth to non-smooth, and under different noise regimes. To this end, we have developed Full-Low Evaluation methods, organized around two main types of iterations. The first iteration type is expensive in function evaluations, but exhibits good performance in the smooth and non-noisy cases. For the theory, we consider…
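
The two iteration types described in the abstract can be illustrated with a minimal Python sketch. It is only a hedged illustration: it assumes the expensive "Full-Eval" step is a finite-difference gradient step with a backtracking line search (the paper combines finite-difference gradients with quasi-Newton updating) and the cheap "Low-Eval" step is a coordinate-direction poll with a shrinking step size, and the switching test below is a simplified stand-in for the paper's actual criterion.

    import numpy as np

    def full_low_eval(f, x0, max_iters=200, h=1e-6, alpha0=1.0):
        """Sketch of a Full-Low Evaluation loop (illustration only).

        Full-Eval: finite-difference gradient + backtracking line search,
        expensive in evaluations but effective on smooth, noiseless problems.
        Low-Eval: coordinate-direction poll with a shrinking step size,
        cheap per iteration and more robust to noise and non-smoothness.
        """
        n = len(x0)
        x = np.asarray(x0, dtype=float)
        fx = f(x)
        sigma = 1.0                   # Low-Eval step size
        mode = "full"
        for _ in range(max_iters):
            if mode == "full":
                # forward-difference gradient: n extra function evaluations
                g = np.array([(f(x + h * e) - fx) / h for e in np.eye(n)])
                d, alpha = -g, alpha0
                # backtracking (Armijo) line search along the negative gradient
                while f(x + alpha * d) > fx - 1e-4 * alpha * (g @ g) and alpha > 1e-10:
                    alpha *= 0.5
                if alpha > 1e-10:
                    x = x + alpha * d
                    fx = f(x)
                else:
                    mode = "low"      # line search failed: switch to Low-Eval
            else:
                improved = False
                # poll the 2n coordinate directions, accept simple decrease
                for d in np.vstack([np.eye(n), -np.eye(n)]):
                    xt = x + sigma * d
                    ft = f(xt)
                    if ft < fx:
                        x, fx, improved = xt, ft, True
                        break
                if improved:
                    mode = "full"     # success: try an expensive step again
                else:
                    sigma *= 0.5      # unsuccessful poll: shrink the step
        return x, fx
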

References

Showing 1-10 of 46 references
A Theoretical and Empirical Comparison of Gradient Approximations in Derivative-Free Optimization
In this paper, we analyze several methods for approximating the gradient of a function using only function values. These methods include finite differences, linear interpolation, Gaussian smoothing…
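
As a concrete instance of the simplest approximation mentioned above, a forward finite-difference gradient estimate can be built from n extra function evaluations. This is a generic sketch, not code from the paper; the step size h is illustrative and should be chosen with the noise level in mind.

    import numpy as np

    def forward_difference_gradient(f, x, h=1e-6):
        """Approximate the gradient of f at x with forward differences.
        Costs one evaluation per coordinate; accuracy degrades if f is
        noisy or if h is chosen too small or too large."""
        x = np.asarray(x, dtype=float)
        fx = f(x)
        g = np.zeros_like(x)
        for i in range(x.size):
            e = np.zeros_like(x)
            e[i] = 1.0
            g[i] = (f(x + h * e) - fx) / h
        return g

    # example on a smooth quadratic: the true gradient is 2*x
    print(forward_difference_gradient(lambda z: z @ z, np.array([1.0, -2.0])))
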
On the Numerical Performance of Derivative-Free Optimization Methods Based on Finite-Difference Approximations
The goal of this paper is to investigate an approach for derivative-free optimization that has not received sufficient attention in the literature and is yet one of the simplest to implement and…
Direct Search Based on Probabilistic Descent
This paper analyzes direct-search algorithms in which the polling directions are probabilistic descent directions, meaning that with a certain probability at least one of them is a descent direction, and shows a global decay rate of $1/\sqrt{k}$ for the gradient size, with overwhelmingly high probability, matching the corresponding rate for the deterministic versions of the gradient method or of direct search.
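
A minimal sketch of such an iteration is given below, with a small number of random unit polling directions and a sufficient-decrease test; it is only an illustration of the idea, not the algorithm or step-size rules analyzed in the paper.

    import numpy as np

    def random_direct_search(f, x0, sigma=1.0, iters=500, num_dirs=2, seed=0):
        """Direct search polling a few random unit directions per iteration.
        With enough random directions, at least one is a descent direction
        with a certain probability, which is the idea behind probabilistic
        descent."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        fx = f(x)
        for _ in range(iters):
            dirs = rng.standard_normal((num_dirs, x.size))
            dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
            for d in np.vstack([dirs, -dirs]):        # poll both d and -d
                xt = x + sigma * d
                ft = f(xt)
                if ft < fx - 1e-4 * sigma**2:         # sufficient decrease
                    x, fx = xt, ft
                    sigma *= 2.0                      # successful poll: expand
                    break
            else:
                sigma *= 0.5                          # unsuccessful poll: contract
        return x, fx
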
Derivative-Free Optimization of Noisy Functions via Quasi-Newton Methods
A finite-difference quasi-Newton method for the minimization of noisy functions that takes advantage of the scalability and power of BFGS updating, and employs an adaptive procedure for choosing the differencing interval based on the noise estimation techniques of Hamming and of Moré and Wild.
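
The adaptive choice of the differencing interval can be motivated by the standard trade-off between truncation error and noise amplification; the snippet below is a hedged illustration of that trade-off, not the authors' procedure, and assumes a known noise level eps_f and curvature bound M.

    import numpy as np

    def forward_difference_interval(eps_f, M):
        """Interval balancing truncation error (~ M*h/2) against noise
        amplification (~ 2*eps_f/h) in a forward difference; minimizing
        their sum gives h* = 2*sqrt(eps_f / M)."""
        return 2.0 * np.sqrt(eps_f / M)

    # noisy 1-D example: f(t) = t**2 plus noise of size about 1e-6
    eps_f, M = 1e-6, 2.0              # noise level and curvature bound (assumed known here)
    h = forward_difference_interval(eps_f, M)
    rng = np.random.default_rng(0)
    f = lambda t: t**2 + eps_f * rng.uniform(-1.0, 1.0)
    print(h, (f(1.0 + h) - f(1.0)) / h)   # derivative estimate near 2.0
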
Optimization by Direct Search: New Perspectives on Some Classical and Modern Methods
This review begins by briefly summarizing the history of direct search methods and considering the special properties of problems for which they are well suited, then turns to a broad class of methods for which the underlying principles allow generalization to handle bound constraints and linear constraints.
Optimal Rates for Zero-Order Convex Optimization: The Power of Two Function Evaluations
Focusing on nonasymptotic bounds on convergence rates, it is shown that if pairs of function values are available, algorithms for d-dimensional optimization that use gradient estimates based on random perturbations suffer a factor of at most $\sqrt{d}$ in convergence rate over traditional stochastic gradient methods.
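
A sketch of such a two-point estimator follows; the direction distribution, smoothing radius, and averaging are illustrative choices, not the exact construction from the paper.

    import numpy as np

    def two_point_gradient_estimate(f, x, delta=1e-4, rng=None):
        """Gradient estimate from one pair of function values taken along
        a random Gaussian direction (symmetric two-point scheme)."""
        rng = rng or np.random.default_rng()
        u = rng.standard_normal(x.size)
        return (f(x + delta * u) - f(x - delta * u)) / (2.0 * delta) * u

    # averaging many estimates on f(z) = z @ z recovers the gradient 2*x
    x = np.array([1.0, -2.0, 0.5])
    rng = np.random.default_rng(0)
    est = np.mean([two_point_gradient_estimate(lambda z: z @ z, x, rng=rng)
                   for _ in range(20000)], axis=0)
    print(est)
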
Geometry of interpolation sets in derivative free optimization
Bounds on the error between an interpolating polynomial and the true function can be used in the convergence theory of derivative-free sampling methods; the constant in these bounds is related to the condition number of a certain matrix.
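
The role of that condition number can be seen by fitting a linear interpolation model through n+1 sample points; the sketch below is illustrative, contrasting a well-spread sample set with a nearly collinear one.

    import numpy as np

    def linear_interpolation_model(points, values):
        """Fit c + g @ (x - x0) through n+1 interpolation points. The
        conditioning of the displacement matrix reflects the geometry
        (poisedness) of the sample set."""
        points = np.asarray(points, dtype=float)
        values = np.asarray(values, dtype=float)
        x0, c = points[0], values[0]
        S = points[1:] - x0                        # displacement matrix
        g = np.linalg.solve(S, values[1:] - c)
        return c, g, np.linalg.cond(S)

    f = lambda z: 3.0 * z[0] - z[1] + 1.0
    good = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]    # well-spread sample set
    bad = [[0.0, 0.0], [1.0, 0.0], [1.0, 1e-6]]    # nearly collinear sample set
    for pts in (good, bad):
        vals = [f(np.asarray(p)) for p in pts]
        c, g, kappa = linear_interpolation_model(pts, vals)
        print(g, kappa)                            # g ~ [3, -1]; kappa explodes for 'bad'
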
Implicit Filtering
This book describes the implicit filtering algorithm, its convergence theory, and a new MATLAB implementation, and includes three case studies; it is the only book of its kind in the area of derivative-free or sampling methods and is accompanied by publicly available software.
Derivative-Free Optimization of Expensive Functions with Computational Error Using Weighted Regression
A heuristic weighting scheme is proposed that simultaneously handles differing levels of uncertainty in function evaluations and errors induced by poor model fidelity, and it is reported that weighted regression appears to outperform interpolation and regression models on nondifferentiable functions and functions with deterministic noise.
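
The basic mechanism can be sketched with a linear model fitted by weighted least squares, where evaluations suspected to carry larger error receive smaller weights; this is an illustrative sketch, not the weighting scheme proposed in the paper.

    import numpy as np

    def weighted_linear_regression_model(points, values, weights):
        """Fit f(x) ~ c + g @ x by weighted least squares. Down-weighting
        noisier evaluations trades interpolation accuracy for robustness."""
        X = np.hstack([np.ones((len(points), 1)), np.asarray(points, dtype=float)])
        w = np.sqrt(np.asarray(weights, dtype=float))
        coef, *_ = np.linalg.lstsq(w[:, None] * X, w * np.asarray(values), rcond=None)
        return coef[0], coef[1:]                   # intercept c and gradient g

    rng = np.random.default_rng(0)
    pts = rng.standard_normal((8, 2))
    noise = np.array([1e-3] * 6 + [1e-1] * 2)      # two evaluations are much noisier
    vals = pts @ np.array([3.0, -1.0]) + 1.0 + noise * rng.standard_normal(8)
    c, g = weighted_linear_regression_model(pts, vals, weights=1.0 / noise**2)
    print(c, g)                                    # close to 1.0 and [3, -1]
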
Random Gradient-Free Minimization of Convex Functions
New complexity bounds are proved for methods of convex optimization based only on computation of function values; it appears that such methods usually need at most n times more iterations than standard gradient methods, where n is the dimension of the space of variables.
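
A minimal version of such a random gradient-free scheme is sketched below on a convex quadratic; the smoothing parameter and step size are illustrative choices rather than the constants from the paper's analysis.

    import numpy as np

    def random_gradient_free_descent(f, x0, mu=1e-6, step=0.1, iters=2000, seed=0):
        """At each iteration, draw a Gaussian direction u, form the one-sided
        oracle ((f(x + mu*u) - f(x)) / mu) * u from two function values, and
        take a small step against it."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            u = rng.standard_normal(x.size)
            g = (f(x + mu * u) - f(x)) / mu * u
            x = x - step / (x.size + 4) * g        # conservative step scaling
        return x

    # convex quadratic test: the iterates should approach the minimizer at the origin
    print(random_gradient_free_descent(lambda z: 0.5 * (z @ z), np.array([2.0, -1.0, 3.0])))
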