Corpus ID: 233004308

Limiting behaviour of the generalized simplex gradient as the number of points tends to infinity on a fixed shape in R^n

@article{Hare2021LimitingBO,
  title={Limiting behaviour of the generalized simplex gradient as the number of points tends to infinity on a fixed shape in R^n},
  author={Warren Hare and Gabriel Jarry-Bolduc and Chayne Planiden},
  journal={ArXiv},
  year={2021},
  volume={abs/2104.00748}
}
This work investigates the asymptotic behaviour of the gradient approximation method called the generalized simplex gradient (GSG). The method has an error bound that at first glance seems to tend to infinity as the number of sample points increases, but with careful construction we show that this is not the case. For functions in finite dimensions, we present two new error bounds that remain valid as the number of sample points tends to infinity, depending on the position of the reference point. The error bounds are not a function of the…
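As a rough illustration of the object under study: the GSG over a sample set $\{x^0 + s_i\}$ can be computed as the least-squares solution of $S^T g = \delta f$, where the columns of $S$ are the directions $s_i$ and $\delta f_i = f(x^0 + s_i) - f(x^0)$. The following numpy sketch (function and sample shape chosen for illustration, not taken from the paper) samples ever more points on a fixed circle of radius $0.1$ and shows the error staying bounded as the number of points grows:

```python
import numpy as np

def generalized_simplex_gradient(f, x0, S):
    """Least-squares GSG: solve S^T g ~ delta_f, where the i-th row of S^T
    is the i-th sample direction and delta_f[i] = f(x0 + s_i) - f(x0)."""
    delta_f = np.array([f(x0 + s) - f(x0) for s in S.T])
    g, *_ = np.linalg.lstsq(S.T, delta_f, rcond=None)
    return g

f = lambda x: np.sin(x[0]) + x[1] ** 2
grad_f = lambda x: np.array([np.cos(x[0]), 2 * x[1]])

x0 = np.array([0.5, 1.0])
radius = 0.1  # the "fixed shape": a circle of radius 0.1 around x0
for m in (4, 16, 64, 256, 1024):
    theta = 2 * np.pi * np.arange(m) / m                    # m equally spaced angles
    S = radius * np.vstack((np.cos(theta), np.sin(theta)))  # 2 x m direction matrix
    err = np.linalg.norm(generalized_simplex_gradient(f, x0, S) - grad_f(x0))
    print(f"m = {m:5d}  error = {err:.3e}")  # error stays bounded as m grows
```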


References

Showing 1-10 of 16 references
Limiting behavior of derivative approximation techniques as the number of points tends to infinity on a fixed interval in R
TLDR: Under reasonable assumptions, it is proved that the absolute errors remain bounded as the number of sample points increases to infinity on a fixed interval.
Error bounds for overdetermined and underdetermined generalized centred simplex gradients
TLDR: This work develops error bounds for overdetermined and underdetermined generalized centred simplex gradients and shows that they have order $O(\Delta^2)$, where $\Delta$ is the radius of the sample set of points used.
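A quick numerical check of the $O(\Delta^2)$ rate, with an illustrative function and direction set not taken from that paper: the generalized centred simplex gradient solves $S^T g = \delta f$ in the least-squares sense with centred differences $\delta f_i = (f(x^0 + s_i) - f(x^0 - s_i))/2$, so halving $\Delta$ should roughly quarter the error.

```python
import numpy as np

def centred_simplex_gradient(f, x0, S):
    """Generalized centred simplex gradient: least-squares solution of
    S^T g = (f(x0 + s_i) - f(x0 - s_i)) / 2 over all sample directions s_i."""
    d = np.array([(f(x0 + s) - f(x0 - s)) / 2 for s in S.T])
    g, *_ = np.linalg.lstsq(S.T, d, rcond=None)
    return g

f = lambda x: np.exp(x[0]) * np.sin(x[1])
grad_f = lambda x: np.array([np.exp(x[0]) * np.sin(x[1]),
                             np.exp(x[0]) * np.cos(x[1])])
x0 = np.array([0.3, 0.7])
U = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])  # three directions in R^2 (overdetermined)
for k in range(5):
    delta = 0.1 / 2 ** k
    err = np.linalg.norm(centred_simplex_gradient(f, x0, delta * U) - grad_f(x0))
    print(f"Delta = {delta:.5f}  error = {err:.3e}")  # halving Delta ~ quarters the error
```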
Geometry of interpolation sets in derivative free optimization
TLDR: Bounds on the error between an interpolating polynomial and the true function, used in the convergence theory of derivative-free sampling methods, depend on a geometry constant of the sample set that is related to the condition number of a certain matrix.
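As a hedged illustration of that geometry constant: one common proxy for the quality (poisedness) of a linear interpolation set is the condition number of the matrix of sample directions. Nearly collinear directions give an ill-conditioned system and a poor model, as in this sketch with made-up direction sets:

```python
import numpy as np

# Condition number of the direction matrix as a poisedness indicator:
# well-spread directions give cond ~ 1, nearly collinear ones blow it up.
good = np.array([[1.0, 0.0],
                 [0.0, 1.0]])    # orthogonal, well-poised directions
bad = np.array([[1.0, 0.0],
                [0.999, 0.01]])  # nearly collinear, badly poised
for name, S in (("well poised ", good), ("badly poised", bad)):
    print(name, f"cond = {np.linalg.cond(S):.2e}")
```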
The calculus of simplex gradients
R. Regis. Optim. Lett., 2015.
TLDR: The simplex gradient is treated as a linear operator; formulas for the simplex gradients of products and quotients of two multivariable functions and a power rule for simplex gradients are provided.
Derivative-Free Optimization of Noisy Functions via Quasi-Newton Methods
TLDR: A finite-difference quasi-Newton method for the minimization of noisy functions that takes advantage of the scalability and power of BFGS updating, and employs an adaptive procedure for choosing the differencing interval based on the noise-estimation techniques of Hamming and of Moré and Wild.
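A sketch of the trade-off behind choosing the differencing interval (illustrative numbers, not that paper's procedure): with evaluation noise of size $\epsilon$, the forward-difference error behaves like $hM/2 + 2\epsilon/h$ (with $M$ bounding $|f''|$), so the interval should scale like $\sqrt{\epsilon}$ rather than being taken as small as possible.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 1e-6  # noise level in each function evaluation
f_noisy = lambda x: np.sin(x) + eps * rng.uniform(-1, 1)

x, true_deriv = 1.0, np.cos(1.0)
# Error is roughly h*M/2 + 2*eps/h; with M ~ 1 the minimizer is near
# h* = 2*sqrt(eps/M) ~ 2e-3.  Tiny h amplifies noise catastrophically.
for h in (1e-8, 1e-5, 2e-3, 1e-1):
    approx = (f_noisy(x + h) - f_noisy(x)) / h
    print(f"h = {h:.0e}  error = {abs(approx - true_deriv):.2e}")
```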
How to Integrate A Polynomial Over A Sphere
G. Folland. Am. Math. Mon., 2001.
Several recent articles in the MONTHLY ([1], [2], [4]) have involved finding the area of n-dimensional balls or spheres or integrating polynomials over such sets. None of these articles, however, …
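The closed-form answer discussed in that note, for a monomial $\prod_i x_i^{a_i}$ over the unit sphere $S^{n-1}$ with respect to surface measure, is $2\prod_i \Gamma(b_i)/\Gamma(\sum_i b_i)$ with $b_i = (a_i+1)/2$ when every exponent $a_i$ is even, and $0$ otherwise. A small sketch:

```python
from math import gamma, prod

def sphere_monomial_integral(a):
    """Integral of x_1^a_1 * ... * x_n^a_n over the unit sphere S^(n-1)
    (surface measure): zero if any exponent is odd, else
    2 * prod(Gamma(b_i)) / Gamma(sum(b_i)) with b_i = (a_i + 1) / 2."""
    if any(ai % 2 for ai in a):
        return 0.0
    b = [(ai + 1) / 2 for ai in a]
    return 2 * prod(gamma(bi) for bi in b) / gamma(sum(b))

print(sphere_monomial_integral((0, 0, 0)))  # 4*pi: surface area of S^2
print(sphere_monomial_integral((2, 0, 0)))  # 4*pi/3
print(sphere_monomial_integral((1, 1, 0)))  # 0, by symmetry
```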
Iterative methods for optimization
C. Kelley. Frontiers in Applied Mathematics, 1999.
TLDR: Iterative Methods for Optimization does more than cover traditional gradient-based optimization: it is the first book to treat sampling methods, including the Hooke-Jeeves, implicit filtering, MDS, and Nelder-Mead schemes, in a unified way.
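For a concrete feel for one of the named schemes, here is a minimal Nelder-Mead run using SciPy's implementation (test function and tolerances chosen for illustration):

```python
import numpy as np
from scipy.optimize import minimize

# Derivative-free minimization of the Rosenbrock function with Nelder-Mead.
rosen = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
res = minimize(rosen, x0=np.array([-1.2, 1.0]), method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 5000})
print(res.x, res.fun)  # converges to (1, 1) with value ~0
```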
A Derivative-Free Trust-Region Algorithm for the Optimization of Functions Smoothed via Gaussian Convolution Using Adaptive Multiple Importance Sampling
TLDR: A derivative-free algorithm that computes trial points by minimizing, over a trust region, a regression model of the noisy function f built according to an adaptive multiple importance sampling strategy.
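A minimal sketch of the smoothing idea (illustrative, not that paper's algorithm): the gradient of the Gaussian-smoothed function $f_\mu(x) = \mathbb{E}[f(x + \mu u)]$ with $u \sim N(0, I)$ admits the standard Monte Carlo estimator $\mathbb{E}[(f(x + \mu u) - f(x))\, u]/\mu$.

```python
import numpy as np

def smoothed_gradient_estimate(f, x, mu=0.1, n_samples=10_000, rng=None):
    """Monte Carlo estimate of grad f_mu(x) for the Gaussian-smoothed
    f_mu(x) = E[f(x + mu*u)], u ~ N(0, I), via the identity
    grad f_mu(x) = E[(f(x + mu*u) - f(x)) / mu * u]."""
    rng = rng or np.random.default_rng(0)
    u = rng.standard_normal((n_samples, x.size))
    fx = f(x)
    weights = np.array([(f(x + mu * ui) - fx) / mu for ui in u])
    return (weights[:, None] * u).mean(axis=0)

f = lambda x: np.sum(x ** 2)
x = np.array([1.0, -2.0])
print(smoothed_gradient_estimate(f, x))  # approx. grad f(x) = (2, -4)
```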
The BOBYQA algorithm for bound constrained optimization without derivatives
BOBYQA is an iterative algorithm for finding a minimum of a function $F(x)$, $x \in \mathbb{R}^n$, subject to bounds $a \le x \le b$ on the variables, $F$ being specified by a "black box" that returns the value $F(x)$ for any feasible $x$.
On the construction of quadratic models for derivative-free trust-region algorithms
TLDR: It is shown that the second condition trivially holds if the model is constructed by polynomial interpolation, since in this case the model coincides with the objective function at the sample set; it also holds for models constructed by support vector regression.
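A one-dimensional sketch of that interpolation property (sample points and function are illustrative): fitting a quadratic model to three samples reproduces the objective exactly at those samples.

```python
import numpy as np

# Quadratic interpolation model m(x) = c + g*x + 0.5*h*x^2 in 1-D:
# fit to three sample points, then verify it matches f there exactly.
f = lambda x: np.cos(x)
xs = np.array([0.0, 0.3, 0.6])
A = np.vstack([np.ones_like(xs), xs, 0.5 * xs ** 2]).T
c, g, h = np.linalg.solve(A, f(xs))
model = lambda x: c + g * x + 0.5 * h * x ** 2
print(np.max(np.abs(model(xs) - f(xs))))  # ~0: exact at the sample set
```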