# Global optimization using random embeddings

@article{Cartis2021GlobalOU, title={Global optimization using random embeddings}, author={Coralia Cartis and Estelle M. Massart and Adilet Otemissov}, journal={ArXiv}, year={2021}, volume={abs/2107.12102} }

We propose a random-subspace algorithmic framework, X-REGO, for global optimization of Lipschitz-continuous objectives, and analyse its convergence using novel tools from conic integral geometry. X-REGO randomly projects, sequentially or simultaneously, the high-dimensional original problem into low-dimensional subproblems that can then be solved with any global, or even local, optimization solver. We estimate the probability that the randomly embedded subproblem shares (approximately) the same…
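As a rough illustration of the idea (a minimal sketch, not the authors' X-REGO algorithm: the toy objective, the dimensions, and the use of scipy's BFGS as the low-dimensional solver are all choices made for this example), a single random-embedding step minimizes the reduced objective g(y) = f(Ay) over a random low-dimensional subspace:

```python
import numpy as np
from scipy.optimize import minimize

# Toy objective on R^100 with effective dimension 2: it depends only on
# the first two coordinates (illustrative, not from the paper).
def f(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

def random_embedding_step(f, D, d, rng):
    """Draw a random Gaussian embedding A in R^{D x d} and minimize the
    reduced objective g(y) = f(A y) over the d-dimensional subspace."""
    A = rng.standard_normal((D, d)) / np.sqrt(d)
    g = lambda y: f(A @ y)
    res = minimize(g, np.zeros(d), method="BFGS")  # any solver would do here
    return A @ res.x, res.fun

rng = np.random.default_rng(0)
x_best, f_best = random_embedding_step(f, D=100, d=2, rng=rng)
print(f_best)  # ~0: a random 2-dim subspace almost surely captures both effective directions
```

Because the subspace dimension matches the effective dimension, the embedded subproblem contains a global minimizer with probability one here; the paper's analysis quantifies this kind of success probability in general.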


#### References

Showing 1-10 of 72 references.

A dimensionality reduction technique for unconstrained global optimization of functions with low effective dimensionality

- Computer Science, Mathematics
- 2020

This work provides novel probabilistic bounds for the success of REGO in solving the original low effective-dimensionality problem, which show its independence of the (potentially large) ambient dimension and its precise dependence on the dimensions of the effective and random embedding subspaces.

Optimization of Convex Functions with Random Pursuit

- Mathematics, Computer Science
- SIAM J. Optim.
- 2013

A general comparison of the experimental results reveals that standard Random Pursuit is effective on strongly convex functions with moderate condition number, and the accelerated scheme is comparable to Nesterov's fast gradient method and outperforms adaptive step-size strategies.

On the choice of the low-dimensional domain for global optimization via random embeddings

- Mathematics, Computer Science
- J. Glob. Optim.
- 2020

This work describes a minimal low-dimensional set in correspondence with the embedded search space and shows that an alternative equivalent embedding procedure yields simultaneously a simpler definition of the low-dimensional minimal set and better properties in practice.

High-Dimensional Optimization in Adaptive Random Subspaces

- Computer Science, Mathematics
- NeurIPS
- 2019

It is shown that an adaptive sampling strategy for the random subspace significantly outperforms the oblivious sampling method, and that the improvement in the relative error of the solution can be tightly characterized in terms of the spectrum of the data matrix, with probabilistic upper bounds provided.

Derivative-Free Optimization of High-Dimensional Non-Convex Functions by Sequential Random Embeddings

- Mathematics, Computer Science
- IJCAI
- 2016

This paper describes the properties of random embedding for high-dimensional problems with low optimal ε-effective dimensions, and proposes sequential random embeddings (SRE) to reduce the embedding gap while running optimization algorithms in the low-dimensional spaces.

Adaptive and Safe Bayesian Optimization in High Dimensions via One-Dimensional Subspaces

- Computer Science, Mathematics
- ICML
- 2019

This work proposes an algorithm (LineBO) that restricts the problem to a sequence of iteratively chosen one-dimensional sub-problems that can be solved efficiently, and is the first safe Bayesian optimization algorithm with theoretical guarantees applicable in high-dimensional settings.
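The core restriction-to-lines idea can be sketched as follows (an illustrative simplification: LineBO chooses directions and handles safety constraints differently, and uses Bayesian optimization rather than the exact line minimization used here):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def line_subspace_descent(f, x0, n_iters=200, rng=None):
    """Repeatedly restrict f to a one-dimensional subspace through the
    current iterate and solve that 1-D subproblem exactly."""
    rng = rng or np.random.default_rng()
    x = np.array(x0, dtype=float)
    for _ in range(n_iters):
        d = rng.standard_normal(x.shape)
        d /= np.linalg.norm(d)                          # random unit direction
        res = minimize_scalar(lambda t: f(x + t * d))   # 1-D solve along the line
        x = x + res.x * d
    return x

f = lambda x: np.sum((x - 3.0) ** 2)                    # toy convex objective
x = line_subspace_descent(f, np.zeros(10), rng=np.random.default_rng(2))
print(f(x))  # near 0 after repeated one-dimensional solves
```

Each one-dimensional subproblem is cheap, and on this convex toy problem the iterates contract toward the minimizer; the paper's contribution is making this scheme safe and theoretically grounded in the Bayesian optimization setting.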

Stochastic Subspace Descent

- Mathematics
- 2019

We present two stochastic descent algorithms that apply to unconstrained optimization and are particularly efficient when the objective function is slow to evaluate and gradients are not easily…

Bayesian Optimization in a Billion Dimensions via Random Embeddings

- Computer Science, Mathematics
- J. Artif. Intell. Res.
- 2016

Empirical results confirm that REMBO can effectively solve problems with billions of dimensions, provided the intrinsic dimensionality is low, and show that REMBO achieves state-of-the-art performance in optimizing the 47 discrete parameters of a popular mixed integer linear programming solver.

Random Gradient-Free Minimization of Convex Functions

- Mathematics, Computer Science
- Found. Comput. Math.
- 2017

New complexity bounds are proved for methods of convex optimization based only on computation of the function value; it appears that such methods usually need at most n times more iterations than standard gradient methods, where n is the dimension of the space of variables.
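The basic oracle in this line of work can be sketched as a finite-difference estimate of the directional derivative along a random Gaussian direction (a minimal illustration; the smoothing parameter, step size, and toy objective below are arbitrary choices, not the paper's tuned schedule):

```python
import numpy as np

def gradient_free_step(f, x, mu=1e-6, step=0.1, rng=None):
    """One random gradient-free step: estimate the directional derivative
    along a random Gaussian direction u from two function values, then
    move along -u scaled by that estimate."""
    rng = rng or np.random.default_rng()
    u = rng.standard_normal(x.shape)            # random Gaussian direction
    deriv = (f(x + mu * u) - f(x)) / mu         # forward-difference estimate of <grad f(x), u>
    return x - step * deriv * u

f = lambda x: 0.5 * np.dot(x, x)                # simple convex quadratic
x = np.ones(5)
rng = np.random.default_rng(1)
for _ in range(500):
    x = gradient_free_step(f, x, rng=rng)
print(f(x))  # decreases toward 0 using only function values
```

Each step costs two function evaluations and no gradient; the slowdown relative to gradient descent grows with the dimension, matching the "at most n times more iterations" behaviour described above.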

Newton Sketch: A Near Linear-Time Optimization Algorithm with Linear-Quadratic Convergence

- Mathematics, Computer Science
- SIAM J. Optim.
- 2017

A randomized second-order optimization method known as the Newton Sketch is proposed, based on performing an approximate Newton step using a randomly projected or sub-sampled Hessian. The method has super-linear convergence with exponentially high probability, and convergence and complexity guarantees that are independent of condition numbers and related problem-dependent quantities.
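For least squares, the sketched Newton step is easy to illustrate: replace the Hessian AᵀA by (SA)ᵀ(SA) for a random sketch S with far fewer rows than A (a simplified sketch of the idea; the paper uses structured sketches and a line search, and the dimensions below are illustrative):

```python
import numpy as np

# Sketched Newton iterations for the least-squares objective 0.5*||A x - b||^2.
rng = np.random.default_rng(3)
n, p, m = 2000, 50, 1000                 # data size, variables, sketch size
A = rng.standard_normal((n, p))
b = rng.standard_normal(n)

x = np.zeros(p)
for _ in range(15):
    S = rng.standard_normal((m, n)) / np.sqrt(m)  # fresh Gaussian sketch each step
    SA = S @ A                                    # sketch the Hessian square root
    grad = A.T @ (A @ x - b)                      # exact gradient
    H_sketch = SA.T @ SA                          # sketched Hessian approximates A^T A
    x = x - np.linalg.solve(H_sketch, grad)       # approximate Newton step

x_exact = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(x - x_exact))  # small: the sketched iterates converge to the solution
```

The per-step cost is dominated by forming SA, which is cheaper than working with the full n×p data when m ≪ n; the price is linear rather than one-step convergence, controlled by how well the sketched Hessian approximates the true one.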