# Non-local optimization: imposing structure on optimization problems by relaxation

```bibtex
@article{Mller2021NonlocalOI,
  title   = {Non-local optimization: imposing structure on optimization problems by relaxation},
  author  = {Nils M{\"u}ller and Tobias Glasmachers},
  journal = {Proceedings of the 16th ACM/SIGEVO Conference on Foundations of Genetic Algorithms},
  year    = {2021}
}
```

In stochastic optimization, particularly in evolutionary computation and reinforcement learning, the optimization of a function f : Ω → ℝ is often addressed by optimizing a so-called relaxation θ ∈ Θ ↦ E_θ(f) of f, where Θ resembles the parameters of a family of probability measures on Ω. We investigate the structure of such relaxations by means of measure theory and Fourier analysis, enabling us to shed light on the success of many associated stochastic optimization methods. The main…
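The abstract's central object, the relaxation θ ↦ E_θ(f), can be illustrated with a minimal sketch (not from the paper; the objective, the Gaussian family N(θ, σ²I) on Ω = ℝⁿ, and all parameter values are illustrative assumptions). The relaxation and its gradient are estimated by Monte Carlo, the gradient via the score-function (log-likelihood) trick:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Illustrative objective on Omega = R^n (not from the paper).
    return np.sum(x**2, axis=-1)

def relaxation(theta, sigma=1.0, n_samples=10_000):
    """Monte Carlo estimate of E_theta[f] for the Gaussian family
    N(theta, sigma^2 I) -- one common choice of measures on Omega."""
    x = theta + sigma * rng.standard_normal((n_samples, theta.size))
    return f(x).mean()

def relaxation_grad(theta, sigma=1.0, n_samples=10_000):
    """Score-function estimate of grad_theta E_theta[f], using
    grad_theta log N(x; theta, sigma^2 I) = (x - theta)/sigma^2 = eps/sigma."""
    eps = rng.standard_normal((n_samples, theta.size))
    x = theta + sigma * eps
    return (f(x)[:, None] * eps / sigma).mean(axis=0)

theta = np.array([2.0, -1.0])
# For f(x) = ||x||^2, E_theta[f] = ||theta||^2 + n*sigma^2 = 5 + 2 = 7,
# and grad_theta E_theta[f] = 2*theta, so the estimates should be close
# to 7 and [4, -2] respectively (up to Monte Carlo noise).
est_value = relaxation(theta)
est_grad = relaxation_grad(theta)
```

Note that the gradient estimate uses only function evaluations of f, which is what makes such relaxations attractive for black-box settings.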

## One Citation

### Fast Moving Natural Evolution Strategy for High-Dimensional Problems

- Computer Science · 2022 IEEE Congress on Evolutionary Computation (CEC)
- 2022

The proposed CR-FM-NES extends a recently proposed state-of-the-art NES, the Fast Moving Natural Evolution Strategy (FM-NES), to high-dimensional problems; it builds on the idea of using a restricted representation of the covariance matrix instead of a full covariance matrix, while inheriting the efficiency of FM-NES.

## References

Showing 1–10 of 18 references.

### Convex Optimization: Algorithms and Complexity

- Computer Science · Found. Trends Mach. Learn.
- 2015

This monograph presents the main complexity theorems in convex optimization and their corresponding algorithms, and provides a gentle introduction to structural optimization with FISTA, saddle-point mirror prox, Nemirovski's alternative to Nesterov's smoothing, and a concise description of interior-point methods.

### Convergence Analysis of Evolutionary Algorithms That Are Based on the Paradigm of Information Geometry

- Computer Science · Evolutionary Computation
- 2014

It is rigorously shown that the original NES philosophy of optimizing the expected value of the objective function leads to very slow (i.e., sublinear) convergence toward the optimizer, and it is proven that the IGO philosophy leads to an adaptation of the covariance matrix that equals, in the asymptotic limit and up to a scalar factor, the inverse of the Hessian of the objective function considered.

### A Theoretical Analysis of Optimization by Gaussian Continuation

- Computer Science · AAAI
- 2015

A theoretical analysis provides a bound on the endpoint solution of the continuation method; the bound depends on a problem-specific characteristic that can be analytically computed when the objective function is expressed in suitable basis functions.

### Random Gradient-Free Minimization of Convex Functions

- Computer Science, Mathematics · Found. Comput. Math.
- 2017

New complexity bounds are proved for methods of convex optimization based only on computation of function values; such methods usually need at most n times more iterations than standard gradient methods, where n is the dimension of the space of variables.
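A minimal sketch of the kind of scheme this reference analyzes, assuming the standard two-point Gaussian estimator g = (f(x + μu) − f(x))/μ · u with random direction u ~ N(0, I); the test function, step size, and iteration count are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    # Smooth convex test function (illustrative only).
    return 0.5 * np.dot(x, x)

def rgf_step(x, mu=1e-4, lr=0.025):
    """One step of a two-point random gradient-free scheme:
    g = (f(x + mu*u) - f(x)) / mu * u, with Gaussian direction u.
    Only function values of f are used, no gradients."""
    u = rng.standard_normal(x.size)
    g = (f(x + mu * u) - f(x)) / mu * u
    return x - lr * g

x = np.ones(5)
for _ in range(2000):
    x = rgf_step(x)
# x should now be close to the minimizer 0 (up to smoothing bias and noise).
```

The dimension dependence mentioned in the blurb shows up in the variance of g, which grows with n; the step size here is kept small accordingly.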

### A Derivative-Free Trust-Region Algorithm for the Optimization of Functions Smoothed via Gaussian Convolution Using Adaptive Multiple Importance Sampling

- Computer Science · SIAM J. Optim.
- 2018

A derivative-free trust-region algorithm is proposed that computes trial points by minimizing a regression model of the noisy function f over a trust region, using an adaptive multiple importance sampling strategy.

### Gaussian Smoothing and Asymptotic Convexity

- Computer Science
- 2012

A formal definition is presented for functions that can eventually become convex by smoothing, along with a closed-form expression for the minimizer of the resulting smoothed function when it satisfies certain decay conditions.

### Large Scale Black-Box Optimization by Limited-Memory Matrix Adaptation

- Computer Science · IEEE Transactions on Evolutionary Computation
- 2019

The limited-memory MA-ES is presented, a variant of MA-ES, a popular method for nonconvex and/or stochastic optimization problems when gradient information is not available; it demonstrates state-of-the-art performance on a set of established large-scale benchmarks.

### Information-Geometric Optimization Algorithms: A Unifying Picture via Invariance Principles

- Computer Science · J. Mach. Learn. Res.
- 2017

A canonical way is presented to turn any smooth parametric family of probability distributions on an arbitrary search space X into a continuous-time black-box optimization method on X: the information-geometric optimization (IGO) method, which achieves maximal invariance properties.

### On the Link between Gaussian Homotopy Continuation and Convex Envelopes

- Computer Science · EMMCVPR
- 2014

It is proved that Gaussian smoothing emerges from the best affine approximation to Vese’s nonlinear PDE, hence providing the optimal convexification.

### Natural Evolution Strategies

- Computer Science · 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence)
- 2008

NES is presented, a novel algorithm for performing real-valued "black box" function optimization: optimizing an unknown objective function where algorithm-selected function measurements constitute the only information accessible to the method.