Corpus ID: 211069225

A Scalable Evolution Strategy with Directional Gaussian Smoothing for Blackbox Optimization

@article{Zhang2020ASE,
  title={A Scalable Evolution Strategy with Directional Gaussian Smoothing for Blackbox Optimization},
  author={Jiaxin Zhang and Hoang Tran and Dan Lu and Guannan Zhang},
  journal={ArXiv},
  year={2020},
  volume={abs/2002.03001}
}
We propose an improved evolution strategy (ES) using a novel nonlocal gradient operator for high-dimensional black-box optimization. Standard ES methods with $d$-dimensional Gaussian smoothing suffer from the curse of dimensionality due to the high variance of Monte Carlo (MC) based gradient estimators. To control the variance, Gaussian smoothing is usually limited to a small region, so existing ES methods lack the nonlocal exploration ability required for escaping from local minima. We develop a… 
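To make the contrast concrete, here is a minimal NumPy sketch of the two kinds of estimators the abstract refers to: a plain Monte Carlo estimate of the gradient of the $d$-dimensional Gaussian-smoothed objective, and a DGS-style estimate that smooths the objective along each of $d$ orthogonal directions with a 1-D Gaussian and evaluates the directional derivatives with Gauss-Hermite quadrature. The function names, default parameters, and the use of the identity matrix as the direction set are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def es_gradient_mc(f, x, sigma=0.1, n_samples=100, rng=None):
    """Standard ES estimate: Monte Carlo gradient of the d-dimensional
    Gaussian-smoothed objective. Its variance grows with the dimension,
    which is the scalability issue described in the abstract."""
    rng = rng or np.random.default_rng()
    u = rng.standard_normal((n_samples, x.size))      # u ~ N(0, I_d)
    fvals = np.array([f(x + sigma * ui) for ui in u])
    return (fvals[:, None] * u).mean(axis=0) / sigma

def dgs_gradient(f, x, sigma=1.0, n_quad=7, directions=None):
    """DGS-style nonlocal gradient (sketch): smooth f along each direction
    with a 1-D Gaussian, estimate every directional derivative with
    Gauss-Hermite quadrature, then assemble the gradient."""
    d = x.size
    xi = np.eye(d) if directions is None else directions   # orthonormal directions
    nodes, weights = hermgauss(n_quad)                      # Gauss-Hermite rule
    grad = np.zeros(d)
    for i in range(d):
        vals = np.array([f(x + np.sqrt(2.0) * sigma * t * xi[i]) for t in nodes])
        # derivative at 0 of the 1-D Gaussian-smoothed slice along xi[i]
        dgs_i = np.sum(weights * np.sqrt(2.0) * nodes * vals) / (np.sqrt(np.pi) * sigma)
        grad += dgs_i * xi[i]
    return grad

if __name__ == "__main__":
    # Quick check on a quadratic, whose smoothed gradient equals the true gradient 2*x.
    f = lambda z: np.sum(z ** 2)
    x0 = np.ones(5)
    print(es_gradient_mc(f, x0, n_samples=2000))  # noisy estimate of 2*x0
    print(dgs_gradient(f, x0))                    # near-exact estimate of 2*x0
```

On this toy quadratic the quadrature-based estimate is essentially exact even for a large smoothing radius, while the MC estimate remains noisy, which is the variance issue the abstract highlights.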
Citations

A Directional Gaussian Smoothing Optimization Method for Computational Inverse Design in Nanophotonics
TLDR
This work proposes extending the DGS approach to the constrained inverse design framework in order to find better designs, and shows superior performance compared to state-of-the-art approaches.
A Hybrid Gradient Method to Designing Bayesian Experiments for Implicit Models
TLDR
This work proposes a hybrid gradient approach that leverages recent advances in variational MI estimators and evolution strategies combined with black-box stochastic gradient ascent (SGA) to maximize the MI lower bound.
A Scalable Gradient-Free Method for Bayesian Experimental Design with Implicit Models
TLDR
This paper proposes a novel approach that leverages recent advances in stochastic approximate gradient ascent combined with a smoothed variational MI estimator for efficient and robust BED.
Accelerating Reinforcement Learning with a Directional-Gaussian-Smoothing Evolution Strategy
TLDR
A Directional Gaussian Smoothing Evolutionary Strategy (DGS-ES) is employed to accelerate RL training; it is well suited to these challenges because it provides high-accuracy gradient estimates and finds a nonlocal search direction that emphasizes large-scale variation of the reward function while disregarding local fluctuations.
An adaptive stochastic gradient-free approach for high-dimensional blackbox optimization
In this work, we propose a novel adaptive stochastic gradient-free (ASGF) approach for solving high-dimensional nonconvex optimization problems based on function evaluations. We employ a directional…
Deep Reinforcement Learning Versus Evolution Strategies: A Comparative Survey
TLDR
An overview of how DRL and ESs can be used, either independently or in unison, to solve specific learning tasks is presented; it is intended to guide researchers in selecting the method that suits them best and provides a bird's-eye view of the literature in the field.
A Nonlocal-Gradient Descent Method for Inverse Design in Nanophotonics
Local-gradient-based optimization approaches lack the nonlocal exploration ability required for escaping from local minima when searching non-convex landscapes. A directional Gaussian smoothing (DGS)…
A nonlocal optimization method for computational inverse design in nanophotonics
TLDR
By incorporating volume constraints into the optimization, the design optimized with the nonlocal method achieves equivalently high performance while significantly reducing material usage in wavelength demultiplexer design.

References

Showing 1-10 of 68 references
Evolution Strategies as a Scalable Alternative to Reinforcement Learning
TLDR
This work explores the use of Evolution Strategies (ES), a class of black-box optimization algorithms, as an alternative to popular MDP-based RL techniques such as Q-learning and Policy Gradients, and highlights several advantages of ES as a black-box optimization technique.
Random Gradient-Free Minimization of Convex Functions
TLDR
New complexity bounds are proved for methods of convex optimization based only on computation of the function value; it appears that such methods usually need at most n times more iterations than the standard gradient methods, where n is the dimension of the space of variables.
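For context, gradient-free schemes of this kind typically replace the gradient with a finite-difference estimate along a random Gaussian direction; a standard form (stated here as background, not quoted from the reference) is

$$ g_\mu(x) \;=\; \frac{f(x+\mu u)-f(x)}{\mu}\,u, \qquad u \sim \mathcal{N}(0, I_n), \qquad \mathbb{E}_u\big[g_\mu(x)\big] = \nabla f_\mu(x), $$

where $f_\mu$ denotes the Gaussian-smoothed objective. The dimension-dependent variance of $g_\mu$ is, roughly speaking, what accounts for the factor of n in the iteration bound.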
Guided evolutionary strategies: augmenting random search with surrogate gradients
TLDR
This work proposes Guided Evolutionary Strategies, a method for optimally using surrogate gradient directions along with random search, and defines a search distribution for evolutionary strategies that is elongated along a guiding subspace spanned by the surrogate gradients.
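As a rough illustration of what an elongated search distribution can look like, the sketch below samples a perturbation whose covariance mixes an isotropic component with a component concentrated in the guiding subspace. The specific parameterization (a trade-off weight alpha between the full space and the k-dimensional subspace) is an illustrative assumption, not necessarily the paper's exact choice.

```python
import numpy as np

def guided_es_perturbation(U, sigma=0.1, alpha=0.5, rng=None):
    # U: (n, k) matrix with orthonormal columns spanning the guiding subspace
    # (e.g. an orthonormalized stack of recent surrogate gradients).
    # Assumed covariance for this sketch:
    #   Cov = sigma^2 * (alpha/n * I_n + (1 - alpha)/k * U @ U.T)
    rng = rng or np.random.default_rng()
    n, k = U.shape
    eps_full = rng.standard_normal(n)   # isotropic component
    eps_sub = rng.standard_normal(k)    # component along the guiding subspace
    return sigma * (np.sqrt(alpha / n) * eps_full
                    + np.sqrt((1.0 - alpha) / k) * U @ eps_sub)
```

Perturbations drawn this way are then used in place of purely isotropic ES noise, so the search is biased toward the directions suggested by the surrogate gradients while retaining some full-space exploration.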
Geometrically Coupled Monte Carlo Sampling
TLDR
This work improves current methods for sampling in Euclidean spaces by avoiding independence, and instead considers ways to couple samples, showing fundamental connections to optimal transport theory, leading to novel sampling algorithms and providing new theoretical grounding for existing strategies.
Giga-voxel computational morphogenesis for structural design
TLDR
A computational morphogenesis tool, implemented on a supercomputer, is reported that produces designs with giga-voxel resolution, providing insights into the optimal distribution of material within a structure that were hitherto unachievable owing to the challenges of scaling up existing modelling and optimization frameworks.
A restart CMA evolution strategy with increasing population size
  • A. Auger, N. Hansen · 2005 IEEE Congress on Evolutionary Computation · 2005
TLDR
The IPOP-CMA-ES, in which the population size is increased for each restart (IPOP), is evaluated on the test suite of 25 functions designed for the CEC 2005 special session on real-parameter optimization.
Online convex optimization in the bandit setting: gradient descent without a gradient
TLDR
It is possible to use gradient descent without seeing anything more than the value of the function at a single point, and the guarantees hold even in the most general case: online against an adaptive adversary.
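The single-point trick behind this result can be written compactly (a standard formulation, given as background rather than a quotation): with $u$ drawn uniformly from the unit sphere,

$$ \hat g \;=\; \frac{n}{\delta}\, f(x + \delta u)\, u $$

is an unbiased estimate of the gradient of a locally averaged (smoothed) version of $f$, so online gradient descent can proceed while observing only one function value per round.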
Introductory Lectures on Convex Optimization
  • 2004
A Class of Globally Convergent Optimization Methods Based on Conservative Convex Separable Approximations
  • K. Svanberg · SIAM J. Optim. · 2002
TLDR
This paper deals with a certain class of optimization methods, based on conservative convex separable approximations (CCSA), for solving inequality-constrained nonlinear programming problems, and it is proved that the sequence of iteration points converges toward the set of Karush-Kuhn-Tucker points.
Asymptotically Compatible Schemes for Robust Discretization of Parametrized Problems with Applications to Nonlocal Models
Many problems in nature, being characterized by a parameter, are of interest both with a fixed parameter value and with the parameter approaching an asymptotic limit. Numerical schemes that are con...