Hierarchical surrogate modeling for illumination algorithms

@article{Hagg2017HierarchicalSM,
  title={Hierarchical surrogate modeling for illumination algorithms},
  author={Alexander Hagg},
  journal={Proceedings of the Genetic and Evolutionary Computation Conference Companion},
  year={2017}
}
  • Alexander Hagg
  • Published 29 March 2017
  • Computer Science
  • Proceedings of the Genetic and Evolutionary Computation Conference Companion
Evolutionary illumination is a recent technique for producing many diverse, optimal solutions within a map of manually defined features. To cope with the large number of objective function evaluations this requires, surrogate model assistance was recently introduced. Illumination models must represent many more diverse optimal regions than classical surrogate models. In this PhD thesis, we propose to decompose the sample set, decreasing model complexity, by hierarchically segmenting the training set…
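The proposed decomposition can be sketched in a few lines. This is an illustrative assumption about the structure, not the thesis's actual method: hierarchically split the sample set, then fit one small local surrogate per segment (here a plain least-squares model stands in for a local Gaussian process).

```python
import numpy as np

# Hypothetical sketch of hierarchical training-set segmentation: recursively
# split the sample set on its widest dimension, then fit one small affine
# least-squares surrogate per leaf, so no single model has to represent
# every optimal region of the objective at once.

def fit_tree(X, y, leaf_size=16):
    if len(X) <= leaf_size:                           # leaf: local model
        A = np.hstack([X, np.ones((len(X), 1))])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return {"coef": coef}
    d = int(np.argmax(X.max(axis=0) - X.min(axis=0))) # widest dimension
    t = float(np.median(X[:, d]))
    m = X[:, d] <= t
    return {"dim": d, "thresh": t,
            "lo": fit_tree(X[m], y[m], leaf_size),
            "hi": fit_tree(X[~m], y[~m], leaf_size)}

def predict(node, x):
    while "coef" not in node:                         # descend the hierarchy
        node = node["lo"] if x[node["dim"]] <= node["thresh"] else node["hi"]
    return np.append(x, 1.0) @ node["coef"]

rng = np.random.default_rng(0)
X = rng.random((512, 2))
y = np.sin(3 * X[:, 0]) + np.cos(3 * X[:, 1])         # multimodal toy objective
tree = fit_tree(X, y)
mae = np.mean([abs(predict(tree, x) - t) for x, t in zip(X, y)])
print(f"mean abs error of piecewise surrogate: {mae:.4f}")
```

Each leaf model is cheap to fit and only has to be accurate locally, which is the complexity reduction the abstract describes.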

Citations

LUCIE: an evaluation and selection method for stochastic problems

TLDR
It is demonstrated that LUCIE can be used effectively as an elitism mechanism in genetic algorithms, and it is evaluated as a selection method for neuroevolution on control policies with stochastic fitness values.
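The problem LUCIE addresses can be illustrated with a small experiment. This is a generic resampling-based selection sketch, not the published LUCIE rule: with stochastic fitness, choosing an elite from a single noisy evaluation is unreliable, while averaging repeated evaluations recovers the true best far more often.

```python
import numpy as np

# Hypothetical illustration (not the LUCIE algorithm itself): compare
# selecting the elite from one noisy evaluation per candidate against
# selecting on the mean of repeated evaluations.

rng = np.random.default_rng(1)
true_fitness = np.linspace(0.0, 1.0, 20)       # candidate 19 is truly best
noise = 0.5

def select(n_evals):
    samples = true_fitness + rng.normal(0, noise, (n_evals, 20))
    return int(np.argmax(samples.mean(axis=0)))

naive = np.mean([select(1) == 19 for _ in range(500)])
averaged = np.mean([select(25) == 19 for _ in range(500)])
print(f"correct elite: single eval {naive:.2f}, 25 evals {averaged:.2f}")
```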

References

Showing 1-10 of 30 references

Feature Space Modeling Through Surrogate Illumination

TLDR
The Surrogate-Assisted Illumination algorithm (SAIL), introduced here, integrates approximative models and intelligent sampling of the objective function to minimize the number of evaluations required by MAP-Elites.

Data-efficient exploration, optimization, and modeling of diverse designs through surrogate-assisted illumination

TLDR
The Surrogate-Assisted Illumination (SAIL) algorithm, introduced here, integrates approximative models and intelligent sampling of the objective function to minimize the number of evaluations required by MAP-Elites.
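The algorithm SAIL accelerates can be sketched as a minimal MAP-Elites loop. In SAIL the direct objective call below would be replaced by a Gaussian-process surrogate plus an acquisition function; this sketch evaluates directly to stay short, and the objective and feature function are toy assumptions.

```python
import numpy as np

# Minimal MAP-Elites loop (the component SAIL accelerates with a surrogate):
# keep the best solution found per feature bin, alternating random sampling
# with mutation of existing elites.

rng = np.random.default_rng(2)

def objective(x):                  # toy fitness, multimodal in x[0]
    return float(np.sin(5 * x[0]) - (x[1] - 0.5) ** 2)

def feature(x):                    # manually defined behaviour descriptor
    return min(int(x[0] * 10), 9)  # 10 bins over x[0]

elites = {}                        # bin -> (fitness, genome)
for _ in range(2000):
    if elites and rng.random() < 0.8:            # mutate a random elite
        _, parent = elites[rng.choice(list(elites))]
        x = np.clip(parent + rng.normal(0, 0.1, 2), 0, 1)
    else:                                        # or sample randomly
        x = rng.random(2)
    f, b = objective(x), feature(x)
    if b not in elites or f > elites[b][0]:      # keep per-bin best
        elites[b] = (f, x)

filled = len(elites)
print(f"map bins filled: {filled}/10")
```

SAIL's contribution is to drive this loop on a cheap model of the objective and only spend real evaluations where the model is informative.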

A Unifying View of Sparse Approximate Gaussian Process Regression

TLDR
A new unifying view encompassing all existing proper probabilistic sparse approximations for Gaussian process regression is presented; it relies on expressing the effective prior each method uses, and it highlights the relationships between existing methods.

Sparse Gaussian Processes using Pseudo-inputs

TLDR
It is shown that this new Gaussian process (GP) regression model can match full GP performance with small M, i.e. very sparse solutions, and it significantly outperforms other approaches in this regime.
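The core idea of pseudo-inputs can be shown with a short sketch. This uses the simpler subset-of-regressors (SoR) approximation rather than the full model of the paper, but it demonstrates the same trade: N training points are summarised by M << N inducing points, cutting the cubic cost in N down to systems of size M.

```python
import numpy as np

# Sparse GP regression via M pseudo-inputs, subset-of-regressors variant
# (a simplification of the paper's model). Predictive mean:
#   mu(x*) = K_*u (K_uf K_fu + noise^2 K_uu)^{-1} K_uf y

def rbf(A, B, ell=0.2):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(3)
N, M, noise = 400, 15, 0.1
X = rng.random((N, 1))
y = np.sin(6 * X[:, 0]) + noise * rng.standard_normal(N)

Z = np.linspace(0, 1, M)[:, None]            # pseudo-input locations
Kuf = rbf(Z, X)                              # M x N cross-covariance
Kuu = rbf(Z, Z) + 1e-8 * np.eye(M)           # jitter for stability
A = Kuf @ Kuf.T + noise**2 * Kuu             # only an M x M system to solve
alpha = np.linalg.solve(A, Kuf @ y)

Xs = np.linspace(0, 1, 200)[:, None]
mu = rbf(Xs, Z) @ alpha                      # sparse predictive mean

rmse = np.sqrt(np.mean((mu - np.sin(6 * Xs[:, 0])) ** 2))
print(f"sparse-GP RMSE vs true function: {rmse:.4f}")
```

With only 15 inducing points the approximation tracks the underlying function closely, which is the "full GP performance with small M" claim in miniature.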

Multi-objective and semi-supervised heterogeneous classifier ensembles

TLDR
A multi-objective ensemble generation method that creates a group of members so that diversity among the base learners can be explicitly maintained, and a novel semi-supervised ensemble learning algorithm, termed Multi-Train, that uses semi-supervised learning algorithms to learn from unlabelled data.

Deep Residual Learning for Image Recognition

TLDR
This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
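The identity behind this framework fits in a few lines: a residual block computes y = x + F(x), so with the residual branch at zero the block is an exact identity map, which is why very deep stacks do not degrade the signal. A minimal numpy sketch:

```python
import numpy as np

# Minimal residual block: y = x + F(x). With the branch F initialised to
# zero, every block is an exact identity, so stacking many of them cannot
# corrupt the input; this is the property deep ResNets exploit.

rng = np.random.default_rng(4)

def residual_block(x, W1, W2):
    h = np.maximum(0.0, x @ W1)      # ReLU branch
    return x + h @ W2                # skip connection

x = rng.standard_normal((5, 8))
W1 = np.zeros((8, 8))                # zero-initialised residual branch
W2 = np.zeros((8, 8))
out = x
for _ in range(50):                  # 50 stacked blocks
    out = residual_block(out, W1, W2)

identity = bool(np.allclose(out, x))
print("50 zero-init residual blocks act as identity:", identity)
```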

Simple Evolutionary Optimization Can Rival Stochastic Gradient Descent in Neural Networks

TLDR
It is suggested that EAs can be made to run significantly faster than previously thought by evaluating individuals only on a small number of training examples per generation, thereby opening up deep learning to all the tools of evolutionary computation.

A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning

TLDR
A tutorial on Bayesian optimization, a method of finding the maximum of expensive cost functions using the Bayesian technique of setting a prior over the objective function and combining it with evidence to get a posterior function.
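The loop the tutorial describes can be sketched compactly: place a GP prior on the objective, condition on the evaluations so far, and choose the next point with an acquisition function. This sketch uses an upper confidence bound acquisition and a toy objective; both are illustrative choices, not the tutorial's examples.

```python
import numpy as np

# Minimal Bayesian optimisation: GP posterior over a 1-D objective,
# upper-confidence-bound acquisition to pick each next evaluation.

def rbf(A, B, ell=0.15):
    return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ell**2)

def expensive(x):                      # toy "expensive" objective, max ~0.71
    return np.sin(5 * x) * (1 - x)

rng = np.random.default_rng(5)
Xs = np.linspace(0, 1, 200)
X = list(rng.random(3))                # three random initial evaluations
Y = [expensive(x) for x in X]

for _ in range(15):
    Xa, Ya = np.array(X), np.array(Y)
    K = rbf(Xa, Xa) + 1e-6 * np.eye(len(Xa))
    Ks = rbf(Xs, Xa)
    mu = Ks @ np.linalg.solve(K, Ya)                 # posterior mean
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    ucb = mu + 2.0 * np.sqrt(np.maximum(var, 0))     # acquisition function
    x_next = Xs[int(np.argmax(ucb))]
    X.append(float(x_next))
    Y.append(float(expensive(x_next)))

best = max(Y)
print(f"best value found after 15 BO steps: {best:.4f}")
```

The acquisition trades off the posterior mean (exploitation) against the posterior variance (exploration), which is the mechanism that makes the method sample-efficient on expensive objectives.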

Curse and Blessing of Uncertainty in Evolutionary Algorithm Using Approximation

TLDR
A study of the effects of surrogate uncertainty on surrogate-assisted evolutionary algorithms (SAEAs). Borrowing from the 'curse and blessing of dimensionality' of Donoho (2000), it focuses on both the 'curse of uncertainty' and the 'blessing of uncertainty', the latter referring to the benefits that approximation errors can bring to evolutionary search.