Robustness of Populations in Stochastic Environments
TLDR
Stochastic versions of OneMax and LeadingOnes are considered, and the performance of evolutionary algorithms with and without populations on these problems is analyzed, finding that even small populations can make evolutionary algorithms perform well under high noise levels, well outside the abilities of the (1+1) EA.
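To make the baseline concrete, below is a minimal sketch of a (1+1) EA on a noisy OneMax; the additive Gaussian posterior noise model and all parameter values are illustrative assumptions, not the paper's exact setup.

```python
import random

def noisy_onemax(x, sigma=1.0):
    # OneMax fitness plus additive Gaussian posterior noise
    # (one of several noise models one could assume here).
    return sum(x) + random.gauss(0.0, sigma)

def one_plus_one_ea(n=100, sigma=1.0, max_evals=100_000):
    # Minimal (1+1) EA with standard bit mutation; parent and offspring
    # are both reevaluated in every step, so a single lucky noisy
    # sample cannot be locked in.
    x = [random.randint(0, 1) for _ in range(n)]
    for _ in range(max_evals):
        y = [1 - b if random.random() < 1.0 / n else b for b in x]
        if noisy_onemax(y, sigma) >= noisy_onemax(x, sigma):
            x = y
        if sum(x) == n:  # noise-free optimality check, for the demo only
            return x
    return x
```

With this selection rule, a single deceptive noise sample can cause the EA to accept a worse offspring, which is exactly the failure mode that populations mitigate.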
ACO Beats EA on a Dynamic Pseudo-Boolean Function
TLDR
A bit string optimization problem is constructed where it is shown that the ACO system is able to follow the optimum while the EA gets lost.
The Compact Genetic Algorithm is Efficient Under Extreme Gaussian Noise
TLDR
The concept of graceful scaling is introduced, in which the run time of an algorithm scales polynomially with the noise intensity, and it is shown that a simple EDA, the compact genetic algorithm, can overcome the shortsightedness of mutation-only heuristics and scale gracefully with noise.
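The following is a compact sketch of the cGA under additive Gaussian noise; the hypothetical population-size parameter K, the noise model, and the frequency borders are assumptions for illustration. The point it mirrors is that the cGA averages many samples into its frequency vector instead of committing to single noisy comparisons.

```python
import random

def noisy_onemax(x, sigma):
    return sum(x) + random.gauss(0.0, sigma)

def cga(n=100, K=1000, sigma=10.0, max_iters=1_000_000):
    # Compact GA: one frequency per bit replaces an explicit population.
    p = [0.5] * n
    lo, hi = 1.0 / n, 1.0 - 1.0 / n  # the usual frequency borders
    for _ in range(max_iters):
        x = [1 if random.random() < pi else 0 for pi in p]
        y = [1 if random.random() < pi else 0 for pi in p]
        if noisy_onemax(x, sigma) < noisy_onemax(y, sigma):
            x, y = y, x  # make x the (noisy) winner
        for i in range(n):
            if x[i] != y[i]:
                # Shift the frequency by 1/K toward the winner's bit.
                step = 1.0 / K if x[i] == 1 else -1.0 / K
                p[i] = min(hi, max(lo, p[i] + step))
        if all(pi >= hi for pi in p):
            break
    return p
```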
MenuOptimizer: interactive optimization of menu systems
TLDR
MenuOptimizer supports designers' abilities to cope with uncertainty and recognize good solutions, and allows designers to delegate combinatorial problems to the optimizer, which should solve them quickly enough without disrupting the design process.
Ants easily solve stochastic shortest path problems
TLDR
This work proposes a slightly different ant optimizer to deal with noise and proves that, under mild conditions, it efficiently finds paths of shortest expected length, even though it does not converge in the classic sense.
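A rough sketch of an MMAS-style ant optimizer on a stochastic shortest path instance is given below; the toy graph, the exponential edge-weight noise, and all parameters are illustrative assumptions. The one idea carried over from the setting above is that the best-so-far path is reevaluated in every comparison rather than trusting a single noisy sample.

```python
import random

# Hypothetical DAG: node -> list of (successor, mean weight); each traversal
# samples a fresh exponential weight around the mean (an assumed noise model).
GRAPH = {
    "s": [("a", 1.0), ("b", 2.0)],
    "a": [("t", 2.0)],
    "b": [("t", 0.5)],
    "t": [],
}

def sample_cost(path):
    # Fresh noisy evaluation of a path's length.
    return sum(random.expovariate(1.0 / dict(GRAPH[u])[v])
               for u, v in zip(path, path[1:]))

def construct(pher):
    # One ant builds an s-t path, picking edges proportional to pheromone.
    path, node = ["s"], "s"
    while node != "t":
        succs = [v for v, _ in GRAPH[node]]
        weights = [pher[(node, v)] for v in succs]
        node = random.choices(succs, weights=weights, k=1)[0]
        path.append(node)
    return path

def aco(iters=2000, rho=0.1, tau_min=0.1, tau_max=10.0):
    pher = {(u, v): 1.0 for u, edges in GRAPH.items() for v, _ in edges}
    best = construct(pher)
    for _ in range(iters):
        cand = construct(pher)
        # Compare fresh noisy samples of BOTH paths: reevaluating the
        # incumbent keeps one lucky sample from freezing the search.
        if sample_cost(cand) <= sample_cost(best):
            best = cand
        best_edges = set(zip(best, best[1:]))
        for e in pher:
            deposit = rho if e in best_edges else 0.0
            pher[e] = min(tau_max, max(tau_min, (1 - rho) * pher[e] + deposit))
    return best
```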
EDAs cannot be Balanced and Stable
TLDR
It is proved that no EDA can be both balanced and stable, and a stable EDA is given which optimizes LeadingOnes within a time of O(n log n).
Optimizing expected path lengths with ant colony optimization using fitness proportional update
TLDR
This work analyzes a variant of MMAS with fitness-proportional update on graphs with arbitrary random edge weights from [0, 1]; to this end, the multiplicative and the variable drift theorems are adapted to continuous search spaces.
Fast Learning of Restricted Regular Expressions and DTDs
TLDR
For each of the two language classes considered, an efficient algorithm is given returning a minimal generalization from the given finite sample to an element of the fixed language class; such generalizations are called descriptive.
Unknown solution length problems with no asymptotically optimal run time
TLDR
This work proves the first, almost matching, lower bounds for this setting and shows that, for LeadingOnes, the (1+1) EA with any mutation operator treating zeros and ones equally has an expected run time of ω(n^2 log(n) log log(n) ⋯ log^(s)(n)) when facing problem size n.