Stochastic versions of OneMax and LeadingOnes are considered, and the performance of evolutionary algorithms with and without populations on these problems is analyzed, finding that even small populations can make evolutionary algorithms perform well for high noise levels, well outside the abilities of the (1+1) EA.
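To fix the setting, the following is a minimal sketch of a (1+1) EA on a noisy OneMax variant. The additive-Gaussian noise model, the parameter names, and the acceptance scheme are illustrative assumptions, not the exact model analyzed in the paper.

```python
import random

def noisy_onemax(x, sigma):
    # Assumed posterior noise model: true OneMax value plus Gaussian noise.
    return sum(x) + random.gauss(0.0, sigma)

def one_plus_one_ea(n=20, sigma=0.0, max_iters=100_000):
    # (1+1) EA: a single parent, standard bit mutation with rate 1/n.
    x = [random.randint(0, 1) for _ in range(n)]
    fx = noisy_onemax(x, sigma)
    for _ in range(max_iters):
        # Flip each bit independently with probability 1/n.
        y = [b ^ 1 if random.random() < 1 / n else b for b in x]
        fy = noisy_onemax(y, sigma)
        if fy >= fx:  # accept ties; note fx is not reevaluated here
            x, fx = y, fy
        if sum(x) == n:  # true optimum reached
            break
    return x
```

Under noise (sigma > 0), selection can accept genuinely worse offspring, which is the failure mode that populations help to mitigate.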

The concept of graceful scaling is introduced in which the run time of an algorithm scales polynomially with noise intensity, and it is shown that a simple EDA called the compact genetic algorithm can overcome the shortsightedness of mutation-only heuristics to scale gracefully with noise.
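For contrast, here is a minimal sketch of the compact genetic algorithm on the same noisy OneMax setting. The Gaussian noise model, the population-size parameter K, and the border handling are illustrative assumptions, not the exact variant analyzed.

```python
import random

def noisy_onemax(x, sigma):
    # Assumed noise model: true OneMax value plus Gaussian noise.
    return sum(x) + random.gauss(0.0, sigma)

def cga(n=20, pop_size=None, sigma=0.0, max_iters=200_000):
    # Compact GA: maintain a frequency vector p instead of a population.
    K = pop_size or 10 * n  # hypothetical choice of the population size
    p = [0.5] * n
    for _ in range(max_iters):
        # Sample two solutions from the current frequency vector.
        x = [1 if random.random() < pi else 0 for pi in p]
        y = [1 if random.random() < pi else 0 for pi in p]
        if noisy_onemax(x, sigma) < noisy_onemax(y, sigma):
            x, y = y, x  # x is now the (noisily) better sample
        for i in range(n):
            if x[i] != y[i]:
                # Shift the frequency toward the winner by 1/K,
                # clamped to the borders [1/n, 1 - 1/n].
                p[i] += (1 / K) if x[i] == 1 else -(1 / K)
                p[i] = min(1 - 1 / n, max(1 / n, p[i]))
        if all(pi >= 1 - 1 / n for pi in p):
            break  # all frequencies at the upper border
    return [1 if pi > 0.5 else 0 for pi in p]
```

Because each comparison moves the frequencies only by 1/K, single noisy evaluations have a bounded effect, which is the intuition behind the graceful-scaling result.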

MenuOptimizer supports designers' abilities to cope with uncertainty and recognize good solutions, and allows designers to delegate combinatorial problems to the optimizer, which should solve them quickly enough without disrupting the design process.

This work proposes a slightly different ant optimizer to deal with noise, and proves that, under mild conditions, it efficiently finds the paths of shortest expected length, despite the fact that it does not converge in the classic sense.

This work analyzes a variant of MMAS with fitness-proportional update on stochastic-weight graphs with arbitrary random edge weights from [0,1], and adapts the multiplicative and the variable drift theorems to continuous search spaces.

For each of the two language classes considered, an efficient algorithm is given that returns a minimal generalization from the given finite sample to an element of the fixed language class; such generalizations are called descriptive.

This work proves the first, almost matching, lower bounds for this setting, and shows that, for LeadingOnes, the (1+1) EA with any mutation operator treating zeros and ones equally has an expected run time of ω(n^2 log(n) log log(n) ⋯ log^(s)(n)) for problem size n.