Christian Gießen

The (1+λ) EA with mutation probability c/n, where c > 0 is an arbitrary constant, is studied for the classical OneMax function. Its expected optimization time is analyzed exactly (up to lower-order terms) as a function of c and λ. It turns out that 1/n is the only optimal mutation probability if λ = o(ln n …
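For readers unfamiliar with the setting, here is a minimal sketch of the (1+λ) EA on OneMax with standard bit mutation at rate c/n; the function and parameter names (one_plus_lambda_ea, n, lam, c, max_evals) are illustrative and not taken from the paper.

```python
import random

def one_max(x):
    """OneMax fitness: the number of one-bits in the bit string."""
    return sum(x)

def one_plus_lambda_ea(n, lam, c, max_evals=10**6):
    """(1+lambda) EA on OneMax with mutation probability c/n (illustrative sketch)."""
    p = c / n
    parent = [random.randint(0, 1) for _ in range(n)]
    parent_fit = one_max(parent)
    evals = 1
    while parent_fit < n and evals < max_evals:
        best_child, best_fit = None, -1
        for _ in range(lam):
            # Standard bit mutation: flip each bit independently with probability c/n.
            child = [1 - b if random.random() < p else b for b in parent]
            fit = one_max(child)
            evals += 1
            if fit > best_fit:
                best_child, best_fit = child, fit
        # Elitist selection: replace the parent if the best offspring is at least as good.
        if best_fit >= parent_fit:
            parent, parent_fit = best_child, best_fit
    return evals

# Example run: for small offspring populations, c = 1 (mutation rate 1/n) is the optimal choice.
print(one_plus_lambda_ea(n=100, lam=10, c=1.0))
```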
We consider stochastic versions of OneMax and LeadingOnes and analyze the performance of evolutionary algorithms with and without populations on these problems. It is known that the (1+1) EA on OneMax performs well in the presence of very small noise, but poorly for higher noise levels. We extend these results to LeadingOnes and to many different noise …
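As a rough illustration of the noisy setting, the sketch below runs a (1+1) EA on OneMax where selection is based on noisy fitness evaluations. The one-bit prior-noise model and the names (noisy_one_max, p) are assumptions chosen for illustration, not necessarily the noise model analyzed in the paper.

```python
import random

def one_max(x):
    return sum(x)

def noisy_one_max(x, p):
    """Assumed prior-noise model: with probability p, a uniformly random bit
    is flipped before evaluation (the true search point is left unchanged)."""
    if random.random() < p:
        i = random.randrange(len(x))
        x = x[:i] + [1 - x[i]] + x[i + 1:]
    return one_max(x)

def one_plus_one_ea_noisy(n, p, max_evals=10**6):
    """(1+1) EA with standard 1/n bit mutation; selection uses noisy evaluations."""
    parent = [random.randint(0, 1) for _ in range(n)]
    evals = 0
    # The true fitness is used only as a stopping criterion for this sketch.
    while one_max(parent) < n and evals < max_evals:
        child = [1 - b if random.random() < 1.0 / n else b for b in parent]
        # Both individuals are re-evaluated noisily, so the worse one can win a comparison.
        if noisy_one_max(child, p) >= noisy_one_max(parent, p):
            parent = child
        evals += 2
    return evals

# Example run: small noise levels barely slow the (1+1) EA down on OneMax.
print(one_plus_one_ea_noisy(n=50, p=0.01))
```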