Continuous Lunches Are Free Plus the Design of Optimal Optimization Algorithms
  • Anne Auger
  • Olivier Teytaud

This paper analyses extensions of No-Free-Lunch (NFL) theorems to countably infinite and uncountably infinite domains and investigates the design of optimal optimization algorithms. The original NFL theorem, due to Wolpert and Macready, states that, for finite search domains, all search heuristics have the same performance when averaged over the uniform distribution over all possible functions. For infinite domains, extending the concept of a distribution over all possible functions involves…
A Probabilistic Reformulation of No Free Lunch: Continuous Lunches Are Not Free
A new formalization of probabilistic NFL is developed that is sufficiently expressive to prove the existence of NFL in large search domains, such as continuous spaces or function spaces, and fills an important gap in the study of performance of stochastic optimizers.
No Free Lunch Theorems: Limitations and Perspectives of Metaheuristics
  • C. Igel
  • Computer Science
    Theory and Principled Methods for the Design of Metaheuristics
  • 2014
The preconditions of the NFL theorems are unlikely to be fulfilled for a practical problem class, so differences between algorithms do exist; tailored algorithms can therefore exploit the structure underlying an optimization problem.
No Free Lunch Theorem: A Review
The objective of this paper is to go through the main research efforts that contributed to this research field, reveal the main issues, and disclose those points that are helpful in understanding the hypotheses, the restrictions, or even the inability of applying No Free Lunch theorems.
When and why metaheuristics researchers can ignore “No Free Lunch” theorems
After problematising the usually undefined term "domain", an argument is presented against a common paraphrase of NFL findings (that algorithms must be specialised to problem domains to do well), offering a novel view of the real meaning of NFL.
Parameter Tuning by Simple Regret Algorithms and Multiple Simultaneous Hypothesis Testing
It is seen that, for moderate numbers of arms, the possible improvement in the computational power required for statistical validation can be at most linear in the number of arms, and a simple rule is provided for checking whether the simple uniform algorithm (which is trivially parallel) is relevant.
Arbitrary function optimisation with metaheuristics
This work proposes an empirical framework, the arbitrary function optimisation framework, that allows researchers to formulate conclusions independent of the benchmark problems actually addressed, as long as the context of the problem class is mentioned, and presents the first thorough empirical study of the no-free-lunch theorems.
A Review of No Free Lunch Theorems, and Their Implications for Metaheuristic Optimisation
It is shown that understanding the No Free Lunch theorems brings us to a position where we can ask about the specific dynamics of an optimisation algorithm, and how those dynamics relate to the properties of optimisation problems.
Free Lunch or no Free Lunch: that is not Just a Question?
  • Xin-She Yang
  • Computer Science
    Int. J. Artif. Intell. Tools
  • 2012
The recent results on no-free-lunch theorems and algorithm convergence, as well as their important implications for algorithm development in practice are discussed.
No-Free-Lunch theorems in the continuum
A No-Free-Lunch theorem for non-uniform distributions of target functions
The sharpened No-Free-Lunch theorem (NFL theorem) states that, regardless of the performance measure, the performance of all optimization algorithms averaged uniformly over any finite set F of functions is equal if and only if F is closed under permutation (c.u.p.).
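The "closed under permutation" condition can be made concrete on a tiny domain: averaged over a permutation-closed set of functions, every query order performs identically, while a set that is not c.u.p. separates the orders. A minimal sketch (the set names and the best-value-so-far performance measure are illustrative choices):

```python
from itertools import permutations

# F_cup: all permutations of the values (0, 0, 1) -- closed under permutation
F_cup = {(0, 0, 1), (0, 1, 0), (1, 0, 0)}
# F_not: a single function -- NOT closed under permutation
F_not = {(1, 0, 0)}

def avg_best(F, order, k):
    # mean, over the functions in F, of the best value seen in the
    # first k queries when points are visited in the given order
    return sum(max(f[x] for x in order[:k]) for f in F) / len(F)

orders = list(permutations(range(3)))
# over the c.u.p. set every query order ties; over the other set they differ
assert len({avg_best(F_cup, o, 1) for o in orders}) == 1
assert len({avg_best(F_not, o, 1) for o in orders}) > 1
```

On `F_not`, an algorithm that queries point 0 first finds the maximum immediately, while one that starts elsewhere does not; closing the set under permutation erases exactly this kind of advantage.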
Optimization with randomized search heuristics - the (A)NFL theorem, realistic scenarios, and difficult functions
On the Futility of Blind Search: An Algorithmic View of No Free Lunch
It is suggested that the evolution of complex systems exhibiting high degrees of orderliness is not equivalent in difficulty to optimizing hard problems, and that the optimism in genetic algorithms as universal optimizers is not justified by natural evolution.
No free lunch theorems for optimization
A framework is developed to explore the connection between effective optimization algorithms and the problems they are solving. A number of "no free lunch" (NFL) theorems are presented which establish that, for any algorithm, any elevated performance over one class of problems is offset by performance over another class.
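The finite-domain NFL claim can be verified exhaustively on a toy instance: over all functions from a three-point domain to {0, 1}, every deterministic non-repeating query order achieves the same average best-value-found at every horizon. A minimal sketch (domain size and the performance measure are illustrative choices):

```python
from itertools import product, permutations

X = range(3)   # a tiny finite search domain
Y = (0, 1)     # possible objective values
functions = list(product(Y, repeat=len(X)))  # all |Y|^|X| = 8 functions f: X -> Y

def avg_best(order, k):
    # mean, over all functions, of the best value seen in the first k queries
    return sum(max(f[x] for x in order[:k]) for f in functions) / len(functions)

# every deterministic (non-repeating) query order performs identically on average
for k in range(1, len(X) + 1):
    assert len({avg_best(order, k) for order in permutations(X)}) == 1
```

Averaging over all functions makes the unvisited points exchangeable, so no ordering of queries can do better than any other: the assertion holds for every horizon k.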
Perhaps Not a Free Lunch But At Least a Free Appetizer
It is argued why the scenario on which the No Free Lunch Theorem is based does not model real life optimization, and why optimization techniques differ in their efficiency.
Completely Derandomized Self-Adaptation in Evolution Strategies
This paper puts forward two useful methods for self-adaptation of the mutation distribution - the concepts of derandomization and cumulation and reveals local and global search properties of the evolution strategy with and without covariance matrix adaptation.
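Self-adaptation of the mutation distribution can be illustrated in its simplest form with an elitist (1+1)-ES using the classic 1/5th-success rule for step-size control; this is only a toy sketch of step-size adaptation, not the paper's derandomized covariance matrix adaptation (CMA-ES), and the adaptation factors below are conventional choices:

```python
import math
import random

def one_plus_one_es(f, x0, sigma=1.0, iters=500, seed=1):
    # elitist (1+1)-ES: keep the parent unless the mutated offspring is at
    # least as good; adapt sigma so roughly 1/5 of mutations succeed
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fy = f(y)
        if fy <= fx:                 # success: accept and enlarge the step
            x, fx = y, fy
            sigma *= 1.5
        else:                        # failure: shrink, targeting ~1/5 successes
            sigma /= 1.5 ** 0.25
    return x, fx

sphere = lambda v: sum(t * t for t in v)
best, val = one_plus_one_es(sphere, [5.0, -3.0])
assert val < sphere([5.0, -3.0])  # strictly improved on the starting point
```

The covariance-matrix machinery the paper analyses generalizes this idea from a single step size to the full shape of the mutation distribution, adapted deterministically from the observed search path.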
Fundamental Limitations on Search Algorithms: Evolutionary Computing in Perspective
This paper extends results and draws out some of their implications for the design of search algorithms, and for the construction of useful representations, and focuses attention on tailoring algorithms and representations to particular problem classes by exploiting domain knowledge.
The No Free Lunch and problem description length
A duality result which relates functions being optimized to algorithms optimizing them is obtained and is used to sharpen the No Free Lunch theorem.
Random fields, analysis and synthesis
Random variation over space and time is one of the few attributes that might safely be predicted as characterizing almost any given complex system. Random fields or "distributed disorder systems"…
Efficient Global Optimization of Expensive Black-Box Functions
This paper introduces the reader to a response surface methodology that is especially good at modeling the nonlinear, multimodal functions that often occur in engineering and shows how these approximating functions can be used to construct an efficient global optimization algorithm with a credible stopping rule.
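Response-surface methods of this kind pick the next expensive evaluation by maximizing an acquisition criterion over the surrogate's Gaussian prediction; the classic closed-form expected improvement for minimization can be sketched as follows (a standalone formula, not the paper's full kriging pipeline):

```python
import math

def expected_improvement(mu, sigma, f_best):
    # E[max(f_best - Y, 0)] for Y ~ Normal(mu, sigma^2): the expected
    # improvement over the incumbent f_best at a candidate point whose
    # surrogate prediction has mean mu and standard deviation sigma
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (f_best - mu) * cdf + sigma * pdf

# a lower predicted mean (other things equal) yields a larger EI
assert expected_improvement(0.0, 1.0, 0.0) > expected_improvement(0.5, 1.0, 0.0)
```

Because EI is zero wherever the surrogate is certain the point is worse than the incumbent, it balances exploiting low predicted means against exploring high-uncertainty regions, which is also what makes a credible stopping rule possible.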