A No-Free-Lunch theorem for non-uniform distributions of target functions

  • C. Igel, Marc Toussaint
  • Journal of Mathematical Modelling and Algorithms
The sharpened No-Free-Lunch theorem (NFL theorem) states that, regardless of the performance measure, the performance of all optimization algorithms averaged uniformly over any finite set F of functions is equal if and only if F is closed under permutation (c.u.p.). In this paper, we first summarize some recently proven consequences of this theorem: the number of subsets that are c.u.p. is negligible compared to the total number of possible subsets. In particular, problem classes…
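A tiny exhaustive check makes the c.u.p. condition concrete: the full set of functions from a 3-point domain to {0, 1} is trivially closed under permutation, so any two deterministic non-revisiting searchers achieve the same average performance over it. The fixed search orders and the performance measure below are illustrative choices, not taken from the paper:

```python
from itertools import product

X = [0, 1, 2]                                # 3-point search domain
# Every function f: X -> {0, 1}, stored as a tuple of values; the full
# set of such functions is trivially closed under permutation (c.u.p.).
ALL_FUNCS = list(product([0, 1], repeat=len(X)))

def evals_to_max(order, f):
    """Performance: evaluations a fixed-order searcher needs to first
    observe the global maximum of f (depends only on the observed trace)."""
    trace = [f[x] for x in order]
    return trace.index(max(trace)) + 1

# Two different deterministic, non-revisiting searchers.
alg_a, alg_b = [0, 1, 2], [2, 0, 1]

avg_a = sum(evals_to_max(alg_a, f) for f in ALL_FUNCS) / len(ALL_FUNCS)
avg_b = sum(evals_to_max(alg_b, f) for f in ALL_FUNCS) / len(ALL_FUNCS)
assert avg_a == avg_b                        # NFL: identical averages
```

The same equality holds for any other pair of non-revisiting search orders and any measure that is a function of the observed value trace, which is exactly what the sharpened theorem asserts.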
Continuous Lunches Are Free Plus the Design of Optimal Optimization Algorithms
It is proved that the natural extension of NFL theorems, for the current formalization of probability, does not hold, but that a weaker form of NFL does hold, by stating the existence of non-trivial distributions of fitness leading to equal performances for all search heuristics.
Generalization of the no-free-lunch theorem
  • Albert Y. S. Lam, V. Li
  • Mathematics, Computer Science
  • 2009 IEEE International Conference on Systems, Man and Cybernetics
  • 2009
Using results from the nature of search algorithms, several aspects of the original NFL Theorem are enhanced, including the properties of deterministic and probabilistic algorithms and an enumeration proof of the theorem.
What is important about the No Free Lunch theorems?
The No Free Lunch theorems prove that under a uniform distribution over induction problems (search problems or learning problems), all induction algorithms perform equally. They also motivate a ``dictionary'' between supervised learning and black-box optimization, which allows one to ``translate'' techniques from supervised learning into the domain of black-box optimization, thereby strengthening black-box optimization algorithms.
No-Free-Lunch theorems in the continuum
This paper provides another approach, which is simpler, requires fewer assumptions, relates the discrete and continuum cases, and, the authors believe, clarifies the role of the cardinality and structure of the domain.
Free lunches on the discrete Lipschitz class
It is concluded that there exist algorithms outperforming random search on the discrete Lipschitz class in both theoretical and practical aspects, indicating that the effectiveness of search heuristics may not be universal but may still be general in some broad sense.
Beyond No Free Lunch: Realistic algorithms for arbitrary problem classes
A new approach to reasoning about search algorithm performance is proposed, treating search algorithms as stochastic processes and thereby admitting revisiting. For this approach the authors need only assume that search algorithms are applied for optimisation (i.e. maximisation or minimisation), rather than considering arbitrary performance measures.
Free Lunch for optimisation under the universal distribution
A universal prior exists for which there is a free lunch, but where no particular class of functions is favoured over another, and upper and lower bounds on the size of the free lunch are proved.
No free lunch, bayesian inference, and utility: a decision-theoretic approach to optimization
Existing approaches to continuous optimization are essentially mechanisms for deciding which locations should be sampled in order to obtain information about a target function's global optimum.
A Review of No Free Lunch Theorems, and Their Implications for Metaheuristic Optimisation
It is shown that understanding the No Free Lunch theorems brings us to a position where we can ask about the specific dynamics of an optimisation algorithm, and how those dynamics relate to the properties of optimisation problems.
Coevolutionary free lunches
This paper presents a general framework covering most optimization scenarios and shows that in self-play there are free lunches: in coevolution some algorithms have better performance than other algorithms, averaged across all possible problems.


Recent Results on No-Free-Lunch Theorems for Optimization
The sharpened No-Free-Lunch theorem (NFL theorem) states that the performance of all optimization algorithms averaged over any finite set F of functions is equal if and only if F is closed under permutation (c.u.p.).
On Classes of Functions for which No Free Lunch Results Hold
The main result of this paper proves that the fraction of subsets that are c.u.p. is negligibly small, which means that classes of objective functions resulting from important classes of real-world problems are likely not to be c.u.p.
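The fraction can be computed exactly for a toy instance: with a 3-point domain and binary codomain there are 2^8 = 256 subsets of functions, but a subset is c.u.p. exactly when it is a union of orbits under domain permutations, and here there are only 4 orbits, giving 2^4 = 16 c.u.p. subsets. The following is a sketch of this counting, not code from the paper:

```python
from itertools import product, permutations

X = range(3)
funcs = list(product([0, 1], repeat=3))      # all f: {0,1,2} -> {0,1}

def orbit(f):
    """Orbit of f under all permutations of the domain X."""
    return frozenset(tuple(f[p[i]] for i in X) for p in permutations(X))

orbits = {orbit(f) for f in funcs}
cup_subsets = 2 ** len(orbits)               # c.u.p. subsets = unions of orbits
total_subsets = 2 ** len(funcs)
print(cup_subsets, total_subsets)            # prints: 16 256
```

Already at this size only 16/256 = 6.25% of subsets are c.u.p., and the fraction shrinks super-exponentially as the domain grows.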
Two Broad Classes of Functions for Which a No Free Lunch Result Does Not Hold
This work considers sets of functions with non-uniform associated probability distributions, and shows that a NFL result does not hold if the probabilities are assigned according either to description length or to a Solomonoff-Levin distribution.
Optimization with randomized search heuristics - the (A)NFL theorem, realistic scenarios, and difficult functions
An Almost No Free Lunch (ANFL) theorem shows that for each function which can be optimized efficiently by a search heuristic, many related functions can be constructed on which the same heuristic performs badly.
No free lunch theorems for optimization
A framework is developed to explore the connection between effective optimization algorithms and the problems they are solving. A number of "no free lunch" (NFL) theorems are presented which establish that for any algorithm, any elevated performance over one class of problems is offset by performance over another class.
No Free Lunch Theorems for Search
We show that all algorithms that search for an extremum of a cost function perform exactly the same, when averaged over all possible cost functions. In particular, if algorithm A outperforms algorithm B on some cost functions, then B must outperform A on others.
Remarks on a recent paper on the "no free lunch" theorems
The present authors explore the issues raised in that paper including the presentation of a simpler version of the NFL proof in accord with a suggestion made explicitly by Koppen (2000) and implicitly by Wolpert and Macready (1997).
Evaluation of Evolutionary and Genetic Optimizers: No Free Lunch
It is shown that the information an optimizer gains about unobserved values is ultimately due to its prior information of value distributions, and the result is generalized to an uncountable set of distributions.
Optimization is easy and learning is hard in the typical function
  • T. M. English
  • Computer Science
  • Proceedings of the 2000 Congress on Evolutionary Computation. CEC00 (Cat. No.00TH8512)
  • 2000
Elementary results in algorithmic information theory are invoked to show that almost all finite functions are highly random. That is, the shortest program generating a given function description is nearly as long as the description itself.
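The standard counting argument behind this claim fits in a few lines: there are fewer than 2^(n-k) binary programs shorter than n-k bits, so at most a 2^(-k) fraction of all n-bit descriptions can be compressed by k or more bits. This is a generic illustration of the counting bound, not code from the paper:

```python
# Counting argument: there are sum_{i < n-k} 2^i = 2^(n-k) - 1 binary
# programs shorter than n-k bits, so at most that many of the 2^n
# n-bit descriptions can be compressed by k or more bits.
n, k = 20, 5
num_strings = 2 ** n
num_short_programs = 2 ** (n - k) - 1
fraction_compressible = num_short_programs / num_strings
assert fraction_compressible < 2 ** -k       # under 1/32 of descriptions
```

Since the bound decays exponentially in k, almost every description is within a few bits of incompressible, i.e. highly random.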
A Free Lunch Proof for Gray versus Binary Encodings
A measure of complexity is proposed that counts the number of local minima in any given problem representation. A special class of functions with the maximum possible number of optima is also defined.