A No-Free-Lunch theorem for non-uniform distributions of target functions

@article{Igel2004ANT,
  title={A No-Free-Lunch theorem for non-uniform distributions of target functions},
  author={C. Igel and Marc Toussaint},
  journal={Journal of Mathematical Modelling and Algorithms},
  year={2004},
  volume={3},
  pages={313-322}
}
The sharpened No-Free-Lunch theorem (NFL theorem) states that, regardless of the performance measure, the performance of all optimization algorithms averaged uniformly over any finite set F of functions is equal if and only if F is closed under permutation (c.u.p.). In this paper, we first summarize some recently proven consequences of this theorem: the number of subsets that are c.u.p. is negligible compared to the total number of possible subsets. In particular, problem classes…
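To make the c.u.p. condition concrete, the following minimal Python sketch (an illustration, not from the paper) enumerates all 8 functions from a three-point domain to {0, 1}, checks closure under permutation for a single permutation orbit, confirms that deterministic non-revisiting searchers then have identical average performance, and counts how few of the 256 subsets are c.u.p.

```python
from itertools import chain, combinations, permutations, product

X = (0, 1, 2)                              # finite search space
Y = (0, 1)                                 # finite codomain
ALL_F = list(product(Y, repeat=len(X)))    # all |Y|^|X| = 8 functions, as value tuples

def closed_under_permutation(F):
    """True iff for every f in F and every permutation s of X, f o s is also in F."""
    F = set(F)
    return all(tuple(f[s[x]] for x in X) in F
               for f in F for s in permutations(range(len(X))))

def evals_to_optimum(f, order):
    """Deterministic non-revisiting search probing X in a fixed order;
    performance = number of evaluations until the maximum of f is seen."""
    best = max(f)
    return next(k for k, x in enumerate(order, 1) if f[x] == best)

F = {f for f in ALL_F if sorted(f) == [0, 1, 1]}   # one permutation orbit: c.u.p.
assert closed_under_permutation(F)

for order in [(0, 1, 2), (2, 0, 1), (1, 2, 0)]:    # three 'algorithms'
    avg = sum(evals_to_optimum(f, order) for f in F) / len(F)
    print(order, avg)                              # same average (4/3) for each order

# Counting argument in miniature: only unions of permutation orbits are c.u.p.
subsets = chain.from_iterable(combinations(ALL_F, r) for r in range(len(ALL_F) + 1))
print(sum(closed_under_permutation(S) for S in subsets), "of", 2 ** len(ALL_F))  # 16 of 256
```

Even at this toy scale only 16 of 256 subsets are c.u.p., and the fraction shrinks rapidly as the domain grows, which is the counting result summarized above.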

Continuous Lunches Are Free Plus the Design of Optimal Optimization Algorithms

TLDR
It is proved that the natural extension of NFL theorems to continuous domains does not hold under the standard formalization of probability, but that a weaker form of NFL does hold: there exist non-trivial distributions of fitness that lead to equal performance for all search heuristics.

Generalization of the no-free-lunch theorem

  • Albert Y. S. Lam, V. Li
  • Computer Science
    2009 IEEE International Conference on Systems, Man and Cybernetics
  • 2009
TLDR
Drawing on results about the nature of search algorithms, several aspects of the original NFL theorem are generalized, including the properties of deterministic and probabilistic algorithms, and an enumeration proof of the theorem is given.

What is important about the No Free Lunch theorems?

TLDR
The No Free Lunch theorems prove that under a uniform distribution over induction problems (search problems or learning problems), all induction algorithms perform equally. They also motivate a ``dictionary'' between supervised learning and blackbox optimization, which allows one to ``translate'' techniques from supervised learning into the domain of blackbox optimization, thereby strengthening blackbox optimization algorithms.

No-Free-Lunch theorems in the continuum

Free lunches on the discrete Lipschitz class

Beyond No Free Lunch: Realistic algorithms for arbitrary problem classes

TLDR
A new approach to reasoning about search algorithm performance is proposed, treating search algorithms as stochastic processes and thereby admitting revisiting; for this approach the authors need only make a simple assumption that search algorithms are applied for optimisation (i.e. maximisation or minimisation), rather than considering arbitrary performance measures.
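As a toy illustration of that modelling choice (not the paper's formalism), the sketch below compares a searcher that samples with revisiting against a non-revisiting one under the same evaluation budget, scoring both purely by the best value found, i.e., performance as optimisation:

```python
import random

def revisiting_search(f, domain, budget, rng):
    """Search as a stochastic process that may revisit points:
    each step samples uniformly from the whole domain."""
    return max(f(rng.choice(domain)) for _ in range(budget))

def non_revisiting_search(f, domain, budget, rng):
    """The classical NFL setting: no point is evaluated twice."""
    return max(f(x) for x in rng.sample(domain, min(budget, len(domain))))

rng = random.Random(1)
domain = list(range(16))
f = lambda x: -(x - 11) ** 2              # maximisation; optimum at x = 11
trials = 10_000
for search in (revisiting_search, non_revisiting_search):
    avg = sum(search(f, domain, 8, rng) for _ in range(trials)) / trials
    print(search.__name__, avg)           # revisiting wastes part of its budget
```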

Optimizing Monotone Functions Can Be Difficult

TLDR
This is the first time that a constant-factor change of the mutation probability changes the runtime by more than a constant factor.

Free Lunch for optimisation under the universal distribution

TLDR
A universal prior exists for which there is a free lunch, but where no particular class of functions is favoured over another, and upper and lower bounds on the size of the free lunch are proved.

Mutation Rate Matters Even When Optimizing Monotonic Functions

TLDR
This is the first time that a constant-factor change of the mutation probability changes the runtime by more than a constant factor; it is also shown that for mutation probability c/n with c < 1, the (1+1) EA finds the optimum of every such function in Θ(n log n) iterations.
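For readers unfamiliar with the setting of these two papers, here is a minimal (1+1) EA sketch with mutation probability c/n. It is run on OneMax purely for illustration; the lower bounds above concern specially constructed monotone functions, and the constant c is the decisive parameter:

```python
import random

def one_max(x):
    """OneMax, the simplest monotone pseudo-Boolean function."""
    return sum(x)

def one_plus_one_ea(f, n, c, max_iters=10**6, seed=0):
    """(1+1) EA: flip each bit independently with probability c/n;
    keep the offspring iff it is at least as good as the parent."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = f(x)
    for t in range(1, max_iters + 1):
        y = [b ^ (rng.random() < c / n) for b in x]
        fy = f(y)
        if fy >= fx:
            x, fx = y, fy
        if fx == n:
            return t                      # iterations until the optimum is found
    return max_iters

for c in (0.5, 1.0, 2.0):
    print(c, one_plus_one_ea(one_max, n=100, c=c))
```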
...

References

Recent Results on No-Free-Lunch Theorems for Optimization

The sharpened No-Free-Lunch theorem (NFL theorem) states that the performance of all optimization algorithms averaged over any finite set F of functions is equal if and only if F is closed under permutation (c.u.p.).

Two Broad Classes of Functions for Which a No Free Lunch Result Does Not Hold

TLDR
This work considers sets of functions with non-uniform associated probability distributions, and shows that a NFL result does not hold if the probabilities are assigned according either to description length or to a Solomonoff-Levin distribution.

No free lunch theorems for optimization

A framework is developed to explore the connection between effective optimization algorithms and the problems they are solving. A number of "no free lunch" (NFL) theorems are presented which establish that for any algorithm, any elevated performance over one class of problems is offset by performance over another class.

No Free Lunch Theorems for Search

TLDR
It is shown that all algorithms that search for an extremum of a cost function perform exactly the same, when averaged over all possible cost functions, which allows for mathematical benchmarks for assessing a particular search algorithm's performance.

Remarks on a recent paper on the "no free lunch" theorems

TLDR
The present authors explore the issues raised in that paper including the presentation of a simpler version of the NFL proof in accord with a suggestion made explicitly by Koppen (2000) and implicitly by Wolpert and Macready (1997).

Evaluation of Evolutionary and Genetic Optimizers: No Free Lunch

TLDR
It is shown that the information an optimizer gains about unobserved values is ultimately due to its prior information of value distributions, and the result is generalized to an uncountable set of distributions.

Optimization is easy and learning is hard in the typical function

  • T. M. English
  • Computer Science
    Proceedings of the 2000 Congress on Evolutionary Computation. CEC00 (Cat. No.00TH8512)
  • 2000
Elementary results in algorithmic information theory are invoked to show that almost all finite functions are highly random. That is, the shortest program generating a given function description is nearly as long as the description itself.

A Free Lunch Proof for Gray versus Binary Encodings

TLDR
A measure of complexity is proposed that counts the number of local minima in any given problem representation, and it is shown that reflected Gray code induces more optima than Binary over this special class of functions.
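The sketch below (an illustration of the proposed complexity measure, not the paper's proof) counts local minima under single-bit-flip neighbourhoods for standard Binary versus reflected Gray encoding of a small test function; swapping in other functions shows how the count depends on the representation:

```python
def gray(i):
    """Standard reflected Gray code of the integer i."""
    return i ^ (i >> 1)

def count_local_minima(f, bits, encode):
    """Count points whose encoded bit string has no single-bit-flip
    neighbour with a strictly smaller function value."""
    n = 1 << bits
    point_of = {encode(i): i for i in range(n)}   # bit pattern -> point
    minima = 0
    for i in range(n):
        e = encode(i)
        neighbours = (point_of[e ^ (1 << b)] for b in range(bits))
        if all(f(i) <= f(j) for j in neighbours):
            minima += 1
    return minima

f = lambda i: (i - 5) ** 2 % 17                   # arbitrary test function on 0..15
print("binary:", count_local_minima(f, 4, lambda i: i))
print("gray:  ", count_local_minima(f, 4, gray))
```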