No free lunch theorems for optimization

@article{Wolpert1997NoFL,
  title={No free lunch theorems for optimization},
  author={David H. Wolpert and William G. Macready},
  journal={IEEE Trans. Evol. Comput.},
  year={1997},
  volume={1},
  pages={67-82}
}
A framework is developed to explore the connection between effective optimization algorithms and the problems they are solving. A number of "no free lunch" (NFL) theorems are presented which establish that for any algorithm, any elevated performance over one class of problems is offset by performance over another class. These theorems result in a geometric interpretation of what it means for an algorithm to be well suited to an optimization problem. Applications of the NFL theorems to…
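As a rough, self-contained illustration of the averaging argument behind the NFL theorems (not code from the paper), the sketch below enumerates every cost function on a tiny domain and shows that two fixed, non-revisiting search orders accumulate identical best-so-far totals once averaged over all functions; the domain, codomain, and function names are illustrative assumptions of mine.

```python
# Hypothetical illustration of the NFL averaging argument on a tiny search
# space: domain X = {0,1,2}, cost values Y = {0,1}. For every cost function
# f: X -> Y we run a deterministic, non-revisiting search and record the
# best value seen so far after each step. Summed over all |Y|^|X| functions,
# the traces of any two search orders coincide.
from itertools import product

X = [0, 1, 2]          # search points
Y = [0, 1]             # cost values (lower is better)

def run(order, f):
    """Visit points in a fixed order; return the best-so-far trace."""
    best, trace = None, []
    for x in order:
        v = f[x]
        best = v if best is None else min(best, v)
        trace.append(best)
    return trace

def averaged_traces(order):
    """Sum best-so-far traces over every possible cost function f: X -> Y."""
    total = [0] * len(X)
    for values in product(Y, repeat=len(X)):
        f = dict(zip(X, values))
        for i, b in enumerate(run(order, f)):
            total[i] += b
    return total

# Two different fixed search orders ("algorithms") give identical totals.
print(averaged_traces([0, 1, 2]))   # [4, 2, 1]
print(averaged_traces([2, 0, 1]))   # [4, 2, 1]  -> no free lunch
```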
No Free Lunch Theorem: A Review
TLDR
The objective of this paper is to go through the main research efforts that contributed to this research field, reveal the main issues, and disclose those points that are helpful in understanding the hypotheses, the restrictions, or even the inability to apply No Free Lunch theorems.
Requirements for papers focusing on new or improved global optimization algorithms
TLDR
The No-Free-Lunch theorem (Wolpert and Macready 1997) places an important limitation on global optimization algorithms, which means that when a new or improved global optimization algorithm is proposed, it should be targeted towards a particular application or set of applications rather than tested against a fixed set of problems.
Conditions that Obviate the No-Free-Lunch Theorems for Optimization
TLDR
This paper looks more closely at the NFL results and focuses on their implications for combinatorial problems typically faced by many researchers and practitioners, finding that only trivial subclasses of these problems fall under the NFL implications.
Recent Results on No-Free-Lunch Theorems for Optimization
The sharpened No-Free-Lunch theorem (NFL theorem) states that the performance of all optimization algorithms averaged over any finite set F of functions is equal if and only if F is closed under…
A Graphical Model for Evolutionary Optimization
TLDR
A statistical model of empirical optimization is presented that admits the creation of algorithms with explicit and intuitively defined desiderata and provides a direct way to answer the traditionally difficult question of what algorithm is best matched to a particular class of functions.
A Study of Some Implications of the No Free Lunch Theorem
TLDR
It is proved that each set of functions based on the distance to a given optimal solution (among which are trap functions, onemax, and the recently introduced onemix functions), as well as the NK-landscapes, is not c.u.p., and thus the thesis of the sharpened No Free Lunch Theorem does not hold for them.
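Since both this entry and the sharpened-NFL entry above hinge on whether a set of functions is closed under permutation (c.u.p.), a small sketch may help. It is my own illustration of the standard definition (F is c.u.p. if f∘σ ∈ F for every f ∈ F and every permutation σ of the domain), not code from either paper.

```python
# Minimal check of whether a finite set F of functions on a small domain X
# is closed under permutation (c.u.p.), the precondition of the sharpened NFL
# theorem. Each function is represented as a tuple of values (f(x) for x in X).
from itertools import permutations

def is_cup(F, X):
    F = set(F)
    for f in F:
        for sigma in permutations(range(len(X))):
            permuted = tuple(f[sigma[i]] for i in range(len(X)))
            if permuted not in F:
                return False
    return True

X = [0, 1, 2]
single_fn = {(0, 1, 2)}                      # one non-constant function
print(is_cup(single_fn, X))                  # False: not closed under permutation
full_orbit = {(0, 1, 2), (0, 2, 1), (1, 0, 2),
              (1, 2, 0), (2, 0, 1), (2, 1, 0)}
print(is_cup(full_orbit, X))                 # True: the full permutation orbit
```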
Searching for a Practical Evidence of the No Free Lunch Theorems
TLDR
Several test functions for which Random Search performs better than all other considered algorithms have been evolved and show the effectiveness of the proposed evolutionary approach.
Simple Explanation of the No Free Lunch Theorem of Optimization
The No Free Lunch Theorem of Optimization (NFLT) is an impossibility theorem telling us that a general-purpose universal optimization strategy is impossible, and the only way one strategy can…
A framework for co-optimization algorithm performance and its application to worst-case optimization
TLDR
There exist algorithms that are aggregately-optimal for any budget and any starting point and also non-trivially strictly optimal for many budgets and starting points, and this framework can be used to bridge several fields of research by allowing formalization of their various problems and views on performance.
No Free Lunch Theorems: Limitations and Perspectives of Metaheuristics
  • C. Igel
  • Computer Science
  • Theory and Principled Methods for the Design of Metaheuristics
  • 2014
TLDR
It is not likely that the preconditions of the NFL theorems are fulfilled for a problem class, and thus differences between algorithms exist; therefore, tailored algorithms can exploit structure underlying the optimization problem.

References

SHOWING 1-10 OF 30 REFERENCES
No Free Lunch Theorems for Search
We show that all algorithms that search for an extremum of a cost function perform exactly the same, when averaged over all possible cost functions. In particular, if algorithm A outperforms…
What makes an optimization problem hard?
TLDR
It is shown that according to this quantity, there is no distinction between optimization problems, and in this sense no problems are intrinsically harder than others.
Optimization by Simulated Annealing
TLDR
A detailed analogy with annealing in solids provides a framework for optimization of the properties of very large and complex systems.
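For context on the cited method, here is a minimal simulated-annealing sketch. The cost function, neighborhood, cooling schedule, and parameter values are illustrative assumptions of mine, not the procedure as published by Kirkpatrick, Gelatt, and Vecchi.

```python
# Minimal simulated annealing: accept worse moves with probability
# exp(-delta / T) and cool T geometrically.
import math
import random

def simulated_annealing(cost, x0, neighbor, t0=1.0, alpha=0.95, steps=1000):
    x, t = x0, t0
    best, best_cost = x0, cost(x0)
    for _ in range(steps):
        y = neighbor(x)
        delta = cost(y) - cost(x)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            x = y
            if cost(x) < best_cost:
                best, best_cost = x, cost(x)
        t *= alpha                      # geometric cooling schedule
    return best, best_cost

# Toy usage: minimize (x - 7)^2 over the integers.
print(simulated_annealing(lambda x: (x - 7) ** 2,
                          x0=0,
                          neighbor=lambda x: x + random.choice([-1, 1])))
```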
Branch-and-Bound Methods: A Survey
TLDR
The essential features of the branch-and-bound approach to constrained optimization are described, and several specific applications are reviewed, including integer linear programming (Land-Doig and Balas methods), nonlinear programming (minimization of nonconvex objective functions), and the quadratic assignment problem (Gilmore and Lawler methods).
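As a concrete instance of the branch-and-bound scheme the survey describes, the following sketch solves a toy 0/1 knapsack, pruning with a fractional (greedy) upper bound. The problem choice and bounding rule are my own illustrative assumptions, not taken from the survey.

```python
# Branch-and-bound for 0/1 knapsack: branch on take/skip per item, prune any
# node whose optimistic fractional bound cannot beat the incumbent.
def knapsack_bb(values, weights, capacity):
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)

    def bound(i, value, room):
        """Optimistic bound: fill remaining room greedily, allowing fractions."""
        for j in order[i:]:
            if weights[j] <= room:
                room -= weights[j]
                value += values[j]
            else:
                return value + values[j] * room / weights[j]
        return value

    best = 0
    def branch(i, value, room):
        nonlocal best
        if i == len(order) or room == 0:
            best = max(best, value)
            return
        if bound(i, value, room) <= best:
            return                              # prune this subtree
        j = order[i]
        if weights[j] <= room:                  # branch 1: take item j
            branch(i + 1, value + values[j], room - weights[j])
        branch(i + 1, value, room)              # branch 2: skip item j

    branch(0, 0, capacity)
    return best

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))   # 220
```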
Tabu Search - Part II
  • F. Glover
  • Mathematics, Computer Science
  • INFORMS J. Comput.
  • 1990
TLDR
The elements of staged search and structured move sets are characterized, which bear on the issue of finiteness, and new dynamic strategies for managing tabu lists are introduced, allowing fuller exploitation of underlying evaluation functions.
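A bare-bones sketch of the short-term-memory idea behind tabu search may be useful here; it keeps only a fixed-length list of recently visited solutions and is my own simplification, not the staged-search machinery or tabu-list management strategies described in Glover's paper.

```python
# Minimal tabu search: always move to the best non-tabu neighbor (even if it
# is worse), remembering recent solutions in a fixed-length tabu list.
from collections import deque
import random

def tabu_search(cost, x0, neighbors, tenure=5, steps=200):
    x, best, best_cost = x0, x0, cost(x0)
    tabu = deque(maxlen=tenure)          # recently visited solutions are tabu
    for _ in range(steps):
        candidates = [y for y in neighbors(x) if y not in tabu]
        if not candidates:
            break
        x = min(candidates, key=cost)    # best admissible neighbor
        tabu.append(x)
        if cost(x) < best_cost:
            best, best_cost = x, cost(x)
    return best, best_cost

# Toy usage: minimize (x - 42)^2 over the integers.
print(tabu_search(lambda x: (x - 42) ** 2, 0, lambda x: [x - 1, x + 1]))
```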
The Existence of A Priori Distinctions Between Learning Algorithms
TLDR
It is shown, loosely speaking, that for loss functions other than zero-one (e.g., quadratic loss) there are a priori distinctions between algorithms; it is also shown that any algorithm is equivalent on average to its randomized version, and in this sense still has no first-principles justification in terms of average error.
Introduction to Random Fields
One means of generalizing denumerable stochastic processes {x_n} with time parameter set ℕ = {0, 1, ...} is to consider random fields {x_t}, where t takes on values in an arbitrary countable…
The Lack of A Priori Distinctions Between Learning Algorithms
  • D. Wolpert
  • Computer Science, Mathematics
  • Neural Computation
  • 1996
TLDR
It is shown that one cannot say: if the empirical misclassification rate is low, the Vapnik-Chervonenkis dimension of your generalizer is small, and the training set is large, then with high probability your off-training-set (OTS) error is small.
On Bias Plus Variance
  • D. Wolpert
  • Mathematics, Computer Science
  • Neural Computation
  • 1997
This article presents several additive corrections to the conventional quadratic loss bias-plus-variance formula. One of these corrections is appropriate when both the target is not fixed (as in…
The Lack of A Priori Distinctions Between Learning Algorithms
TLDR
If n >> m and π(x) is uniform, though, almost all of the terms in the sum have m' = m, so the summand does not vary drastically between d_X's of one m' and d_X's of another.