# No free lunch theorems for optimization

```bibtex
@article{Wolpert1997NoFL,
  title   = {No free lunch theorems for optimization},
  author  = {David H. Wolpert and William G. Macready},
  journal = {IEEE Trans. Evol. Comput.},
  year    = {1997},
  volume  = {1},
  pages   = {67--82}
}
```

A framework is developed to explore the connection between effective optimization algorithms and the problems they are solving. A number of "no free lunch" (NFL) theorems are presented which establish that for any algorithm, any elevated performance over one class of problems is offset by performance over another class. These theorems result in a geometric interpretation of what it means for an algorithm to be well suited to an optimization problem. Applications of the NFL theorems to…
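The core NFL claim can be checked exhaustively on a toy search space. The sketch below is my own illustration (not from the paper, and all names are mine): it enumerates every function f: X → Y on a three-point domain and shows that two fixed, non-repeating search orders observe exactly the same multiset of value sequences when averaged over all functions, so no performance measure based on observed values can distinguish them.

```python
# Hypothetical illustration of the NFL result on a tiny search space:
# over ALL functions f: X -> Y, any two non-repeating query orders see
# the same multiset of value traces.
from itertools import product

X = [0, 1, 2]          # search points
Y = [0, 1]             # objective values

def trace(order, f):
    """Sequence of objective values an algorithm observes along its query order."""
    return tuple(f[x] for x in order)

# Enumerate all |Y|^|X| = 8 possible objective functions.
all_functions = [dict(zip(X, ys)) for ys in product(Y, repeat=len(X))]

alg_a = [0, 1, 2]      # algorithm A: left-to-right scan
alg_b = [2, 0, 1]      # algorithm B: a different fixed order

hist_a = sorted(trace(alg_a, f) for f in all_functions)
hist_b = sorted(trace(alg_b, f) for f in all_functions)

# Both algorithms observe the same multiset of traces over all functions,
# so any average performance measure over this set is identical.
print(hist_a == hist_b)  # True
```

The same comparison holds for any pair of non-repeating deterministic algorithms on this domain, which is the content of the averaged-performance equality.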

## 9,606 Citations

### No Free Lunch Theorem: A Review

- Computer Science
- Approximation and Optimization
- 2019

The objective of this paper is to go through the main research efforts that contributed to this research field, reveal the main issues, and disclose those points that are helpful in understanding the hypotheses, the restrictions, or even the inability of applying No Free Lunch theorems.

### Requirements for papers focusing on new or improved global optimization algorithms

- Computer Science
- 2016

The No-Free Lunch theorem (Wolpert and Macready 1997) provides an important limitation of global optimization algorithms that means that when a new or improved global optimization algorithm is proposed, it should be targeted towards a particular application or set of applications rather than tested against a fixed set of problems.

### Conditions that Obviate the No-Free-Lunch Theorems for Optimization

- Computer Science
- INFORMS J. Comput.
- 2007

This paper looks more closely at the NFL results and focuses on their implications for combinatorial problems typically faced by many researchers and practitioners, finding that only trivial subclasses of these problems fall under the NFL implications.

### Recent Results on No-Free-Lunch Theorems for Optimization

- Mathematics
- ArXiv
- 2003

The sharpened No-Free-Lunch-theorem (NFL-theorem) states that the performance of all optimization algorithms averaged over any finite set F of functions is equal if and only if F is closed under…
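The precondition of the sharpened NFL theorem is that the function set be closed under permutation of the domain (c.u.p., as abbreviated in a citation below). A small sketch of my own (all helper names hypothetical) makes the condition concrete for functions represented as value tuples over a fixed domain ordering:

```python
# Hypothetical sketch: test whether a set of functions on a finite domain is
# closed under permutation (c.u.p.), the precondition of the sharpened NFL
# theorem. A function is a tuple of values over a fixed domain ordering.
from itertools import permutations

def is_cup(F, domain_size):
    """F is c.u.p. iff permuting the domain of any f in F stays inside F."""
    F = set(F)
    for f in F:
        for perm in permutations(range(domain_size)):
            if tuple(f[p] for p in perm) not in F:
                return False
    return True

# OneMax on 2 bits (value = number of ones), domain ordered 00, 01, 10, 11.
onemax = (0, 1, 1, 2)
print(is_cup({onemax}, 4))   # False: a singleton set is not permutation-closed

# The full orbit of onemax under domain permutations IS c.u.p. by construction.
orbit = {tuple(onemax[p] for p in perm) for perm in permutations(range(4))}
print(is_cup(orbit, 4))      # True
```

Practically relevant sets such as a single benchmark function (first call) are almost never c.u.p., which is why the sharpened theorem's equal-performance conclusion rarely applies to them.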

### A Graphical Model for Evolutionary Optimization

- Computer Science
- Evolutionary Computation
- 2008

A statistical model of empirical optimization is presented that admits the creation of algorithms with explicit and intuitively defined desiderata, and provides a direct way to answer the traditionally difficult question of which algorithm is best matched to a particular class of functions.

### A Study of Some Implications of the No Free Lunch Theorem

- Computer Science
- EvoWorkshops
- 2008

It is proved that sets of functions based on the distance to a given optimal solution (among them trap functions, onemax, and the recently introduced onemix functions), as well as the NK-landscapes, are not c.u.p. (closed under permutation), and thus the thesis of the sharpened No Free Lunch Theorem does not hold for them.

### Searching for a Practical Evidence of the No Free Lunch Theorems

- Computer Science
- BioADIT
- 2004

Several test functions on which Random Search outperforms all other considered algorithms have been evolved, demonstrating the effectiveness of the proposed evolutionary approach.

### A framework for co-optimization algorithm performance and its application to worst-case optimization

- Computer Science
- Theor. Comput. Sci.
- 2015

### Simple Explanation of the No Free Lunch Theorem of Optimization

- Computer Science
- Proceedings of the 40th IEEE Conference on Decision and Control (Cat. No.01CH37228)
- 2001

A framework is presented for conceptualizing optimization problems that leads to useful insights and a simple explanation of the No Free Lunch Theorem of Optimization.

### No Free Lunch Theorems: Limitations and Perspectives of Metaheuristics

- Computer Science
- Theory and Principled Methods for the Design of Metaheuristics
- 2014

It is unlikely that the preconditions of the NFL theorems are fulfilled for a practical problem class, so differences between algorithms do exist; tailored algorithms can therefore exploit the structure underlying the optimization problem.

## References

Showing 1–10 of 22 references.

### No Free Lunch Theorems for Search

- Computer Science, Mathematics
- 1995

It is shown that all algorithms that search for an extremum of a cost function perform exactly the same, when averaged over all possible cost functions, which allows for mathematical benchmarks for assessing a particular search algorithm's performance.

### What makes an optimization problem hard?

- Computer Science, Mathematics
- Complex.
- 1996

It is shown that, according to this quantity, there is no distinction between optimization problems, and in this sense no problems are intrinsically harder than others.

### Optimization by Simulated Annealing

- Physics
- Science
- 1983

A detailed analogy with annealing in solids provides a framework for optimization of the properties of very large and complex systems.
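The annealing analogy can be made concrete with a minimal sketch (my own toy example, not the paper's formulation): downhill moves are always accepted, while uphill moves are accepted with a probability exp(-Δ/T) that shrinks as the temperature T is cooled.

```python
# A minimal simulated-annealing sketch (toy example): minimize a 1-D objective
# by accepting uphill moves with a temperature-dependent probability.
import math
import random

def anneal(f, x0, steps=5000, t0=1.0, cooling=0.999, seed=0):
    rng = random.Random(seed)
    x, best = x0, x0
    t = t0
    for _ in range(steps):
        cand = x + rng.uniform(-0.5, 0.5)          # propose a nearby neighbor
        delta = f(cand) - f(x)
        # Always accept downhill moves; accept uphill with prob exp(-delta/t).
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = cand
        if f(x) < f(best):
            best = x
        t *= cooling                                # geometric cooling schedule
    return best

# Toy convex objective with its global minimum at x = 2.
best = anneal(lambda x: (x - 2) ** 2, x0=-5.0)
print(best)  # close to 2
```

The cooling schedule is the design lever: fast cooling freezes the search early (risking a local optimum on multimodal objectives), while slow cooling keeps accepting uphill moves longer.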

### Branch-and-Bound Methods: A Survey

- Computer Science
- Oper. Res.
- 1966

The essential features of the branch-and-bound approach to constrained optimization are described, and several specific applications are reviewed, including integer linear programming (Land-Doig and Balas methods), nonlinear programming (minimization of nonconvex objective functions), and the quadratic assignment problem (Gilmore and Lawler methods).
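The two essential features named above, branching on decisions and pruning subtrees whose optimistic bound cannot beat the incumbent, can be sketched on a tiny 0/1 knapsack (my own toy example, not from the survey):

```python
# A toy branch-and-bound sketch: 0/1 knapsack with a fractional
# (LP-relaxation) upper bound used to prune the search tree.
def knapsack_bb(values, weights, capacity):
    # Sort items by value density so the fractional bound is easy to compute.
    items = sorted(zip(values, weights), key=lambda vw: vw[0] / vw[1], reverse=True)
    best = 0

    def bound(i, value, room):
        # Optimistic bound: greedily add remaining items, last one fractionally.
        for v, w in items[i:]:
            if w <= room:
                value += v
                room -= w
            else:
                return value + v * room / w
        return value

    def branch(i, value, room):
        nonlocal best
        if value > best:
            best = value                       # update incumbent
        if i == len(items) or bound(i, value, room) <= best:
            return                             # prune: bound cannot beat incumbent
        v, w = items[i]
        if w <= room:
            branch(i + 1, value + v, room - w) # take item i
        branch(i + 1, value, room)             # skip item i

    branch(0, 0, capacity)
    return best

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))  # -> 220
```

The quality of the bound is what makes branch-and-bound practical: the tighter the relaxation, the earlier subtrees are cut off.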

### Tabu Search

- Business
- Handbook of Heuristics
- 2018

From the Publisher:
This book explores the metaheuristic approach called tabu search, which is dramatically changing our ability to solve a host of problems that stretch over the realms of resource…

### Tabu Search - Part II

- Business
- INFORMS J. Comput.
- 1990

The elements of staged search and structured move sets are characterized, which bear on the issue of finiteness, and new dynamic strategies for managing tabu lists are introduced, allowing fuller exploitation of underlying evaluation functions.
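The tabu-list management discussed above can be illustrated with a minimal sketch (my own toy example, not the paper's formulation): a fixed-tenure list of recently flipped bit positions forbids immediate move reversals, with an aspiration criterion that overrides the list when a move beats the incumbent.

```python
# A minimal tabu-search sketch: maximize a bit-string objective with
# single-bit-flip moves and a fixed-length tabu list of recent flips.
from collections import deque

def tabu_search(f, n_bits, iters=200, tenure=3):
    x = [0] * n_bits
    best, best_val = x[:], f(x)
    tabu = deque(maxlen=tenure)            # recently flipped bit positions
    for _ in range(iters):
        candidates = []
        for i in range(n_bits):
            y = x[:]
            y[i] ^= 1                      # flip bit i
            val = f(y)
            # Admissible if non-tabu, or if it beats the incumbent (aspiration).
            if i not in tabu or val > best_val:
                candidates.append((val, i, y))
        val, i, x = max(candidates)        # best admissible move, even if worse
        tabu.append(i)                     # forbid reversing this flip for a while
        if val > best_val:
            best, best_val = x[:], val
    return best_val

# Trivial objective (count of ones) just to exercise the mechanics:
print(tabu_search(sum, 6))  # -> 6 (all-ones optimum found)
```

Note that the search always moves to the best admissible neighbor even when it worsens the current solution; the tabu list is what prevents it from immediately undoing that move and cycling.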

### The Existence of A Priori Distinctions Between Learning Algorithms

- Computer Science
- Neural Computation
- 1996

It is shown, loosely speaking, that for loss functions other than zero-one (e.g., quadratic loss) there are a priori distinctions between algorithms; it is also shown that any algorithm is equivalent on average to its randomized version, which in this sense still has no first-principles justification in terms of average error.

### Introduction to Random Fields

- Mathematics
- 1976

One means of generalizing denumerable stochastic processes {x n } with time parameter set ℕ = {0, 1, ... } is to consider random fields {x t }, where t takes on values in an arbitrary countable…

### The Lack of A Priori Distinctions Between Learning Algorithms

- Computer Science
- Neural Computation
- 1996

It is shown that one cannot say: if the empirical misclassification rate is low, the Vapnik-Chervonenkis dimension of your generalizer is small, and the training set is large, then with high probability your off-training-set (OTS) error is small.

### On Bias Plus Variance

- Mathematics
- Neural Computation
- 1997

This article presents several additive corrections to the conventional quadratic loss bias-plus-variance formula. One of these corrections is appropriate when both the target is not fixed (as in…