Corpus ID: 12890367

No Free Lunch Theorems for Search

@inproceedings{Wolpert1995NoFL,
  title={No Free Lunch Theorems for Search},
  author={David H. Wolpert and William G. Macready},
  year={1995}
}
We show that all algorithms that search for an extremum of a cost function perform exactly the same when averaged over all possible cost functions. In particular, if algorithm A outperforms algorithm B on some cost functions, then, loosely speaking, there must exist exactly as many other functions on which B outperforms A. Starting from this we analyze a number of the other a priori characteristics of the search problem, like its geometry and its information-theoretic aspects. This analysis allows… 
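This averaging claim can be checked by brute force on a toy space. The sketch below (ours, not the paper's; Python, with a four-point search space, binary costs, and two arbitrary deterministic non-revisiting searchers, one fixed and one that adapts to observed values) averages the best value found in m = 3 evaluations over all 2^4 cost functions:

  from itertools import product

  X = range(4)                        # tiny search space
  Y = (0, 1)                          # possible cost values

  def run(alg, f, m=3):
      """Evaluate m distinct points chosen by alg; return the best value seen."""
      visited, values = [], []
      for _ in range(m):
          x = alg(visited, values)
          visited.append(x)
          values.append(f[x])
      return max(values)

  # Two deterministic algorithms that never revisit a point.
  def left_to_right(visited, values):
      return next(x for x in X if x not in visited)

  def adaptive(visited, values):
      # hypothetical rule: after seeing a good value, probe the far end
      order = sorted(X, reverse=bool(values and values[-1] == 1))
      return next(x for x in order if x not in visited)

  # Average the best-found value over ALL 2^4 cost functions f: X -> Y.
  for alg in (left_to_right, adaptive):
      total = sum(run(alg, dict(zip(X, ys))) for ys in product(Y, repeat=len(X)))
      print(alg.__name__, total / 2 ** len(X))

Both lines print 0.875: averaged over every cost function, the adaptive rule buys nothing over the blind scan, which is the theorem in miniature.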
Searching for a Practical Evidence of the No Free Lunch Theorems
TLDR
Several test functions for which Random Search performs better than all other considered algorithms have been evolved, demonstrating the effectiveness of the proposed evolutionary approach.
What can we learn from No Free Lunch? A first attempt to characterize the concept of a searchable function
TLDR
This work operationally defines a technique for approaching the question of what makes a function searchable in practice and demonstrates the effectiveness of this technique by giving such a field and a corresponding algorithm; the algorithm performs better than random search for small values of this field.
No free lunch theorems for optimization
A framework is developed to explore the connection between effective optimization algorithms and the problems they are solving. A number of "no free lunch" (NFL) theorems are presented which establish that for any algorithm, any elevated performance over one class of problems is offset by performance over another class.
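In the notation of the journal version, the central result states that for any two algorithms a_1 and a_2 and any sample size m, the probability of observing a given sequence of cost values d_m^y, summed over all cost functions f, is identical:

  \sum_{f} P(d_m^y \mid f, m, a_1) = \sum_{f} P(d_m^y \mid f, m, a_2)

so any performance measure defined on the observed values has the same all-functions average for every algorithm.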
No-Free-Lunch theorems and the diversity of algorithms
In this paper, the no-free-lunch theorem is extended to subsets of functions. It is shown that if algorithm a performs better on a set of functions than algorithm b, there has to be another subset on which b performs better.
Free lunches on the discrete Lipschitz class
No Free Lunch Theorem: A Review
TLDR
The objective of this paper is to go through the main research efforts that contributed to this research field, reveal the main issues, and disclose those points that are helpful in understanding the hypotheses, the restrictions, or even the inability to apply No Free Lunch theorems.
Algorithms' local potential - breakfast included?
  • N. Weicker, Karsten Weicker
  • Computer Science
    Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406)
  • 1999
TLDR
Under certain assumptions concerning the locality of the algorithms it is shown that no local (non-adapting) search algorithm is superior to all other algorithms for all possible populations.
Representation, Search and Genetic Algorithms
TLDR
It is proved that for local neighborhood search on problems of bounded complexity, where complexity is measured in terms of the number of basins of attraction in the search space, a Gray-coded representation is better than binary in the sense that on average it induces fewer minima in a Hamming-distance-1 search neighborhood.
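The Hamming-neighborhood claim is easy to experiment with. Below is a small sketch (ours; the bit width and the test function are arbitrary choices, not the paper's benchmark) that counts Hamming-distance-1 local minima of the same function under binary and Gray encodings:

  N_BITS = 4

  def gray(i):
      # standard binary-reflected Gray code
      return i ^ (i >> 1)

  def local_minima(encode, f, n_bits=N_BITS):
      """Count codewords whose value is <= that of all Hamming-1 neighbours."""
      n = 1 << n_bits
      value = {encode(i): f(i) for i in range(n)}      # codeword -> cost
      count = 0
      for code, v in value.items():
          neighbours = (code ^ (1 << b) for b in range(n_bits))
          if all(v <= value[nb] for nb in neighbours):
              count += 1
      return count

  f = lambda i: (i - 5) ** 2 % 17        # arbitrary multimodal test function
  print("binary:", local_minima(lambda i: i, f))
  print("gray:  ", local_minima(gray, f))

Any single function proves nothing either way; the paper's result concerns the average over functions with a bounded number of basins of attraction.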
Fundamental Limitations on Search Algorithms: Evolutionary Computing in Perspective
TLDR
This paper extends these results and draws out some of their implications for the design of search algorithms and for the construction of useful representations, and focuses attention on tailoring algorithms and representations to particular problem classes by exploiting domain knowledge.
...

References

Showing 1-10 of 27 references
Dynamic Hill Climbing: Overcoming the limitations of optimization techniques
This paper describes a novel search algorithm, called dynamic hill climbing, that borrows ideas from genetic algorithms and hill climbing techniques. Unlike both genetic and hill climbing algorithms,
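For orientation, here is a bare-bones hill climber with random restarts (a generic sketch of the baseline idea, not the paper's dynamic hill climbing algorithm; the bounds, step scale, and budgets are arbitrary):

  import random

  def hill_climb(f, dim, restarts=20, steps=500, scale=0.1, seed=0):
      """Restarted stochastic hill climbing: accept strictly downhill moves only."""
      rng = random.Random(seed)
      best, fbest = None, float("inf")
      for _ in range(restarts):
          x = [rng.uniform(-5, 5) for _ in range(dim)]
          fx = f(x)
          for _ in range(steps):
              cand = [xi + rng.gauss(0, scale) for xi in x]
              fc = f(cand)
              if fc < fx:
                  x, fx = cand, fc
          if fx < fbest:
              best, fbest = x, fx
      return best, fbest

  sphere = lambda v: sum(xi * xi for xi in v)
  print(hill_climb(sphere, dim=2))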
Off-training set error and a priori distinctions between learning algorithms
TLDR
It is shown, loosely speaking, that for any two algorithms A and B, there are as many targets (or priors over targets) for which A has lower expected off-training-set (OTS) error than B as vice versa, for loss functions like zero-one loss.
A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection
TLDR
The results indicate that for real-world datasets similar to the authors', the best method to use for model selection is tenfold stratified cross-validation, even if computation power allows using more folds.
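As one concrete realization of that recommendation (the dataset, model, and library here are our choices, not the paper's), scikit-learn's stratified splitter makes the procedure a few lines:

  from sklearn.datasets import load_iris
  from sklearn.model_selection import StratifiedKFold, cross_val_score
  from sklearn.tree import DecisionTreeClassifier

  X, y = load_iris(return_X_y=True)
  # ten folds, each preserving the class proportions of the full dataset
  cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
  scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=cv)
  print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")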
Adaptation in natural and artificial systems
TLDR
Foundational work in the area of adaptation and modification, which aims to mimic biological optimization, and some (non-GA) branches of AI.
Elements of Information Theory
TLDR
The author examines the role of entropy, inequality, and randomness in the design and construction of codes in a rapidly changing environment.
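The entropy in question is the Shannon entropy: for a discrete source X with distribution p,

  H(X) = -\sum_{x} p(x) \log_2 p(x),

which lower-bounds the expected length of any uniquely decodable code for X.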
Adaptive Simulated Annealing (ASA)
TLDR
Adaptive Simulated Annealing is a C-language code developed to statistically find the best global fit of a nonlinear constrained non-convex cost function over a D-dimensional space.
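For intuition, a generic simulated-annealing loop follows (a plain sketch; Ingber's ASA additionally adapts its sampling and annealing schedules per parameter, which is omitted here, and all constants are arbitrary):

  import math, random

  def anneal(f, x0, steps=10_000, t0=1.0, scale=0.1, seed=0):
      rng = random.Random(seed)
      x, fx = list(x0), f(x0)
      best, fbest = x, fx
      for k in range(steps):
          t = t0 / (1 + k)                 # simple cooling schedule
          cand = [xi + rng.gauss(0, scale) for xi in x]
          fc = f(cand)
          # always accept downhill moves; uphill with Boltzmann probability
          if fc <= fx or rng.random() < math.exp(-(fc - fx) / t):
              x, fx = cand, fc
              if fx < fbest:
                  best, fbest = x, fx
      return best, fbest

  rosenbrock = lambda v: (1 - v[0]) ** 2 + 100 * (v[1] - v[0] ** 2) ** 2
  print(anneal(rosenbrock, [-1.0, 1.0]))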
The traveling salesman: computational solutions for TSP applications
TLDR
A Case Study: TSPs in Printed Circuit Board Production and Practical TSP Solving.
Operations Research
Introduction to Operations Research. By A. Kaufmann and R. Faure. Translated by Henry C. Sneyd. (Mathematics in Science and Engineering, Vol. 47.) Pp. xi + 300. (Academic Press: New York and London, 1968.)
Alpha, Evidence, and the Entropic Prior
TLDR
The correct entropic prior is computed by marginalization of alpha, and the approximations used to restore the famous “Susie” image may have questionable aspects.
Statistical Decision Theory and Bayesian Analysis
An overview of statistical decision theory, which emphasizes the use and application of the philosophical ideas and mathematical structure of decision theory. The text assumes a knowledge of basic
...