No Free Lunch Theorem: A Review

  • Stavros P. Adam, Stamatios-Aggelos N. Alexandropoulos, Panos M. Pardalos, Michael N. Vrahatis
  • Approximation and Optimization
The “No Free Lunch” theorem states that, averaged over all optimization problems and without re-sampling, all optimization algorithms perform equally well. Optimization, search, and supervised learning are the areas that have benefited most from this important theoretical concept. The formulation of the initial No Free Lunch theorem soon gave rise to a number of research works which resulted in a suite of theorems that define an entire research field with significant results in other scientific…
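As a sanity check of this averaged-over-all-problems claim, one can enumerate every function on a tiny search space and verify that two different non-resampling search strategies achieve identical average performance. The sketch below is illustrative only: the strategies, the domain, and the performance measure (best value seen after a fixed number of evaluations) are chosen for brevity, not taken from the review.

```python
from itertools import product

X = range(3)          # tiny search space
Y = (0, 1)            # possible fitness values

def run(algorithm, f, m):
    """Evaluate f at m distinct points chosen by `algorithm`; return the best value seen."""
    seen = {}
    for _ in range(m):
        x = algorithm(seen)        # next point given the history (no resampling)
        seen[x] = f[x]
    return max(seen.values())

# Two deterministic, non-resampling strategies:
def left_to_right(seen):
    return min(x for x in X if x not in seen)

def adaptive(seen):
    # a toy "smart" rule: after seeing a 0, jump to the farthest unvisited point
    unvisited = [x for x in X if x not in seen]
    if seen and min(seen.values()) == 0:
        return max(unvisited)
    return unvisited[0]

functions = list(product(Y, repeat=len(X)))   # all 8 maps from X to Y

for alg in (left_to_right, adaptive):
    avg = sum(run(alg, f, m=2) for f in functions) / len(functions)
    print(alg.__name__, avg)                  # both average 0.75
```

Per function the two strategies differ, but summed over all eight functions their performance is identical, exactly as the theorem predicts for any pair of non-resampling algorithms.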

The Implications of the No-Free-Lunch Theorems for Meta-induction

The NFL theorems are reviewed, with emphasis that they concern not only the case of a uniform prior: they prove that there are “as many priors” for which any induction algorithm A out-generalizes some induction algorithm B as vice versa.

Learning How to Optimize Black-Box Functions With Extreme Limits on the Number of Function Evaluations

This work proposes an original method that uses established approaches to generate a set of candidate points for each batch and then down-selects from these candidates to the number of trials that can be run in parallel, achieving an average reduction of 50% in normalized cost.
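The generate-then-down-select pattern described in that summary can be sketched as follows. Note that the generator strategies, the scoring rule, and all names here are illustrative assumptions, not the paper's actual method:

```python
import random

def propose_batch(candidate_generators, batch_size, score):
    """Pool candidate points from several strategies, then keep the
    batch_size highest-scoring ones to run in parallel."""
    pool = []
    for gen in candidate_generators:
        pool.extend(gen())
    pool = list(dict.fromkeys(pool))          # drop duplicates, keep order
    return sorted(pool, key=score, reverse=True)[:batch_size]

# Illustrative generators: global random sampling plus local perturbation
# of a known good point.
rng = random.Random(0)
random_points = lambda: [rng.uniform(-5, 5) for _ in range(8)]
local_points = lambda: [1.0 + rng.gauss(0, 0.1) for _ in range(4)]

# Toy acquisition score: prefer points near x = 1 (standing in for a
# surrogate model's acquisition function).
batch = propose_batch([random_points, local_points], batch_size=3,
                      score=lambda x: -abs(x - 1.0))
print(batch)
```

Down-selecting a large pooled candidate set to a small parallel batch is what keeps the number of true function evaluations extremely low, which is the regime the paper targets.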

The Evidence of the “No Free Lunch” Theorems and the Theory of Complexity in Business Artificial Intelligence

The evidence of the “No-Free-Lunch” (NFL) theorems is proposed as a lens for understanding the applicability of machine learning in business organizations.

Reformulation of the No-Free-Lunch Theorem for Entangled Data Sets

It is shown that entangled datasets lead to an apparent violation of the (classical) NFL theorem, and this Letter establishes that entanglement is a commodity in quantum machine learning.

Benchmarking in Optimization: Best Practice and Open Issues

The article discusses eight essential topics in benchmarking: clearly stated goals, well-specified problems, suitable algorithms, adequate performance measures, thoughtful analysis, effective and efficient designs, comprehensible presentations, and guaranteed reproducibility.

Are Humans Bayesian in the Optimization of Black-Box Functions?

This paper focuses on Bayesian Optimization and analyses experimentally how it compares to humans searching for the maximum of an unknown 2D function, confirming that Gaussian Processes provide a general model to explain different patterns of learning-enabled search and optimization in humans.

Global Optimisation through Hyper-Heuristics: Unfolding Population-Based Metaheuristics

This work proposes a heuristic-based solver model for continuous optimisation problems by extending existing concepts from the literature, utilising a hyper-heuristic based on Simulated Annealing as a high-level strategy.

Automated Design of Unfolded Metaheuristics and the Effect of Population Size

This work proposes a methodology for designing heuristic-based procedures to solve continuous optimisation problems and studies how population size affects their performance, using the well-known Simulated Annealing algorithm as a hyper-heuristic.



No free lunch theorems for optimization

A framework is developed to explore the connection between effective optimization algorithms and the problems they are solving. A number of "no free lunch" (NFL) theorems are presented which establish that for any algorithm, any elevated performance over one class of problems is offset by performance over another class.

The No Free Lunch and problem description length

A duality result which relates functions being optimized to algorithms optimizing them is obtained and is used to sharpen the No Free Lunch theorem.

Continuous Lunches Are Free Plus the Design of Optimal Optimization Algorithms

It is proved that the natural extension of NFL theorems, under the current formalization of probability, does not hold, but that a weaker form of NFL does hold: there exist non-trivial distributions of fitness leading to equal performance for all search heuristics.

No Free Lunch Theorems for Search

It is shown that all algorithms searching for an extremum of a cost function perform exactly the same when averaged over all possible cost functions, a result which provides mathematical benchmarks for assessing a particular search algorithm's performance.

Optimization, block designs and No Free Lunch theorems

A no-free-lunch framework for coevolution

A novel framework for analyzing No-Free-Lunch-like results for classes of coevolutionary algorithms, based on the solution concept they implement, together with a new instance of free lunches in coevolution that demonstrates the applicability of the framework.

Remarks on a recent paper on the "no free lunch" theorems

The present authors explore the issues raised in that paper, including the presentation of a simpler version of the NFL proof in accord with a suggestion made explicitly by Köppen (2000) and implicitly by Wolpert and Macready (1997).

Beyond No Free Lunch: Realistic algorithms for arbitrary problem classes

A new approach to reasoning about search algorithm performance is proposed, treating search algorithms as stochastic processes and thereby admitting revisiting; this approach requires only the simple assumption that search algorithms are applied for optimisation (i.e. maximisation or minimisation), rather than considering arbitrary performance measures.

Coevolutionary free lunches

This paper presents a general framework covering most optimization scenarios and shows that in self-play there are free lunches: in coevolution some algorithms have better performance than other algorithms, averaged across all possible problems.

The no free lunch theorem and the human-machine interface

  • Y. Ho
  • Computer Science
  • 1999
The twin purposes of the article are to explore the implications of NFL and to address the proper allocation of natural and computational intelligence in optimization problem solving.