Conservation of Information in Search: Measuring the Cost of Success

@article{Dembski2009ConservationOI,
  title={Conservation of Information in Search: Measuring the Cost of Success},
  author={William A. Dembski and Robert Jackson Marks},
  journal={IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans},
  year={2009},
  volume={39},
  pages={1051--1061}
}
  • Published 1 September 2009
Conservation of information theorems indicate that any search algorithm performs, on average, as well as random search without replacement unless it takes advantage of problem-specific information about the search target or the search-space structure. Combinatorics shows that even a moderately sized search requires problem-specific information to be successful. Computers, despite their speed in performing queries, are completely inadequate for resolving even moderately sized search problems… 
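The cost accounting hinted at here is the paper's "active information", I+ = log2(q/p), where p is the success probability of blind search under a given query budget and q that of an assisted search. A minimal Python sketch of the bookkeeping, using a toy bit-string target and a hypothetical Hamming-oracle repair search of my own devising (not the paper's code):

```python
import math
import random

random.seed(0)

# Search space: bit strings of length n; target is the all-ones string.
n = 10
space_size = 2 ** n
queries = 10

# p: probability that blind (uniform, with replacement) search hits the
# target within the query budget.
p = 1.0 - (1.0 - 1.0 / space_size) ** queries

def assisted_search():
    # An assisted search exploiting problem-specific structure: a Hamming
    # oracle reveals a wrong bit, so one bit is repaired per query.
    # (Hypothetical illustration, not the paper's own algorithm.)
    candidate = [random.getrandbits(1) for _ in range(n)]
    for _ in range(queries):
        if all(candidate):
            return True
        candidate[candidate.index(0)] = 1  # oracle-guided repair
    return all(candidate)

trials = 5000
q = sum(assisted_search() for _ in range(trials)) / trials

def active_information(q, p):
    # I+ = log2(q / p): bits of problem-specific information the assisted
    # search exploits beyond the blind baseline.
    return math.log2(q / p)

print(f"p (blind) = {p:.5f}, q (assisted) = {q:.3f}")
print(f"active information I+ = {active_information(q, p):.2f} bits")
```

With this budget the oracle-guided search always succeeds (q = 1), so all of its advantage over blind search is charged to the problem-specific information supplied by the oracle.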

A General Theory of Information Cost Incurred by Successful Search
TLDR
This paper provides a general framework for understanding targeted search by defining the search matrix, which makes explicit the sources of information that can affect search progress and shows that information, like money, obeys strict accounting principles.
Biological Information New Perspectives (583 Pages)
The Search for a Search: Measuring the Information Cost of Higher Level Search
TLDR
The Horizontal No Free Lunch Theorem is proved, which shows that average relative performance of searches never exceeds unassisted or blind searches, and the difficulty of searching for a successful search increases exponentially with respect to the minimum allowable active information being sought.
Bernoulli's principle of insufficient reason and conservation of information in computer search
  • W. Dembski, R. Marks
  • Computer Science
    2009 IEEE International Conference on Systems, Man and Cybernetics
  • 2009
TLDR
This discussion leads to resolution of the seeming conflict between COI and the observation that some search algorithms perform well on a large class of problems.
Conservation of Information in Coevolutionary Searches
A number of papers show that the No Free Lunch theorem does not apply to coevolutionary search. This has been interpreted as meaning that, unlike classical full query searches, coevolutionary…
Bounding the number of favorable functions in stochastic search
TLDR
This work defines favorable functions as those that allow an algorithm to locate a search target with higher probability than uniform random sampling with replacement, and bound the proportion of favorable functions for stochastic search methods, including genetic algorithms.
The famine of forte: Few search problems greatly favor your algorithm
  • George D. Montañez
  • Computer Science
    2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC)
  • 2017
Casting machine learning as a type of search, we demonstrate that the proportion of problems that are favorable for a fixed algorithm is strictly bounded, such that no single algorithm can perform…
Efficient per query information extraction from a Hamming oracle
TLDR
It is shown that evolutionary algorithms, although better than blind search, are a relatively inefficient method of information extraction and that a search for the search for an optimal tree search, as suggested by the previous work, becomes computationally intensive.
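The per-query ceiling behind this entry follows from a simple counting argument, sketched below (the generic information-theoretic bound, not the paper's measurements; `min_queries` is my own helper):

```python
import math

# A Hamming oracle on n-bit strings answers with one of n + 1 values
# (the number of mismatched bits), so a single query extracts at most
# log2(n + 1) bits; distinguishing one of 2^n possible targets therefore
# needs at least n / log2(n + 1) queries.

def min_queries(n):
    """Information-theoretic lower bound on Hamming-oracle queries."""
    return math.ceil(n / math.log2(n + 1))

for n in (8, 64, 1024):
    print(f"n = {n:5d}: at least {min_queries(n)} queries")
```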
Evolutionary synthesis of nand logic: Dissecting a digital organism
TLDR
This work identifies sources of active information in Avida, a software program designed to search for logic functions using nand gates, and shows how removing stair steps deteriorates Avida's performance while removing deleterious instructions improves it.
...

References

Showing 1-10 of 61 references
What can we learn from No Free Lunch? A first attempt to characterize the concept of a searchable function
TLDR
This work operationally defines a technique for approaching the question of what makes a function searchable in practice and demonstrates the effectiveness of this technique by giving such a field and a corresponding algorithm; the algorithm performs better than random search for small values of this field.
Some information theoretic results on evolutionary optimization
  • T. M. English
  • Computer Science
    Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406)
  • 1999
TLDR
The present work reduces the difference between theory and practice by allowing points to be revisited, reasoning about the set of visited points instead of the sequence, and considering the impact of bounded memory and revisited points upon optimizer performance.
On the Futility of Blind Search: An Algorithmic View of No Free Lunch
TLDR
It is suggested that the evolution of complex systems exhibiting high degrees of orderliness is not equivalent in difficulty to optimizing hard problems, and that the optimism in genetic algorithms as universal optimizers is not justified by natural evolution.
Evolution of biological information.
TLDR
This method is used to observe information gain in the binding sites for an artificial 'protein' in a computer simulation of evolution, demonstrating that information gain can occur by punctuated equilibrium.
Coevolutionary free lunches
TLDR
This paper presents a general framework covering most optimization scenarios and shows that in self-play there are free lunches: in coevolution some algorithms have better performance than other algorithms, averaged across all possible problems.
NFL theorem is unusable on structured classes of problems
  • Benjamin Weinberg, E. Talbi
  • Computer Science
    Proceedings of the 2004 Congress on Evolutionary Computation (IEEE Cat. No.04TH8753)
  • 2004
TLDR
It is proved that k-coloring problems respect such a notion of structure, for any k, which leads to the observation that the notion of structure of optimization problems is missing in NFL use.
No free lunch theorems for optimization
A framework is developed to explore the connection between effective optimization algorithms and the problems they are solving. A number of "no free lunch" (NFL) theorems are presented which…
An Impatient Evolutionary Algorithm With Probabilistic Tabu Search for Unified Solution of Some NP-Hard Problems in Graph and Set Theory via Clique Finding
  • P. Guturu, R. Dantu
  • Computer Science
    IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)
  • 2008
TLDR
Results of experimentation with the 37 DIMACS benchmark graphs and comparative analyses with six state-of-the-art algorithms indicate that the IEA-PTS outperforms the EAs with respect to a Pareto-lexicographic ranking criterion and offers competitive performance on some graph instances when individually compared to the other heuristic algorithms.
Adaptive learning of hypergame situations using a genetic algorithm
TLDR
Examining the simulation results, it is pointed out that how preference- and strategy-oriented information is employed is critical to obtaining good performance in clarifying nature's set of strategies and the outcomes nature most prefers.
Importance Sampling
Also called biased sampling, this is one of the variance-reducing techniques in Monte Carlo methods. A key issue in order to achieve small errors on the obtained result (for a given number of…
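As a concrete instance of the variance reduction this entry describes, consider estimating the Gaussian tail probability P(X > 4) — a standard textbook illustration, not taken from the reference itself:

```python
import math
import random

random.seed(1)

# Estimate p = P(X > 4) for X ~ N(0, 1).  Naive Monte Carlo almost never
# lands in the tail; importance sampling instead draws from a proposal
# N(4, 1) centred in the tail and reweights each sample by the
# likelihood ratio between target and proposal densities.

def normal_pdf(x, mu=0.0):
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2.0 * math.pi)

samples = 100_000
mu_proposal = 4.0

total = 0.0
for _ in range(samples):
    x = random.gauss(mu_proposal, 1.0)
    if x > 4.0:                                              # rare-event indicator
        total += normal_pdf(x) / normal_pdf(x, mu_proposal)  # importance weight
estimate = total / samples

exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))  # closed-form tail probability
print(f"IS estimate = {estimate:.3e}, exact = {exact:.3e}")
```

Because every weight is bounded by exp(-8), the reweighted estimator's variance is tiny compared with naive sampling, which would need on the order of 1/p ≈ 30,000 draws just to see one tail event.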
...