The Wisdom of Many in One Mind

@article{Herzog2009TheWO,
  title={The Wisdom of Many in One Mind},
  author={Stefan M. Herzog and Ralph Hertwig},
  journal={Psychological Science},
  year={2009},
  volume={20},
  pages={231--237}
}
The “wisdom of crowds” in making judgments about the future or other unknown events is well established. The average quantitative estimate of a group of individuals is consistently more accurate than the typical estimate, and is sometimes even the best estimate. Although individuals' estimates may be riddled with errors, averaging them boosts accuracy because both systematic and random errors tend to cancel out across individuals. We propose exploiting the power of averaging to improve… 
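The abstract's core claim is that averaging many noisy estimates beats the typical individual because random errors cancel. This is not from the paper itself, but a minimal toy simulation (assuming unbiased, independent Gaussian errors) that illustrates the effect:

```python
import random
import statistics

random.seed(0)
TRUE_VALUE = 100.0
N_PEOPLE = 1000

# Each person's estimate is the true value plus independent random error.
estimates = [TRUE_VALUE + random.gauss(0, 15) for _ in range(N_PEOPLE)]

# Error of the typical (median-accuracy) individual vs. error of the crowd mean.
individual_errors = sorted(abs(e - TRUE_VALUE) for e in estimates)
typical_error = individual_errors[N_PEOPLE // 2]
crowd_error = abs(statistics.mean(estimates) - TRUE_VALUE)

print(f"typical individual error: {typical_error:.2f}")
print(f"crowd-average error:      {crowd_error:.2f}")
```

With independent errors, the crowd mean's error shrinks roughly with the square root of the group size, while the typical individual's error does not; systematic (shared) biases, which the abstract also mentions, would limit this benefit.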


How the “wisdom of the inner crowd” can boost accuracy of confidence judgments.
Simulation and analytical results show that, irrespective of the type of item, averaging consistently improves confidence judgments, but maximizing is risky: it outperformed averaging only when items were answered correctly 60% of the time or more, a result that has not been established in prior work.
Extending the wisdom of crowds: How to harness the wisdom of the inner crowd
The wisdom-of-crowds effect describes how aggregating judgments of multiple individuals can lead to a more accurate judgment than that of the typical—or even best—individual. We investigated when
The wisdom of crowds in one mind : Experimental evidence on repeatedly asking oneself instead of others
Under the right circumstances, groups can be remarkably intelligent, and statistical aggregates of individuals' decisions can outperform individuals' and experts' decisions. Examples of this wisdom of
Expertise and the Wisdom of Crowds: Whose Judgments to Trust and When
While following the crowd produces good results, where a smaller number of reviews are available, taking expertise into account improves their usefulness and discrimination between shows.
Welsh, Matthew Brian. Expertise and the wisdom of crowds: Whose judgments to trust and when (published version, in Building Bridges).
The Wisdom of Crowds describes the fact that aggregating a group’s estimate regarding unknown values is often a better strategy than selecting even an expert’s opinion. The efficacy of this strategy,
Think twice and then: combining or choosing in dialectical bootstrapping?
  • S. Herzog, R. Hertwig · Journal of Experimental Psychology: Learning, Memory, and Cognition · 2014
This research found that participants were more likely to combine when they were instructed to actively contradict themselves, and were more likely to combine as the size of the disagreement between the first and second estimates grew.
The Psychology of Second Guesses: Implications for the Wisdom of the Inner Crowd
It is found that asking people to explicitly indicate whether their first guess was too high or too low before making their second guess made people more likely to provide a second guess that was more extreme than their first guess.
The wisdom of the inner crowd in three large natural experiments.
Large, real-world guessing competition datasets are used to test whether accuracy can be improved by aggregating repeated estimates by the same individual, and it is found that estimates do improve, but substantially less than with between-person aggregation.
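The finding above is that averaging repeated estimates from one person helps, but less than averaging across people. A plausible mechanism (an assumption here, not a result from the paper) is that a person's repeated guesses share a persistent individual bias, so their errors are correlated and cancel less. A toy simulation sketching that idea:

```python
import random

random.seed(1)
TRUE_VALUE = 50.0
TRIALS = 2000

def rmse(errors):
    """Root-mean-square error of a list of signed errors."""
    return (sum(e * e for e in errors) / len(errors)) ** 0.5

single_err, within_err, between_err = [], [], []
for _ in range(TRIALS):
    # A person's two guesses share a persistent bias, so their errors correlate.
    bias = random.gauss(0, 8)
    g1 = TRUE_VALUE + bias + random.gauss(0, 6)
    g2 = TRUE_VALUE + bias + random.gauss(0, 6)
    # An independent second person carries their own, uncorrelated bias.
    other = TRUE_VALUE + random.gauss(0, 8) + random.gauss(0, 6)

    single_err.append(g1 - TRUE_VALUE)
    within_err.append((g1 + g2) / 2 - TRUE_VALUE)
    between_err.append((g1 + other) / 2 - TRUE_VALUE)

print(f"single guess RMSE:       {rmse(single_err):.2f}")
print(f"within-person avg RMSE:  {rmse(within_err):.2f}")
print(f"between-person avg RMSE: {rmse(between_err):.2f}")
```

Under these assumed error parameters, within-person averaging cancels only the transient noise, while between-person averaging also dilutes the individual bias, reproducing the ordering reported in the natural experiments.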
The Psychology of Second Guesses: Implications for the Wisdom of the Inner Crowd
Prior research suggests that averaging two guesses from the same person can improve quantitative judgments, a phenomenon known as the “wisdom of the inner crowd.” In this article, we find that this

References

Showing 1–10 of 63 references
Intuitions About Combining Opinions: Misappreciation of the Averaging Principle
People may face few opportunities to learn the benefits of averaging, and misappreciating averaging contributes to poor intuitive strategies for combining estimates.
Measuring the Crowd Within
Any benefit of averaging two responses from one person would support the hypothesis that responses within an individual are distributed probabilistically, much as responses are distributed across many people.
Strategies for revising judgment: how (and how well) people use others' opinions.
The authors developed the probability, accuracy, redundancy (PAR) model and found that averaging was the more effective strategy across a wide range of commonly encountered environments and that despite this finding, people tend to favor the choosing strategy.
Intuitive Theories of Information: Beliefs about the Value of Redundancy
The present experiments show that the preference for redundancy depends on one's intuitive theory of information, and lends insight into how intuitive theories might develop and also has potential ramifications for how statistical concepts such as correlation might best be learned and internalized.
The effects of averaging subjective probability estimates between and within judges.
Two studies test three predictions regarding averaging that follow from theorems based on a cognitive model of the judges and idealizations of the judgment situation, showing the extent to which they hold as the information conditions depart from the ideal and as the number of judges, J, increases.
Overconfidence in interval estimates.
The authors show that overconfidence in interval estimates can result from variability in setting interval widths, and that subjective intervals are systematically too narrow given the accuracy of one's information, sometimes only 40% as large as necessary to be well calibrated.