A solution to the single-question crowd wisdom problem

@article{Prelec2017AST,
  title={A solution to the single-question crowd wisdom problem},
  author={Drazen Prelec and H. Sebastian Seung and John McCoy},
  journal={Nature},
  year={2017},
  volume={541},
  pages={532--535}
}
Once considered provocative, the notion that the wisdom of the crowd is superior to any individual has become itself a piece of crowd wisdom, leading to speculation that online voting may soon put credentialed experts out of business. Recent applications include political and economic forecasting, evaluating nuclear safety, public policy, the quality of chemical probes, and possible responses to a restless volcano. Algorithms for extracting wisdom from the crowd are typically based on a… 
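The core idea of the paper is the "surprisingly popular" (SP) principle: ask each respondent both for their own answer and for a prediction of how popular each answer will be, then select the answer whose actual vote share most exceeds its mean predicted share. The sketch below is an illustrative reconstruction of that principle, not code from the paper; the data and the `surprisingly_popular` helper are invented for the example.

```python
# Illustrative sketch of the "surprisingly popular" (SP) principle from
# Prelec, Seung & McCoy (2017): pick the answer whose actual popularity
# most exceeds its predicted popularity. Names and data are made up.
from collections import Counter

def surprisingly_popular(votes, predictions):
    """votes: list of answers; predictions: list of dicts answer -> predicted share."""
    n = len(votes)
    actual = Counter(votes)
    answers = set(actual) | {a for p in predictions for a in p}
    surprise = {}
    for a in answers:
        actual_share = actual.get(a, 0) / n
        predicted_share = sum(p.get(a, 0.0) for p in predictions) / len(predictions)
        surprise[a] = actual_share - predicted_share
    return max(surprise, key=surprise.get)

# Stylized "Is Philadelphia the capital of Pennsylvania?" case: most vote yes,
# but even the yes-voters predict yes will dominate, so the minority answer
# "no" (the correct one: Harrisburg) is surprisingly popular.
votes = ["yes"] * 65 + ["no"] * 35
predictions = [{"yes": 0.8, "no": 0.2}] * 100
print(surprisingly_popular(votes, predictions))  # "no": 0.35 - 0.20 beats 0.65 - 0.80
```

Note that this is why SP can recover answers held only by a knowledgeable minority: the minority's knowledge shows up in everyone's predictions, not just in the vote counts.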
Surprisingly Popular Voting Recovers Rankings, Surprisingly!
TLDR
It is experimentally demonstrated that even a little prediction information helps surprisingly popular voting outperform classical approaches and explore practical techniques for extending the surprisingly popular algorithm to ranked voting by partial votes and predictions and designing robust aggregation rules.
Aggregated knowledge from a small number of debates outperforms the wisdom of large crowds
The aggregation of many independent estimates can outperform the most accurate individual judgement [1–3]. This centenarian finding [1,2], popularly known as the 'wisdom of crowds' [3], has been applied to
Deliberation increases the wisdom of crowds
TLDR
It is shown that deliberation and discussion improve collective wisdom, and that averaging information from independent debates is a highly effective strategy for harnessing collective knowledge.
Machine Truth Serum
TLDR
This paper presents two machine-learning-aided methods that aim to reveal the truth when it is the minority, rather than the majority, that holds the true answer, and shows that better classification performance can be obtained than by always trusting the majority vote.
Hyper Questions: Unsupervised Targeting of a Few Experts in Crowdsourcing
TLDR
This paper focuses on an important class of answer-aggregation problems in which majority voting fails, and proposes the concept of hyper questions to devise effective aggregation methods: experts are more likely than non-experts to correctly answer all of the single questions bundled into a hyper question.
Wisdom of Crowd: Comparison of the CWM, Simple Average and Surprisingly Popular Answer Method
TLDR
It is concluded that the SPA is the most appropriate for general use, as it imposes fewer requirements on the type and number of questions than the CWM and its good performance is more robust.
Rescuing Collective Wisdom when the Average Group Opinion Is Wrong
TLDR
It is indicated that in the ideal case, there should be a matching between the aggregation procedure and the nature of the knowledge distribution, correlations and associated error costs, which leads to a discussion of open frontiers in the domain of knowledge aggregation and collective intelligence in general.
Extracting the Wisdom from the Crowd: A Comparison of Approaches to Aggregating Collective Intelligence
TLDR
It is found that the average-confidence approach (i) provides the highest percentage of correctly identified answers across different categories of general-knowledge questions and (ii) is better suited to identifying high-quality ideas.
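One common reading of an "average-confidence" rule is: each judge reports an answer plus a confidence in [0, 1], and the group selects the answer with the highest mean reported confidence. The sketch below illustrates that reading; the function name and data are invented for the example and are not taken from the cited study.

```python
# Illustrative average-confidence aggregation: rank each answer by the mean
# confidence of the judges who chose it. Details are assumptions, not the
# cited study's exact procedure.
from collections import defaultdict

def average_confidence_answer(responses):
    """responses: list of (answer, confidence) pairs with confidence in [0, 1]."""
    by_answer = defaultdict(list)
    for answer, conf in responses:
        by_answer[answer].append(conf)
    # Pick the answer whose supporters are, on average, the most confident.
    return max(by_answer, key=lambda a: sum(by_answer[a]) / len(by_answer[a]))

responses = [("A", 0.9), ("B", 0.6), ("B", 0.5), ("A", 0.8)]
print(average_confidence_answer(responses))  # "A": mean 0.85 vs "B": mean 0.55
```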
Studying the “Wisdom of Crowds” at Scale
TLDR
It is found that crowd performance is generally more consistent than that of individuals; as a result, the crowd does considerably better than individuals when performance is computed on a full set of questions within a domain.

References

(showing 1–10 of 29 references)
Identifying Expertise to Extract the Wisdom of Crowds
TLDR
A new measure of contribution is proposed to assess judges' performance relative to the group; positive contributors are then used to build a weighting model for aggregating forecasts, and it is shown that the model derives its power from identifying experts who consistently outperform the crowd.
How social influence can undermine the wisdom of crowd effect
TLDR
This work demonstrates by experimental evidence that even mild social influence can undermine the wisdom of crowd effect in simple estimation tasks.
Intuitive Biases in Choice versus Estimation: Implications for the Wisdom of Crowds
Although researchers have documented many instances of crowd wisdom, it is important to know whether some kinds of judgments may lead the crowd astray, whether crowds' judgments improve with feedback
The Wisdom of the Crowd in Combinatorial Problems
TLDR
Case studies suggest that the wisdom of the crowd phenomenon might be broadly applicable to problem-solving and decision-making situations that go beyond the estimation of single numbers.
Infotopia: How Many Minds Produce Knowledge
This book explores the human potential to pool widely dispersed information, and to use that knowledge to improve both our institutions and our lives. Various methods for aggregating information are
Tapping into the Wisdom of the Crowd—with Confidence
TLDR
The subjective confidence of individuals in groups can be a valid predictor of accuracy in decision-making tasks, and the value of individual confidence in group decision-making is explored.
A decision-theoretic generalization of on-line learning and an application to boosting
TLDR
The model studied can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting, and the multiplicative weight-update rule of Littlestone and Warmuth can be adapted to this model, yielding bounds that are slightly weaker in some cases but applicable to a considerably more general class of learning problems.
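The multiplicative weight-update rule mentioned above has a very compact core: after each round, every expert's weight is multiplied by a factor that shrinks with that expert's loss, so frequently wrong experts lose influence. The sketch below shows one round of that update (the Hedge-style form); the parameter values are illustrative.

```python
# One round of a multiplicative weight-update (Hedge-style) rule:
# multiply each expert's weight by beta**loss, then renormalize.
# beta in (0, 1) controls how aggressively errors are punished.
def hedge_update(weights, losses, beta=0.5):
    """weights: current expert weights; losses: per-expert losses in [0, 1]."""
    new = [w * (beta ** l) for w, l in zip(weights, losses)]
    total = sum(new)
    return [w / total for w in new]  # renormalize to a probability distribution

w = [0.5, 0.5]
w = hedge_update(w, losses=[1.0, 0.0])  # expert 0 erred; expert 1 did not
print(w)  # expert 1 now carries 2/3 of the weight: [1/3, 2/3]
```

Iterating this update is what yields the regret bounds discussed in the paper: total weight concentrates exponentially fast on the experts with the smallest cumulative loss.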
Inferring Expertise in Knowledge and Prediction Ranking Tasks
TLDR
It is shown that the model-based measure of expertise outperforms self-report measures, taken both before and after completing the ordering of items, in terms of correlation with the actual accuracy of the answers.
Use (and abuse) of expert elicitation in support of decision making for public policy
M. G. Morgan, Proceedings of the National Academy of Sciences, 2014
TLDR
Expert elicitation should build on and use the best available research and analysis and be undertaken only when the state of knowledge will remain insufficient to support timely informed assessment and decision making.