Identifying and Cultivating Superforecasters as a Method of Improving Probabilistic Predictions

@article{mellers2015identifying,
  title={Identifying and Cultivating Superforecasters as a Method of Improving Probabilistic Predictions},
  author={Barbara A. Mellers and Eric Stone and Terry Murray and Angela Minster and Nick Rohrbaugh and Michael Bishop and Eva Chen and Joshua Baker and Yuan Hou and Michael Horowitz and Lyle H. Ungar and Philip E. Tetlock},
  journal={Perspectives on Psychological Science},
  year={2015},
  pages={267--281}
}
Across a wide range of tasks, research has shown that people make poor probabilistic predictions of future events. Recently, the U.S. Intelligence Community sponsored a series of forecasting tournaments designed to explore the best strategies for generating accurate subjective probability estimates of geopolitical events. In this article, we describe the winning strategy: culling off top performers each year and assigning them into elite teams of superforecasters. Defying expectations of… 

Superforecasting: The Art and Science of Prediction
A New York Times Bestseller and an Economist Best Book of 2015. "The most important book on decision making since Daniel Kahneman's Thinking, Fast and Slow." (Jason Zweig, The Wall Street Journal) Everyone…
A Prediction Tournament Paradox
  • D. Aldous
  • The American Statistician, 2019
In a prediction tournament, contestants "forecast" by asserting a numerical probability for each of (say) 100 future real-world events. The scoring system is designed so that (regardless of…
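The scoring rule behind such tournaments is typically a proper one, most commonly the Brier score: the mean squared distance between stated probabilities and binary outcomes. A minimal sketch in Python; the function name and sample numbers are illustrative, not taken from the paper:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.

    Lower is better: 0 is perfect, and a constant 0.5 forecast scores 0.25.
    """
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# An uninformative forecaster (always 0.5) scores 0.25 regardless of outcomes;
# a forecaster who leans the right way scores lower (better).
print(brier_score([0.5, 0.5, 0.5, 0.5], [1, 0, 1, 1]))  # 0.25
print(brier_score([0.8, 0.2, 0.7, 0.9], [1, 0, 1, 1]))  # lower (better)
```

Because the rule is proper, a forecaster minimizes their expected score by reporting their true belief, which is what makes tournament rankings meaningful.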
Bias, Information, Noise: The BIN Model of Forecasting
A Bayesian BIN model (Bias, Information, Noise) is proposed for disentangling the underlying processes that enable forecasters and forecasting methods to improve – either by tamping down bias and noise in judgment or by ramping up the efficient extraction of valid information from the environment.
Developing expert political judgment: The impact of training and practice on judgmental accuracy in geopolitical forecasting tournaments
The heuristics-and-biases research program highlights reasons for expecting people to be poor intuitive forecasters. This article tests the power of a cognitive-debiasing training module ("CHAMPS…")
From discipline-centered rivalries to solution-centered science: Producing better probability estimates for policy makers.
From 2011 to 2015, the U.S. intelligence community sponsored a series of forecasting tournaments that challenged university-based researchers to invent measurably better methods of forecasting…
The Value of Precision in Probability Assessment: Evidence from a Large-Scale Geopolitical Forecasting Tournament
Scholars, practitioners, and pundits often leave their assessments of uncertainty vague when debating foreign policy, arguing that clearer probability estimates would provide arbitrary detail instead…
Quantifying machine influence over human forecasters
This work presents a model that can be used to estimate the trust that humans assign to a machine, and uses forecasts made in the absence of machine models as prior beliefs to quantify the weights placed on the models.
Crowdsourcing Accurately and Robustly Predicts Supreme Court Decisions
A dataset of over 600,000 predictions from over 7,000 participants in a multi-year tournament to predict the decisions of the Supreme Court of the United States is explored, and a comprehensive crowd construction framework is developed that allows for the formal description and application of crowdsourcing to real-world data.


Psychological Strategies for Winning a Geopolitical Forecasting Tournament
Support is found for three psychological drivers of accuracy: training, teaming, and tracking in a 2-year geopolitical forecasting tournament that produced the best forecasts 2 years in a row.
Distilling the Wisdom of Crowds: Prediction Markets versus Prediction Polls
We report the results of the first large-scale, long-term, experimental test between two crowdsourcing methods, prediction markets and prediction polls. More than 2,400 participants made forecasts…
The psychology of intelligence analysis: drivers of prediction accuracy in world politics.
A profile of the best forecasters is developed; they were better at inductive reasoning, pattern detection, cognitive flexibility, and open-mindedness; they had greater understanding of geopolitics, training in probabilistic reasoning, and opportunities to succeed in cognitively enriched team environments.
Probability aggregation in time-series: Dynamic hierarchical modeling of sparse expert beliefs
This paper presents a hierarchical model that takes into account the expert's level of self-reported expertise and produces aggregate probabilities that are sharp and well calibrated both in- and out-of-sample.
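The core mechanics behind such aggregators (pooling individual forecasts in log-odds space with expertise-based weights, then extremizing the pooled value away from 0.5 to sharpen it) can be sketched as follows. The weights and extremizing factor here are illustrative assumptions, not the paper's fitted hierarchical model:

```python
import math

def logit(p):
    """Map a probability in (0, 1) to log-odds."""
    return math.log(p / (1 - p))

def inv_logit(x):
    """Map log-odds back to a probability."""
    return 1 / (1 + math.exp(-x))

def aggregate(probs, weights, extremize=1.0):
    """Weighted mean of forecasts in log-odds space, then pushed away
    from 0.5 by multiplying the pooled log-odds by `extremize`."""
    pooled = sum(w * logit(p) for p, w in zip(probs, weights)) / sum(weights)
    return inv_logit(extremize * pooled)

# Three forecasters; the most (self-reportedly) expert one gets the largest weight.
print(aggregate([0.6, 0.7, 0.9], [1, 2, 3], extremize=1.5))
```

Averaging in log-odds rather than probability space keeps the pooled estimate from being dragged toward 0.5, and an extremizing factor above 1 compensates for the moderation that simple averaging of noisy forecasts induces.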
The Great Rationality Debate
For better or for worse, and opinions are divided on this score, the research program of Daniel Kahneman and the late Amos Tversky now represents psychology's leading intellectual export to the wider…
Pseudodiagnosticity in judgment under uncertainty
Calibration of probabilities: the state of the art to 1980
From the subjectivist point of view (de Finetti, 1937/1964), a probability is a degree of belief in a proposition. It expresses a purely internal state; there is no "right," "correct," or "objective"…
Thinking fast and slow.
  • N. McGlynn
  • Australian Veterinary Journal, 2014
Prospect Theory led cognitive psychology in a new direction that began to uncover other human biases in thinking, biases that are probably not learned but are part of the brain's wiring.
Evidential impact of base rates
In many contexts people are required to assess the probability of some target event (e.g., the diagnosis of a patient or the sales of a textbook) on the basis of (a) the base-rate frequency of the…