Forecasting Tournaments

@article{Tetlock2014ForecastingT,
  title={Forecasting Tournaments},
  author={Philip E. Tetlock and Barbara A. Mellers and Nick Rohrbaugh and Eva Chen},
  journal={Current Directions in Psychological Science},
  year={2014},
  volume={23},
  pages={290--295}
}
Forecasting tournaments are level-playing-field competitions that reveal which individuals, teams, or algorithms generate more accurate probability estimates on which topics. This article describes a massive geopolitical tournament that tested clashing views on the feasibility of improving judgmental accuracy and on the best methods of doing so. The tournament’s winner, the Good Judgment Project, outperformed the simple average of the crowd by (a) designing new forms of cognitive-debiasing… 
Developing expert political judgment: The impact of training and practice on judgmental accuracy in geopolitical forecasting tournaments
The heuristics-and-biases research program highlights reasons for expecting people to be poor intuitive forecasters. This article tests the power of a cognitive-debiasing training module (“CHAMPS…
Distilling the Wisdom of Crowds: Prediction Markets vs. Prediction Polls
Team prediction polls outperformed prediction markets when poll forecasts were aggregated with algorithms using temporal decay, performance weighting and recalibration, and the biggest advantage of prediction polls occurred at the start of long-duration questions.
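The aggregation recipe named in the summary above (temporal decay, performance weighting, recalibration) can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's actual algorithm; the function name, the exponential-decay form, and the power-law recalibration are all assumptions.

```python
def aggregate_polls(forecasts, half_life=3.0, alpha=2.0):
    """Aggregate probability forecasts from a prediction poll.

    forecasts: list of (prob, age_in_days, skill) tuples, where `skill`
    is a positive historical-accuracy score (higher = better forecaster).
    All names and parameter choices here are illustrative.
    """
    num = den = 0.0
    for prob, age, skill in forecasts:
        # Performance weighting scaled by exponential temporal decay:
        # a forecast loses half its weight every `half_life` days.
        w = skill * 0.5 ** (age / half_life)
        num += w * prob
        den += w
    p = num / den  # recency- and skill-weighted mean
    # Recalibration: with alpha > 1, push the aggregate toward 0 or 1
    # to counteract the under-confidence of averaged forecasts.
    return p ** alpha / (p ** alpha + (1.0 - p) ** alpha)
```

A neutral crowd stays neutral (an input of 0.5 maps to 0.5), while a crowd leaning toward "yes" is sharpened above its weighted mean.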
Validating the Contribution-Weighted Model: Robustness and Cost-Benefit Analyses
Results from a multiyear, geopolitical forecasting tournament are used to highlight the ability of the contribution weighted model to capture and exploit expertise and to document the model’s robustness using probability judgments from early, middle, and late phases of the forecasting period.
Forecasting the Accuracy of Forecasters from Properties of Forecasting Rationales
Methods from natural language processing (NLP) and computational text analysis are adapted to identify distinctive reasoning strategies in the rationales of top forecasters, including cognitive styles that gauge tolerance of clashing perspectives and efforts to blend them into coherent conclusions.
Bringing probability judgments into policy debates via forecasting tournaments
It is suggested that tournaments may hold even greater potential as tools for depolarizing political debates and resolving policy disputes, and that they are a useful tool for generating knowledge.
Effects of Choice Restriction on Accuracy and User Experience in an Internet-Based Geopolitical Forecasting Task
In two studies involving pools of novice forecasters recruited online, there is no evidence that limiting forecaster choice adversely affected forecasting accuracy or subjective experience, suggesting that in large-scale forecasting tournaments, it may be possible to implement choice-limiting triage strategies without sacrificing individual accuracy and motivation.
Pre-screening workers to overcome bias amplification in online labour markets
It is found that systematic biases in crowdsourced answers are not as prevalent as anticipated, but when they occur, biases are amplified with increasing group size, as predicted by the Condorcet Jury Theorem.
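The Condorcet Jury Theorem invoked above predicts exactly this amplification: when each worker independently answers correctly with probability p, a simple majority becomes near-certainly right as the group grows if p > 0.5, but near-certainly wrong if a shared bias pushes p below 0.5. A minimal sketch of the majority-accuracy calculation (assuming odd n so there are no ties):

```python
from math import comb

def majority_correct_prob(p, n):
    """Probability that a simple majority of n independent voters is
    correct when each voter is correct with probability p (Condorcet
    Jury Theorem). n is assumed odd to rule out ties.
    """
    # Sum the binomial probabilities of getting more than n/2 correct votes.
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))
```

With p = 0.6, majority accuracy rises with group size; with a shared bias giving p = 0.4, it falls below the individual rate, i.e. the bias is amplified.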
Quantifying machine influence over human forecasters
This work presents a model that can be used to estimate the trust that humans assign to a machine, and uses forecasts made in the absence of machine models as prior beliefs to quantify the weights placed on the models.
Mind the gap between demand and supply: A behavioral perspective on demand forecasting
Prior academic research has recognized human judgment as an indispensable decision aid in demand forecasting, although it is subject to a number of biases. Therefore, it is important to understand…
Rethinking the training of intelligence analysts
This work proposes a new approach to analytic training, adopting scientifically validated content and regularly testing training to avoid institutionalizing new dogmas, and incentivizing analysts to view training guidelines as means to the end of improved accuracy, not an end in itself.

References

Showing 1-10 of 56 references
Psychological Strategies for Winning a Geopolitical Forecasting Tournament
Support is found for three psychological drivers of accuracy: training, teaming, and tracking in a 2-year geopolitical forecasting tournament that produced the best forecasts 2 years in a row.
Forecast aggregation via recalibration
This paper develops and compares a number of models for calibrating and aggregating forecasts that exploit the well-known fact that individuals exhibit systematic biases during judgment and elicitation.
Identifying Expertise to Extract the Wisdom of Crowds
A new measure of contribution is proposed to assess the judges' performance relative to the group and positive contributors are used to build a weighting model for aggregating forecasts, showing that the model derives its power from identifying experts who consistently outperform the crowd.
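The contribution measure described above can be illustrated with a leave-one-out comparison of Brier scores: a judge contributes positively when the crowd forecast scores worse without them. This is a deliberately simplified stand-in for the paper's model; the unweighted mean and all names are hypothetical.

```python
def brier(p, outcome):
    """Brier score of probability p against a binary outcome (0 or 1)."""
    return (p - outcome) ** 2

def contributions(judges, outcome):
    """Leave-one-out contribution of each judge to the crowd's accuracy.

    judges: dict mapping judge id -> probability forecast for the event.
    outcome: 1 if the event occurred, else 0.
    Returns judge id -> Brier-score improvement from including the judge
    (positive means the judge made the crowd more accurate).
    """
    ids = list(judges)
    full = sum(judges.values()) / len(ids)  # crowd forecast with everyone
    scores = {}
    for j in ids:
        rest = [judges[k] for k in ids if k != j]
        loo = sum(rest) / len(rest)  # crowd forecast without judge j
        scores[j] = brier(loo, outcome) - brier(full, outcome)
    return scores
```

Positive contributors identified this way could then be given higher weight in the final aggregate, which is the spirit of the weighting model the summary describes.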
Luck versus Forecast Ability: Determinants of Trader Performance in Futures Markets
Statistical techniques are used to demonstrate that the fortunes of individual futures traders are determined by luck, not forecast ability. Even though a large number of traders appear to exhibit…
Principles of forecasting
A review of the evidence showed that role playing was effective in matching results for seven of eight experiments and was correct for 56 percent of 143 predictions, while unaided expert opinions were correct for 16 percent of 172 predictions.
Two Reasons to Make Aggregated Probability Forecasts More Extreme
It is shown that the same transformation function can approximately eliminate both distorting effects with different parameters for the mean and the median, and how, in principle, use of the median can help distinguish the two effects.
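The extremizing transformation referred to above is commonly written as p^a / (p^a + (1 - p)^a), with the exponent a tuned separately depending on whether the aggregate is a mean or a median. A minimal sketch, assuming this standard functional form:

```python
def extremize(p, a):
    """Push an aggregated probability toward 0 or 1.

    With a > 1 this counteracts the under-confidence that averaging
    individual probability forecasts induces; a = 1 is the identity.
    """
    return p ** a / (p ** a + (1 - p) ** a)
```

The map is symmetric about 0.5: a neutral aggregate is left alone, while forecasts on either side are pushed outward by the same amount.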
Probability aggregation in time-series: Dynamic hierarchical modeling of sparse expert beliefs
This paper presents a hierarchical model that takes into account the expert's level of self-reported expertise and produces aggregate probabilities that are sharp and well calibrated both in- and out-of-sample.
The psychology of intelligence analysis: drivers of prediction accuracy in world politics.
A profile of the best forecasters is developed; they were better at inductive reasoning, pattern detection, cognitive flexibility, and open-mindedness; they had greater understanding of geopolitics, training in probabilistic reasoning, and opportunities to succeed in cognitively enriched team environments.
Conditions for intuitive expertise: a failure to disagree.
Evaluating the likely quality of an intuitive judgment requires an assessment of the predictability of the environment in which the judgment is made and of the individual's opportunity to learn the regularities of that environment.
Probabilistic Coherence Weighting for Optimizing Expert Forecasts
An approach for eliciting extra probability judgments to adjust the judgments of each individual forecaster, and assign weights to the judgments to aggregate over the entire set of forecasters is described.
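One simplified way to illustrate the idea above: elicit both p(A) and p(not A) from each forecaster and down-weight those whose extra judgments violate the additivity constraint p(A) + p(not A) = 1. This sketch is an assumption-laden stand-in; the paper's actual scheme uses richer judgment sets and a different adjustment, and every name below is hypothetical.

```python
def coherence_weight(p_a, p_not_a):
    """Weight a forecaster by how coherent their elicited probabilities
    are: p(A) and p(not A) should sum to 1, and larger violations of
    that constraint earn smaller weights."""
    incoherence = abs(p_a + p_not_a - 1.0)
    return 1.0 / (1.0 + incoherence)

def coherence_aggregate(forecasts):
    """Coherence-weighted mean forecast for event A.

    forecasts: list of (p_a, p_not_a) pairs, one per forecaster.
    """
    weights = [coherence_weight(pa, pn) for pa, pn in forecasts]
    total = sum(weights)
    return sum(w * pa for w, (pa, _) in zip(weights, forecasts)) / total
```

A perfectly coherent forecaster gets full weight, so the aggregate is pulled toward coherent judgments and away from incoherent ones.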