Corpus ID: 13487222

Partial Information Framework: Aggregating Estimates from Diverse Information Sources

@article{Satop2015PartialIF,
  title={Partial Information Framework: Aggregating Estimates from Diverse Information Sources},
  author={Ville A. Satop{\"a}{\"a} and Shane T. Jensen and Robin Pemantle and Lyle H. Ungar},
  journal={arXiv: Methodology},
  year={2015}
}
Prediction polling is an increasingly popular form of crowdsourcing in which multiple participants estimate the probability or magnitude of some future event. These estimates are then aggregated into a single forecast. Historically, randomness in scientific estimation has generally been assumed to arise from unmeasured factors that are viewed as measurement noise. However, when combining subjective estimates, heterogeneity stemming from differences in the participants' information is often…
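
As a point of reference for the aggregation step described in the abstract, the baseline implied by the measurement-error view is an equal-weight average of the reported estimates; the partial information view instead treats disagreement among forecasters as informative rather than as noise to be averaged out. A minimal Python sketch of that baseline (the toy numbers are illustrative, not from the paper):

  import numpy as np

  # Toy example: five forecasters estimate the same real-valued quantity.
  forecasts = np.array([0.62, 0.55, 0.70, 0.58, 0.66])

  # Under the measurement-error view, disagreement is pure noise, so the
  # simple (equal-weight) average is the natural aggregate.
  simple_average = forecasts.mean()
  print(simple_average)  # approximately 0.622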

Citations

Partial Information Framework: Basic Theory and Applications

This dissertation shows that measurement error is not appropriate for modeling forecast heterogeneity and then introduces information diversity as a more appropriate yet fundamentally different alternative and the Gaussian partial information model, a very close yet practical specification of the framework.
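
As a rough sketch of what a Gaussian specification of the framework amounts to (the notation here is generic and not copied from the dissertation): the target Y and the forecasts X_1, …, X_N are modeled as jointly Gaussian, each forecast being the conditional expectation of Y given that forecaster's partial information, which forces Cov(X_j, Y) = Var(X_j); the optimal aggregate is then the Gaussian conditional expectation of Y given all forecasts. With everything centered at the common prior mean, this reads schematically as

  \[
    X_j = \mathbb{E}[Y \mid \mathcal{F}_j], \qquad
    \operatorname{Cov}(X_j, Y) = \operatorname{Var}(X_j), \qquad
    \mathbb{E}[Y \mid X_1, \dots, X_N] = \operatorname{diag}(\Sigma)^{\top} \Sigma^{-1} X,
  \]
  where X = (X_1, \dots, X_N)^{\top} and \Sigma = \operatorname{Cov}(X).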

Can Investors Profit from Information Diversity? The Wisdom of Crowds in Security Analyst Recommendations

There is heterogeneity in individual forecasts of any variable — inflation, corporate earnings, etc. The standard consensus estimate takes a simple average of individual forecasts, implicitly…

Combining and Extremizing Real-Valued Forecasts

This paper proposes a linear extremization technique for improving the weighted average of real-valued forecasts; the resulting more extreme version of the weighted average exhibits many properties of optimal aggregation.
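
A hedged sketch of what a linear extremization of a weighted average can look like (the function name, the default alpha, and the centering at a prior mean are assumptions for illustration, not the paper's exact estimator):

  import numpy as np

  def extremized_weighted_average(forecasts, weights, prior_mean=0.0, alpha=1.5):
      # Illustrative linear extremization: push the weighted average away
      # from the prior mean by a factor alpha >= 1 (alpha = 1.5 is an
      # arbitrary choice for this sketch).
      forecasts = np.asarray(forecasts, dtype=float)
      weights = np.asarray(weights, dtype=float)
      weights = weights / weights.sum()          # normalize the weights
      weighted_avg = float(weights @ forecasts)
      return prior_mean + alpha * (weighted_avg - prior_mean)

  # Example with forecasts already centered at a prior mean of 0.
  print(extremized_weighted_average([1.2, 0.8, 1.0], [1, 1, 2]))  # approximately 1.5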

References

Showing 1-10 of 25 references

Modeling Probability Forecasts via Information Diversity

A novel framework that uses partially overlapping information sources is proposed and applied to the task of aggregating the probabilities given by a group of forecasters who predict whether an event will occur or not; the framework gives a more principled understanding of the historically ad hoc practice of extremizing average forecasts.
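
For context on what "extremizing" typically means in this literature, one common recipe (given here as an illustration, not necessarily the estimator derived in this reference) averages the forecasts on the log-odds scale and scales the result by a factor a > 1 before mapping back to a probability:

  import numpy as np

  def extremized_logit_average(probs, a=2.0):
      # Average on the log-odds scale, scale by a > 1, map back to [0, 1].
      # The value a = 2.0 is an arbitrary illustration, not a fitted constant.
      probs = np.clip(np.asarray(probs, dtype=float), 1e-6, 1 - 1e-6)
      logits = np.log(probs / (1 - probs))
      return float(1.0 / (1.0 + np.exp(-a * logits.mean())))

  # Three forecasters at 0.7 average to 0.7; the extremized aggregate is ~0.84.
  print(extremized_logit_average([0.7, 0.7, 0.7]))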

Combining probability forecasts

Linear pooling is by far the most popular method for combining probability forecasts. However, any non-trivial weighted average of two or more distinct, calibrated probability forecasts is…
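
For reference, the linear opinion pool discussed here is the convex combination of the individual probability forecasts (the weights and notation below are generic, not specific to this reference):

  \[
    p_{\mathrm{pool}} = \sum_{i=1}^{N} w_i \, p_i,
    \qquad w_i \ge 0, \qquad \sum_{i=1}^{N} w_i = 1 .
  \]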

Human Judgement is Heavy Tailed: Empirical Evidence and Implications for the Aggregation of Estimates and Forecasts

How frequent are large disagreements in human judgment? The substantial literature relating to expert assessments of real-valued quantities and their aggregation almost universally assumes that…
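
To illustrate why tail behaviour matters for aggregation (a toy example, not data from the paper): a single heavy-tailed judgment can move the mean far more than the median.

  import numpy as np

  estimates = np.array([10.0, 11.0, 9.5, 10.5, 250.0])
  print(estimates.mean())      # 58.2 -- dominated by the single extreme judgment
  print(np.median(estimates))  # 10.5 -- largely unaffected by it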

Prediction without markets

It is found that the relative advantage of prediction markets is surprisingly small, as measured by squared error, calibration, and discrimination; as policy makers consider adoption, costs should be weighed against potentially modest benefits.
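
For concreteness, squared error for probability forecasts is the Brier score; a minimal version of that accuracy measure is sketched below (the evaluation details used in the paper may differ):

  import numpy as np

  def brier_score(probs, outcomes):
      # Mean squared error of probability forecasts against 0/1 outcomes.
      probs = np.asarray(probs, dtype=float)
      outcomes = np.asarray(outcomes, dtype=float)
      return float(np.mean((probs - outcomes) ** 2))

  print(brier_score([0.8, 0.3, 0.6], [1, 0, 1]))  # approximately 0.097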

Probability Elicitation, Scoring Rules, and Competition Among Forecasters

It is shown how a decision maker can revise the probability of an event after receiving reported probabilities from competing forecasters, and it is noted that the strategy of exaggerating probabilities can make well-calibrated forecasters appear overconfident.

The Good Judgment Project: A Large Scale Test of Different Methods of Combining Expert Predictions

It is found that teams and prediction markets systematically outperform averages of individual forecasters, that training forecasters helps, and that the exact way predictions are combined has a large effect on overall prediction accuracy.

Interpreted and generated signals

Optimal linear opinion pools

Consider a decision problem involving a group of m Bayesians in which each member reports his/her posterior distribution for some random variable θ. The individuals all share a common prior…
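
A generic way to read "optimal" for a linear opinion pool (the paper's exact criterion may differ; this is the standard least-squares version, allowing an intercept): choose the weights that minimize the expected squared error of the pooled posterior means, which leads to a normal-equations condition.

  \[
    (w_0^{\ast}, w^{\ast}) \;=\; \arg\min_{w_0,\,w}\;
      \mathbb{E}\!\left[\Big(\theta - w_0 - \sum_{i=1}^{m} w_i X_i\Big)^{2}\right]
    \quad\Longrightarrow\quad
    \operatorname{Cov}(X)\, w^{\ast} = \operatorname{Cov}(X, \theta),
  \]
  where X_i denotes member i's reported posterior mean for \theta.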

Psychological Strategies for Winning a Geopolitical Forecasting Tournament

Support is found for three psychological drivers of accuracy (training, teaming, and tracking) in a 2-year geopolitical forecasting tournament that produced the best forecasts two years in a row.

Why Are Experts Correlated? Decomposing Correlations Between Judges

We derive an analytic model of the inter-judge correlation as a function of five underlying parameters. Inter-cue correlation and the number of cues capture our assumptions about the environment…