On the Importance of Random Error in the Study of Probability Judgment. Part I: New Theoretical Developments

@article{Budescu1997OnTI,
  title={On the Importance of Random Error in the Study of Probability Judgment. Part I: New Theoretical Developments},
  author={David V. Budescu and Ido Erev and Thomas S. Wallsten},
  journal={Journal of Behavioral Decision Making},
  year={1997},
  volume={10},
  pages={157-171}
}
Erev, Wallsten, and Budescu (1994) demonstrated that over- and underconfidence can be observed simultaneously in judgment studies, as a function of the method used to analyze the data. They proposed a general model to account for this apparent paradox, which assumes that overt responses represent true judgments perturbed by random error. To illustrate that the model reproduces the pattern of results, they assumed perfectly calibrated true opinions and a particular form of error (log-odds plus normally distributed perturbation).
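The demonstration is easy to reproduce. The sketch below follows the paper's illustrative assumptions (perfectly calibrated covert judgments, independent normal error added in log-odds space) and shows that the standard calibration analysis, which conditions on the overt report, yields apparent overconfidence even though no judgment is biased. The variable names, noise scale, and bin layout are illustrative choices, not the authors' specification.

```python
# Minimal sketch of the stochastic judgment model, under assumed parameters.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

true_p = rng.uniform(0.01, 0.99, n)               # covert judgment, perfectly calibrated
outcome = rng.random(n) < true_p                  # event occurs with probability true_p
log_odds = np.log(true_p / (1 - true_p))
noise = rng.normal(0.0, 1.0, n)                   # random response error in log-odds space
stated_p = 1 / (1 + np.exp(-(log_odds + noise)))  # overt confidence report

# Standard calibration analysis: condition on the overt report.
bins = np.minimum((stated_p * 10).astype(int), 9)
for b in range(5, 10):                            # upper half of the confidence scale
    m = bins == b
    print(f"stated {b/10:.1f}-{(b+1)/10:.1f}: "
          f"mean confidence {stated_p[m].mean():.2f}, hit rate {outcome[m].mean():.2f}")
# Hit rates fall short of mean stated confidence (overconfidence), although
# every covert judgment was calibrated; conditioning on true_p instead
# produces the mirror-image underconfidence pattern.
```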

Should observed overconfidence be dismissed as a statistical artifact? Critique of Erev, Wallsten, and Budescu (1994)

It is argued in the present article that decomposing over- and underconfidence into true and artifactual components is inappropriate, because the mistake stems from giving primacy to ambiguously defined model constructs (true judgments) over observed data.

Rejoinder: error in confidence judgments

People are sometimes overconfident in their decisions, at least in laboratory settings. Or are they? Erev, Wallsten, and Budescu (1994) provided a demonstration that error could produce an appearance of overconfidence even when the underlying judgments are well calibrated.

A study of expert overconfidence

Representativeness revisited: Attribute substitution in intuitive judgment.

The program of research now known as the heuristics and biases approach began with a survey of 84 participants at the 1969 meetings of the Mathematical Psychology Society and the American Psychological Association.

Averaging probability judgments: Monte Carlo analyses of asymptotic diagnostic value

Wallsten et al. (1997) developed a general framework for assessing the quality of aggregated probability judgments. Within this framework they presented a theorem regarding the effects of pooling multiple judgments.

Overconfidence in interval estimates.

The authors show that overconfidence in interval estimates can result from variability in setting interval widths, and that subjective intervals are systematically too narrow given the accuracy of one's information, sometimes only 40% as large as necessary to be well calibrated.
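The 40% figure has a simple analytic counterpart. Assuming normally distributed estimation error (an illustrative assumption, not the authors' analysis), a calibrated 90% interval spans about ±1.645 standard deviations, and shrinking it to 40% of that width drops the hit rate to roughly 49%:

```python
# Back-of-the-envelope check of the "40% as large" claim, assuming
# normally distributed estimation error.
from scipy.stats import norm

z90 = norm.ppf(0.95)            # half-width of a calibrated 90% interval, in SD units
narrow = 0.40 * z90             # an interval only 40% as wide
hit_rate = 2 * norm.cdf(narrow) - 1
print(f"calibrated half-width: {z90:.3f} sd")
print(f"narrow-interval hit rate: {hit_rate:.1%}")  # ~48.9%, far below the stated 90%
```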

Overconfidence: It Depends on How, What, and Whom You Ask.

Determining why some people, some domains, and some types of judgments are more prone to overconfidence will be important to understanding how confidence judgments are made.

The effects of averaging subjective probability estimates between and within judges.

Two studies test three predictions regarding averaging that follow from theorems based on a cognitive model of the judges and idealizations of the judgment situation, showing the extent to which the predictions hold as the information conditions depart from the ideal and as the number of judges, J, increases.
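A toy simulation in the spirit of these predictions: judges share a calibrated covert judgment but report it with independent log-odds error, and the simple average of their reports grows more accurate as J increases. The setup is a minimal sketch under assumed parameters, not the studies' actual design.

```python
# Averaging J noisy probability reports of a shared covert judgment.
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
true_p = rng.uniform(0.05, 0.95, n)
outcome = (rng.random(n) < true_p).astype(float)
log_odds = np.log(true_p / (1 - true_p))

for J in (1, 2, 5, 10):
    noise = rng.normal(0.0, 1.0, (n, J))                  # independent error per judge
    reports = 1 / (1 + np.exp(-(log_odds[:, None] + noise)))
    avg = reports.mean(axis=1)                            # simple average across judges
    print(f"J={J:2d}: Brier score {np.mean((avg - outcome) ** 2):.4f}")
# The Brier score falls toward the irreducible outcome variance as J grows.
```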
...

References

Showing 1-10 of 30 references

On the Importance of Random Error in the Study of Probability Judgment. Part II: Applying the Stochastic Judgment Model to Detect Systematic Trends

Erev, Wallsten, and Budescu (1994) and Budescu, Erev, and Wallsten (1997) demonstrated that over- and underconfidence often observed in judgment studies may be due, in part, to the presence of random error.

Determinants of Overconfidence and Miscalibration: The Roles of Random Error and Ecological Structure☆

Previous authors have attributed findings of overconfidence to psychological bias or to experimental designs unrepresentative of the environment. This paper provides evidence for an explanation based on random error and the ecological structure of the judgment environment.

The Overconfidence Phenomenon as a Consequence of Informal Experimenter-Guided Selection of Almanac Items

The paper argues for an ecological approach to realism of confidence in general knowledge. It is stressed that choice of answer to almanac items and confidence judgments derive from inferences based on probabilistic cues.

Support theory: A nonextensional representation of subjective probability.

This article presents a new theory of subjective probability according to which different descriptions of the same event can give rise to different judgments. The experimental evidence confirms the major predictions of the theory.

Brunswikian and Thurstonian Origins of Bias in Probability Assessment: On the Interpretation of Stochastic Components of Judgment

The Brunswikian framework provided by the theory of Probabilistic Mental Models, and other theoretical stances inspired by probabilistic functionalism, is combined with the Thurstonian notion of a stochastic component of judgment to capture many of the important phenomena in the calibration literature.

Hypothesis Evaluation from a Bayesian Perspective.

Bayesian inference provides a general framework for evaluating hypotheses. It is a normative method in the sense of prescribing how hypotheses should be evaluated. However, it may also be used descriptively, as a framework for characterizing how people actually evaluate hypotheses.

Probabilistic mental models: a Brunswikian theory of confidence.

A comprehensive framework for the theory of probabilistic mental models (PMM theory) is proposed, which explains both the overconfidence effect and the hard-easy effect and predicts conditions under which both effects appear, disappear, or invert.

Comparing the calibration and coherence of numerical and verbal probability judgments

Despite the common reliance on numerical probability estimates in decision research and decision analysis, there is considerable interest in the use of verbal probability expressions to communicate uncertainty.

Judgment under uncertainty: Conservatism in human information processing

A number of experiments show that a major cause of conservatism is human misaggregation of the data: people perceive each datum accurately and are well aware of its individual diagnostic meaning, but are unable to combine its diagnostic meaning well with that of other data when revising their opinions.
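The normative benchmark against which conservatism is measured is plain Bayesian updating, usually illustrated with the bookbag-and-poker-chips task. The urn parameters and draw counts below are the standard textbook example, not data from the paper:

```python
# Normative Bayesian updating in the bookbag-and-poker-chips task.
p_red_A, p_red_B = 0.7, 0.3      # urn A is 70% red chips, urn B is 30% red
red, blue = 8, 4                 # observed draws (with replacement)

# Likelihood ratio for urn A vs. urn B; the binomial coefficients cancel.
lr = (p_red_A / p_red_B) ** red * ((1 - p_red_A) / (1 - p_red_B)) ** blue
posterior_A = lr / (lr + 1)      # assuming equal priors on the two urns
print(f"Bayesian posterior for urn A: {posterior_A:.3f}")   # about 0.967
# Subjects in such tasks typically revise only to around 0.7-0.8:
# the conservatism described in the abstract.
```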