Unpacking the Future: A Nudge Toward Wider Subjective Confidence Intervals

Kriti Jain, Kanchan Mukherjee, J. Neil Bearden, and Anil Gaba. FEN: Behavioral Finance (Topic).
Subjective probabilistic judgments in forecasting are inevitable in many real-life domains. A common way to obtain such judgments is to assess fractiles or confidence intervals. However, these judgments tend to be systematically overconfident, and it has proved particularly difficult to debias such forecasts and improve their calibration. This paper proposes a simple process that systematically leads to wider confidence intervals, thus reducing overconfidence. With a series of experiments…
Overconfidence in Probability Distributions: People Know They Don’t Know but They Don’t Know What to Do About It
New methods are developed to analyze judgments about variables that entail both epistemic and aleatory uncertainty; although subjective probability distributions (SPDs) roughly match the aleatory concentration of the real-world distributions, people's judgments are consistently overconfident.
Aggregating multiple probability intervals to improve calibration
It is demonstrated that collective probability intervals obtained by several heuristics can reduce the typical overconfidence of the individual estimates.
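The aggregation idea can be sketched in a few lines. Below is a minimal illustration of two simple combination heuristics for several judges' intervals; the heuristic names and values are my own for illustration, not necessarily the ones studied in the paper.

```python
import statistics

def aggregate_intervals(intervals, method="mean"):
    """Combine several judges' subjective intervals into one collective interval.

    Two illustrative heuristics (names are mine, not the paper's):
    - "mean": average the lower and upper bounds across judges
    - "envelope": take the widest span covered by any judge
    """
    lows = [lo for lo, hi in intervals]
    highs = [hi for lo, hi in intervals]
    if method == "mean":
        return (statistics.mean(lows), statistics.mean(highs))
    if method == "envelope":
        return (min(lows), max(highs))
    raise ValueError(f"unknown method: {method}")

# Three hypothetical judges' 90% intervals for the same quantity
judges = [(10, 20), (12, 30), (5, 18)]
print(aggregate_intervals(judges, "mean"))      # averaged bounds
print(aggregate_intervals(judges, "envelope"))  # (5, 30)
```

Both heuristics tend to produce wider collective intervals than a typical individual judge's, which is the mechanism by which aggregation can offset individual overconfidence.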
Time unpacking effect and its impact on intertemporal decision making
Time perception and judgment are relevant to everyone and are an integral part of decision making, because any meaningful choice is embedded in a temporal context. The unpacking effect (Tversky &
Wide of the Mark: Evidence on the Underlying Causes of Overprecision in Judgment
Overprecision is the most robust and least understood form of overconfidence. In an attempt to elucidate the underlying causes of overprecision in judgment, the present paper offers a new approach
A strategy to improve expert technology forecasts
A hybrid approach to expert elicitation is outlined that iteratively combines the judgments of technical domain experts with those of experts who are knowledgeable about broader issues of technology adoption and public policy to improve forecasts of future technologies.
Bounded Cognition and Representativeness in Forecasting
Most operations models assume that individuals have perfect beliefs about random variables or stochastic processes. In reality, however, individuals make judgment errors and are subject to
A Behavioral Model of Forecasting: Naive Statistics on Mental Samples
This work uses established psychology on sample naivete to model individuals’ forecasting errors and biases in a way that is portable to operations models and derives 10 behavioral phenomena that are inconsistent with perfect rationality assumptions but supported by existing empirical evidence.
Nudge for environmental protection
One relatively new approach to influencing human behavior, based on insights from psychology, that could complement or possibly replace some current environmental policies, is to rely
A game-based intervention for the reduction of statistical cognitive biases
This thesis examines whether a one-hour game-based intervention can change the intuitive mental models people use for reasoning about probability and uncertainty in real life; results of user tests suggest it is possible to alter probabilistic intuitions.


Coherence and Consistency of Investors' Probability Judgments
This study investigates the quality of direct probability judgments and quantile estimates, with a focus on calibration and consistency, and finds that the judgments were internally consistent and coherent but in most cases slightly miscalibrated.
Overconfidence in interval estimates.
The authors show that overconfidence in interval estimates can result from variability in setting interval widths, and that subjective intervals are systematically too narrow given the accuracy of one's information, sometimes only 40% as large as necessary to be well calibrated.
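The width-variability mechanism can be demonstrated with a short Monte Carlo sketch. Under the (assumed, illustrative) model below, a judge's estimation error is standard normal and the stated half-widths are noisy around the value that would give exactly 90% coverage; because coverage is a concave function of width, noisy widths alone drag the hit rate below the nominal 90%, producing apparent overconfidence. The model parameters are mine, not the paper's.

```python
import random

random.seed(1)

Z90 = 1.645  # half-width giving ~90% coverage for a standard normal error

def hit_rate(width_sd, n=200_000):
    """Fraction of intervals that capture the truth when the judge's
    half-widths are noisy around the well-calibrated value Z90."""
    hits = 0
    for _ in range(n):
        error = random.gauss(0, 1)               # judge's estimation error
        half = abs(random.gauss(Z90, width_sd))  # noisy interval half-width
        hits += abs(error) <= half
    return hits / n

# Same average half-width, but noisier widths -> lower coverage
print(hit_rate(0.0))  # close to the nominal 0.90
print(hit_rate(1.0))  # noticeably below 0.90, despite no change in mean width
```

The key point is that variability in widths hurts even when the average width is well calibrated, which is one route to the "too narrow" intervals the abstract describes.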
Subjective probability intervals: how to reduce overconfidence by interval evaluation.
In 2 experiments, the authors demonstrate that the overconfidence bias that occurs when participants produce intervals for an uncertain quantity is almost abolished when they evaluate the probability that the same intervals include the quantity.
Overconfidence: It Depends on How, What, and Whom You Ask.
Determining why some people, some domains, and some types of judgments are more prone to overconfidence will be important to understanding how confidence judgments are made.
A simple remedy for overprecision in judgment.
A new method is presented that significantly reduces this bias and offers insight into its underlying cause: overprecision was significantly reduced by forcing participants to consider all possible outcomes of an event.
Judgment under uncertainty: A progress report on the training of probability assessors
In prescriptive analyses of decisions under uncertainty, decision makers and their expert advisors are often called upon to assess judgmental probability distributions of quantities whose values are
Support theory: A nonextensional representation of subjective probability.
This article presents a new theory of subjective probability according to which different descriptions of the same event can give rise to different judgments. The experimental evidence confirms the
The evolution of overconfidence
An evolutionary model is presented showing that, counterintuitively, overconfidence maximizes individual fitness and populations tend to become overconfident, as long as benefits from contested resources are sufficiently large compared with the cost of competition.
When 90% confidence intervals are 50% certain: on the credibility of credible intervals
Estimated confidence intervals for general knowledge items are usually too narrow. We report five experiments showing that people have much less confidence in these intervals than dictated by the