Imprecise Probabilities


Consider the uncertainty about whether it will rain in Brisbane next weekend. A weather forecaster may be able to assess a precise probability of rain, such as 0.3285…, although even an expert should feel uncomfortable about specifying a probability to more than one or two decimal places. Someone who has little information about the prospects for rain may be able to make only an imprecise judgment such as "it will probably not rain", "it is more likely to rain tomorrow than at the weekend", or "the probability of rain is between 0.2 and 0.4". People living outside Australia may be completely ignorant about the weather in Brisbane and assign lower probability 0 and upper probability 1. Probabilities based on extensive data can thus be distinguished, through their precision, from those based on ignorance.

As a simple statistical example, consider an urn containing coloured balls. Initially nothing is known about the colours. How should we model the uncertainty about the colour of the next ball that will be drawn from the urn, and how should we update the model after some balls are sampled? Intuitively, because we are completely ignorant about the colours initially, all conceivable colours should be assigned lower probability 0 and upper probability 1. After repeated sampling with replacement, the posterior upper and lower probabilities of any colour should become increasingly precise and converge to the observed relative frequency of that colour. There are imprecise probability models for the learning process which have these properties, which treat all colours symmetrically, and which are coherent.

Imprecise probability models are needed in many applications of probabilistic and statistical reasoning. They have been used in the following kinds of problems:
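The urn example above can be sketched in code. One model with the stated properties is the imprecise Dirichlet model, in which the lower and upper probabilities of a colour observed n times in N draws are n/(N + s) and (n + s)/(N + s) for a fixed hyperparameter s > 0. The sketch below assumes s = 2; the function name and interface are illustrative, not from the paper.

```python
from collections import Counter

def idm_probabilities(observations, colour, s=2):
    """Lower and upper posterior probabilities that the next draw is
    `colour`, under an imprecise Dirichlet model with hyperparameter s
    (an assumed choice; Walley suggests small values such as 1 or 2)."""
    counts = Counter(observations)
    n = counts[colour]           # draws of this colour so far
    total = len(observations)    # total draws so far
    lower = n / (total + s)
    upper = (n + s) / (total + s)
    return lower, upper

# Complete ignorance before any draws: vacuous interval [0, 1].
print(idm_probabilities([], "red"))            # (0.0, 1.0)

# After sampling, the interval narrows around the relative frequency.
sample = ["red"] * 3 + ["blue"] * 7
print(idm_probabilities(sample, "red"))        # (0.25, 0.41666...)
```

As the sample grows, both bounds converge to the observed relative frequency, and the model treats all colours symmetrically, matching the behaviour described above.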

Cite this paper

@inproceedings{Walley2000ImpreciseP,
  title  = {Imprecise Probabilities},
  author = {Peter Walley},
  year   = {2000}
}