How can a machine learn from experience? Probabilistic modelling provides a framework for understanding what learning is, and has therefore emerged as one of the principal theoretical and practical approaches for designing machines that learn from data acquired through experience. The probabilistic framework, which describes how to represent and manipulate…
Figure 1 | Bayesian inference. A simple example of Bayesian inference applied to a medical diagnosis problem. Here the problem is diagnosing a rare disease using information from the patient's symptoms and, potentially, the patient's genetic marker measurements, which indicate predisposition (gen pred) to this disease. In this example, all variables are assumed to be binary. T, true; F, false. The relationships between variables are indicated by directed arrows, and the probability of each variable given the variables it directly depends on is also shown. Yellow nodes denote measurable variables, whereas green nodes denote hidden variables. Using the sum rule (Box 1), the prior probability of the patient having the rare disease is: P(rare disease = T) = P(rare disease = T | gen pred = T) P(gen pred = T) + P(rare disease = T | gen pred = F) P(gen pred = F) = 1.1 × 10^−5. Applying Bayes' rule, we find that for a patient observed to have the symptom, the probability of the rare disease is P(rare disease = T | symptom = T) = 8.8 × 10^−4, whereas for a patient observed to have the genetic marker (gen marker) it is P(rare disease = T | gen marker = T) = 7.9 × 10^−4. Assuming that the patient has both the symptom and the genetic marker, the probability of the rare disease increases to P(rare disease = T | symptom = T, gen marker = T) = 0.06. Here, we have assumed fixed, known model parameters, that is, the numbers θ = (10^−4, 0.1, 10^−6, 0.8, 0.01, 0.8, 0.01). However, both these parameters and the structure of the model (the presence or absence of arrows and additional hidden variables) could be learned from a data set of patient records using the methods in Box 1.
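The posterior probabilities quoted in the caption can be reproduced by brute-force enumeration of the joint distribution over the four binary variables. The following is a minimal sketch, not taken from the paper; it assumes the parameters in θ map, in order, to P(gen pred = T), P(rare disease = T | gen pred), P(symptom = T | rare disease) and P(gen marker = T | gen pred), which is consistent with the arrow structure described above.

```python
from itertools import product

# Assumed mapping of theta = (1e-4, 0.1, 1e-6, 0.8, 0.01, 0.8, 0.01)
P_GEN_PRED = 1e-4                       # P(gen pred = T)
P_DISEASE = {True: 0.1, False: 1e-6}    # P(rare disease = T | gen pred)
P_SYMPTOM = {True: 0.8, False: 0.01}    # P(symptom = T | rare disease)
P_MARKER = {True: 0.8, False: 0.01}     # P(gen marker = T | gen pred)

def joint(g, d, s, m):
    """P(gen pred = g, disease = d, symptom = s, marker = m) via the product rule."""
    p = P_GEN_PRED if g else 1 - P_GEN_PRED
    p *= P_DISEASE[g] if d else 1 - P_DISEASE[g]
    p *= P_SYMPTOM[d] if s else 1 - P_SYMPTOM[d]
    p *= P_MARKER[g] if m else 1 - P_MARKER[g]
    return p

def prob_disease(**evidence):
    """P(disease = T | evidence): sum rule over unobserved variables, then Bayes' rule."""
    num = den = 0.0
    for g, d, s, m in product([True, False], repeat=4):
        state = dict(g=g, d=d, s=s, m=m)
        if any(state[k] != v for k, v in evidence.items()):
            continue  # state is inconsistent with the observed evidence
        p = joint(g, d, s, m)
        den += p
        if d:
            num += p
    return num / den

print(prob_disease())                   # prior: ~1.1e-5
print(prob_disease(s=True))             # symptom observed: ~8.8e-4
print(prob_disease(m=True))             # marker observed: ~7.9e-4
print(prob_disease(s=True, m=True))     # both observed: ~0.06
```

Enumeration is exponential in the number of variables, so it only serves as a check here; for larger graphs one would use the message-passing or sampling methods discussed in the main text.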