The paper investigates the statistical effects which may need to be exploited in supervised learning. It notes that these effects can be classified according to their conditionality and their order, and proposes that learning algorithms will typically have some form of bias towards particular classes of effect. It presents the results of an empirical study of the statistical bias of backpropagation. The study involved applying the algorithm to a wide range of learning problems using a variety of different internal architectures. The results of the study revealed that backpropagation has a very specific bias in the general direction of statistical rather than relational effects. The paper shows how the existence of this bias effectively constitutes a weakness in the algorithm's ability to discount noise.