Estimates of sensitivity and specificity can be biased by the preferential referral of patients with positive test responses or ancillary clinical abnormalities (the "concomitant information vector") for diagnostic verification. When these biased estimates are entered into Bayes' theorem, the resultant posterior disease probabilities (the positive and negative predictive accuracies) are similarly biased. Accordingly, a series of computer simulations was performed to quantify the effects of varying degrees of verification bias on the calculation of predictive accuracy by Bayes' theorem. The errors in the observed true-positive rate (sensitivity) and false-positive rate (the complement of specificity) ranged from +11% and +23%, respectively, when the test response and the concomitant information vector were conditionally independent, to +16% and +48% when they were conditionally dependent. When analyzed by Bayes' theorem at a base rate of 50%, these errors produced absolute underestimations as large as 22% in positive predictive accuracy and as large as 14% in negative predictive accuracy. Mathematical correction for verification conditioned on the test response, using a previously published algorithm, reduced these errors by as much as 20%. These data indicate 1) that verification bias significantly distorts predictive accuracies calculated by Bayes' theorem, and 2) that these distortions can be substantially offset by a correction algorithm.
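The predictive accuracies discussed above follow directly from Bayes' theorem, and the cited correction reweights disease rates observed in the verified subsets by the full test-result counts. A minimal sketch follows; the function names and all numeric inputs are hypothetical, and the correction shown is the standard Begg–Greenes form for verification conditioned on the test response, which is assumed (not confirmed by the abstract) to be the previously published algorithm in question:

```python
def predictive_accuracies(sens, spec, base_rate):
    """Positive and negative predictive accuracy via Bayes' theorem."""
    p = base_rate
    ppv = sens * p / (sens * p + (1 - spec) * (1 - p))
    npv = spec * (1 - p) / (spec * (1 - p) + (1 - sens) * p)
    return ppv, npv


def correct_for_verification_bias(n_pos, n_neg, v_pos, v_neg, d_pos, d_neg):
    """Begg-Greenes-style correction of sensitivity and specificity when
    only a fraction of patients is referred for verification and referral
    depends only on the test response (an assumed form of the correction).

    n_pos, n_neg: all patients testing positive / negative
    v_pos, v_neg: verified patients in each test-result group
    d_pos, d_neg: verified patients found diseased in each group
    """
    # P(disease | test result), estimable without bias from the verified
    # subsets because verification depends only on the test result
    p_d_pos = d_pos / v_pos
    p_d_neg = d_neg / v_neg
    # Reweight by the full test-result counts to undo the referral bias
    tp = p_d_pos * n_pos          # expected true positives overall
    fn = p_d_neg * n_neg          # expected false negatives overall
    fp = (1 - p_d_pos) * n_pos    # expected false positives overall
    tn = (1 - p_d_neg) * n_neg    # expected true negatives overall
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return sens, spec


# Hypothetical illustration of the direction of the bias: 1000 patients,
# true sens = 0.80, spec = 0.90, prevalence 0.5; all 450 test-positives
# verified (400 diseased) but only 55 of 550 test-negatives (10 diseased).
naive_sens = 400 / (400 + 10)          # ~0.98, inflated sensitivity
naive_spec = (55 - 10) / (45 + 50)     # ~0.47, inflated false-positive rate
sens, spec = correct_for_verification_bias(450, 550, 450, 55, 400, 10)
ppv, npv = predictive_accuracies(sens, spec, 0.5)
```

In the illustration, the naive estimates computed from verified patients alone overstate sensitivity and the false-positive rate (the direction of bias the abstract reports), while the corrected estimates recover the true 0.80 and 0.90, and hence unbiased predictive accuracies at the 50% base rate.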