Over the past two decades of psychometric research, an array of extended item response models has been proposed to capture the complex nature of human cognition. While the literature abounds with model-fit analyses, the debate over model selection under different testing conditions continues. This study examines model selection in computerized adaptive testing (CAT) of cognitive errors by comparing the relative measurement efficiency of polytomous modeling with that of dichotomous modeling under different scoring schemes and termination criteria. A Monte Carlo simulation was conducted in which 1,000 subjects and 100 items were generated for the calibration sample and 200 simulees for the CAT sample. The results suggest that polytomous CAT yields marginal gains over dichotomous CAT when the termination criteria are more stringent (a shorter test length or a smaller standard error of the ability estimate). When the conventional dichotomous scoring scheme is adopted, in which all partially correct answers are scored as incorrect, polytomous CAT cannot prevent the non-uniform gain in test information that was observed in paper-and-pencil testing.
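To make the simulation design concrete, the following is a minimal sketch of a dichotomous CAT cycle of the kind compared in such studies: maximum-information item selection under a 2PL model, EAP ability estimation on a grid, and termination on either a maximum test length or a standard-error cutoff. All parameters here (bank size, discrimination and difficulty ranges, stopping values) are illustrative assumptions, not the calibrated values used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2PL item bank: discriminations a, difficulties b
n_items = 100
a = rng.uniform(0.8, 2.0, n_items)
b = rng.normal(0.0, 1.0, n_items)

# Quadrature grid and standard-normal prior for EAP scoring
grid = np.linspace(-4.0, 4.0, 81)
prior = np.exp(-0.5 * grid**2)

def p_correct(theta, a_i, b_i):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a_i * (theta - b_i)))

def run_cat(theta_true, max_len=20, se_stop=0.30):
    """Administer one CAT: max-information selection, EAP update,
    stop at max_len items or when the posterior SE falls below se_stop."""
    post = prior.copy()
    used = []
    theta_hat, se = 0.0, np.inf
    while len(used) < max_len and se > se_stop:
        p = p_correct(theta_hat, a, b)
        info = a**2 * p * (1.0 - p)      # 2PL Fisher information
        info[used] = -np.inf             # never reuse an item
        item = int(np.argmax(info))
        used.append(item)
        # Simulate the examinee's response from the true ability
        resp = rng.random() < p_correct(theta_true, a[item], b[item])
        like = p_correct(grid, a[item], b[item])
        post *= like if resp else (1.0 - like)
        post /= post.sum()
        theta_hat = float(np.sum(grid * post))
        se = float(np.sqrt(np.sum((grid - theta_hat) ** 2 * post)))
    return theta_hat, se, len(used)
```

A typical use is to draw a sample of true abilities, run `run_cat` for each simulee, and compare recovered estimates (and realized test lengths) across scoring schemes and stopping rules; a polytomous variant would replace the 2PL likelihood and information with, e.g., partial-credit analogues.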