The use of multimedia in assessing clinical decision-making skills (CDMS) has been poorly studied, particularly in comparison with traditional text-based assessments. The literature suggests that multimedia is more difficult for trainees. We hypothesized that pediatric residents would score lower in diagnostic skill when clinical vignettes presented patient findings as multimedia rather than text. A standardized method was developed to write text-based questions from 60 high-resolution, high-quality multimedia items; a series of expert panels selected 40 questions that had both a multimedia and a text-based version, and two online tests were developed. Each test featured the same 40 questions with reciprocal, alternating modality (multimedia vs. text). Pediatric residents and rising fourth-year medical students (MS-IV) at a single residency program were randomized, stratified by postgraduate training year (PGY), to complete one of the two tests. A mixed between-within subjects ANOVA assessed differences in score attributable to modality and PGY. Secondary analyses examined the modality effect on dermatology and respiratory questions using Mann-Whitney U tests and correlated test performance with In-service Training Exam (ITE) scores using Spearman rank correlation. Eighty-eight residents and rising interns completed the study. Overall multimedia scores were lower than text-based scores (p = 0.047, ηp² = 0.04), with the largest disparity among rising interns (MS-IV); however, PGY had a greater effect on scores (p = 0.001, ηp² = 0.16). Neither respiratory questions (n = 9, median 0.71 vs. 0.86, p = 0.09) nor dermatology questions (n = 13, p = 0.41) scored significantly lower with multimedia. ITE scores correlated significantly with text-based scores (ρ = 0.23-0.25, p = 0.04-0.06) but not with multimedia scores. Among physician trainees with less clinical experience, multimedia-based case vignettes were associated with significantly lower scores.
These results help shed light on the role of multimedia versus text-based information in the assessment of CDMS, particularly among less experienced clinicians.