Exploring Language Patterns in a Medical Licensure Exam Item Bank

  • Swati Padhee, Kimberly A. Swygert, Ian Micir
  • Published 20 November 2021
  • Psychology, Computer Science
  • 2021 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)
This study examines the use of natural language processing (NLP) models to evaluate whether language patterns used by item writers in a medical licensure exam might contain evidence of biased or stereotypical language. This type of bias in item language choices can be particularly impactful for a medical licensure assessment, as it could threaten content validity and the defensibility of test score validity evidence. To the best of our knowledge, this is the first attempt using…

Implicit bias in healthcare professionals: a systematic review
The evidence indicates that healthcare professionals exhibit the same levels of implicit bias as the wider population, highlighting the need for the profession to address the role of implicit biases in healthcare disparities.
Publicly Available Clinical BERT Embeddings
This work explores and releases two BERT models for clinical text, one for generic clinical text and one for discharge summaries specifically, and demonstrates that a domain-specific model yields performance improvements on three of five clinical NLP tasks, establishing a new state of the art on the MedNLI dataset.
Semantics derived automatically from language corpora contain human-like biases
It is shown that machines can learn word associations from written texts and that these associations mirror those learned by humans, as measured by the Implicit Association Test (IAT), and that applying machine learning to ordinary human language results in human-like semantic biases.
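The word-association effect described above can be made concrete with a WEAT-style score: a word's mean cosine similarity to one attribute word set minus its mean similarity to another. A minimal sketch, using toy 2-D vectors in place of trained embeddings (all names and values below are illustrative, not taken from the paper):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    """WEAT-style association: mean similarity of word vector w to
    attribute set A minus its mean similarity to attribute set B."""
    return (np.mean([cosine(w, a) for a in A])
            - np.mean([cosine(w, b) for b in B]))

# Toy 2-D "embeddings" chosen only to illustrate the sign of the score.
career = np.array([1.0, 0.0])
family = np.array([0.0, 1.0])
male_terms = [np.array([0.9, 0.1])]
female_terms = [np.array([0.1, 0.9])]

print(round(association(career, male_terms, female_terms), 3))  # positive
print(round(association(family, male_terms, female_terms), 3))  # negative
```

A positive score means the target word sits closer to the first attribute set; with real embeddings, systematic nonzero scores across word groups are the "human-like biases" the paper reports.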
Item bias and item response theory
Physicians’ Implicit and Explicit Attitudes About Race by MD Race, Ethnicity, and Gender
Implicit and explicit attitudes about race, measured with the Race Attitude Implicit Association Test in a large sample of test takers including a sub-sample of medical doctors, showed an implicit preference for White Americans relative to Black Americans; women showed less implicit bias than men.
Implicit Bias and Its Relation to Health Disparities: A Teaching Program and Survey of Medical Students
An educational intervention addressing both health disparities and physician implicit bias, together with a subsequent survey of medical students’ attitudes and beliefs toward subconscious bias and health disparities, supports the value of teaching medical students to recognize their own implicit biases and to develop skills to overcome them in each patient encounter.
SciBERT: A Pretrained Language Model for Scientific Text
SciBERT leverages unsupervised pretraining on a large multi-domain corpus of scientific publications to improve performance on downstream scientific NLP tasks and demonstrates statistically significant improvements over BERT.
The influence of patient sex, provider sex, and sexist attitudes on pain treatment decisions.
Measuring individual differences in implicit cognition: the implicit association test.
An implicit association test (IAT) measures the differential association of two target concepts with an attribute: when instructions oblige highly associated categories to share a response key, performance is faster than when less associated categories share a key.
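The latency contrast behind the IAT is often summarized as a D score: the mean response-time difference between incongruent and congruent blocks, scaled by the pooled standard deviation. A simplified sketch with fabricated latencies (this is not the full Greenwald et al. improved scoring algorithm, which adds error penalties and trimming):

```python
import statistics

def iat_d_score(congruent_ms, incongruent_ms):
    """Simplified IAT D score: slower responding in the incongruent
    block, relative to pooled variability, indicates a stronger
    implicit association in the congruent pairing."""
    pooled_sd = statistics.stdev(congruent_ms + incongruent_ms)
    return ((statistics.mean(incongruent_ms)
             - statistics.mean(congruent_ms)) / pooled_sd)

# Illustrative latencies in milliseconds (made up for this sketch).
congruent = [600, 620, 640, 610, 630]
incongruent = [700, 730, 710, 720, 740]

print(round(iat_d_score(congruent, incongruent), 2))
```

A D score near zero indicates no latency difference between pairings; larger positive values indicate a stronger implicit preference for the congruent pairing.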
Engendering pain management practices: the role of physician sex on chronic low-back pain assessment and treatment prescriptions.