
RESEARCH LIBRARY

View the latest publications from members of the NBME research team

Showing 1-10 of 52 Research Library Publications
John Norcini, Irina Grabovsky, Michael A. Barone, M. Brownell Anderson, Ravi S. Pandian, Alex J. Mechaber

Academic Medicine: Volume 99, Issue 3, Pages 325-330


This retrospective cohort study investigates the association between United States Medical Licensing Examination (USMLE) scores and patient outcomes across 196,881 hospitalizations in Pennsylvania over a 3-year period.

Victoria Yaneva, Peter Baldwin, Daniel P. Jurich, Kimberly Swygert, Brian E. Clauser

Academic Medicine: Volume 99, Issue 2, Pages 192-197


This report investigates how well artificial intelligence (AI) agents, exemplified by ChatGPT, can perform on the United States Medical Licensing Examination (USMLE), following reports of ChatGPT's successful performance on sample items.

Daniel Jurich, Chunyan Liu

Applied Measurement in Education: Volume 36, Issue 4, Pages 326-339


This study examines strategies for detecting item parameter drift in small-sample equating, which is crucial for maintaining score comparability on high-stakes exams. Results suggest that methods such as mINFIT, mOUTFIT, and Robust-z effectively mitigate the effects of drifting anchor items, while caution is advised with the Logit Difference approach. Recommendations are provided for practitioners managing item parameter drift in small-sample settings.
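As a rough illustration of how a Robust-z check can flag drifting anchor items, here is a minimal sketch; the flagging threshold and the toy data are placeholder assumptions, not the study's implementation, and the mINFIT/mOUTFIT fit statistics are separate methods not shown here.

```python
import numpy as np

def robust_z(b_old, b_new, threshold=3.0):
    """Flag anchor items whose difficulty estimates drifted between forms.

    b_old, b_new: difficulty estimates (e.g., Rasch logits) for the same
    anchor items on the reference and new administrations.
    Returns the Robust-z values and a boolean mask of flagged items.
    """
    d = np.asarray(b_new) - np.asarray(b_old)  # per-item difficulty shift
    q1, q3 = np.percentile(d, [25, 75])
    # 0.74 * IQR approximates the standard deviation under normality but is
    # resistant to the very outliers (drifting items) we are trying to catch.
    z = (d - np.median(d)) / (0.74 * (q3 - q1))
    return z, np.abs(z) > threshold

# Toy example: the fourth anchor item has drifted noticeably.
b_old = np.array([-1.2, -0.4, 0.1, 0.6, 1.3])
b_new = np.array([-1.1, -0.5, 0.2, 1.8, 1.2])
z, flagged = robust_z(b_old, b_new)
print(np.round(z, 2), flagged)  # only item 4 exceeds the threshold
```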

Daniel P. Jurich, Matthew J. Madison

Educational Assessment


This study proposes four indices to quantify item influence and distinguishes them from other available item and test measures. We use simulation methods to evaluate each index and provide guidelines for its interpretation, followed by a real-data application illustrating their use in practice. We also discuss theoretical considerations regarding when influence presents a psychometric concern, as well as practical questions such as how the indices behave when influence imbalance is reduced.

King Yiu Suen, Victoria Yaneva, Le An Ha, Janet Mee, Yiyun Zhou, Polina Harik

Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023), Pages 443-447


This paper presents the ACTA system, which performs automated short-answer grading in the domain of high-stakes medical exams. The system builds on previous neural similarity-based grading approaches, applying them to the medical domain and using contrastive learning to optimize the similarity metric.
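As a rough sketch of the general similarity-based grading approach (not the ACTA system itself; the encoder model and the 0.7 threshold are placeholder assumptions), a response can be marked correct when its embedding lands close enough to any reference answer. In an ACTA-style system the encoder would additionally be fine-tuned with a contrastive objective so that correct responses embed near the references.

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # any sentence encoder works

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder off-the-shelf model

def grade(response, reference_answers, threshold=0.7):
    """Mark a response correct if it is similar enough to a reference answer."""
    embeddings = model.encode([response] + list(reference_answers))
    resp, refs = embeddings[0], embeddings[1:]
    # Cosine similarity between the response and each reference answer.
    sims = refs @ resp / (np.linalg.norm(refs, axis=1) * np.linalg.norm(resp))
    return sims.max() >= threshold

print(grade("inflammation of the appendix",
            ["appendicitis", "acute appendicitis"]))
```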

Victoria Yaneva, Peter Baldwin, Le An Ha, Christopher Runyon

Advancing Natural Language Processing in Educational Assessment: Pages 167-182


This chapter discusses the evolution of natural language processing (NLP) approaches to text representation and how different ways of representing text can be utilized for a relatively understudied task in educational assessment: predicting item characteristics from item text.
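One minimal version of this task, sketched under assumed choices (TF-IDF features, ridge regression, and toy data, none of which come from the chapter): represent each item's text as features and regress a known item characteristic on them.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Toy data: item stems paired with hypothetical difficulty parameters
# (e.g., IRT b-values obtained from pretesting).
stems = [
    "A 45-year-old man presents with crushing substernal chest pain ...",
    "Which vitamin deficiency causes scurvy?",
    "A 3-week-old infant presents with projectile vomiting ...",
]
difficulty = [0.8, -1.2, 0.3]

# Represent each item text as TF-IDF features, then regress difficulty on them.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
model.fit(stems, difficulty)

# Predict the difficulty of an unseen item from its text alone.
print(model.predict(["Which enzyme is deficient in phenylketonuria?"]))
```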

Polina Harik, Janet Mee, Christopher Runyon, Brian E. Clauser

Advancing Natural Language Processing in Educational Assessment: Pages 58-73


This chapter describes INCITE, an NLP-based system for scoring free-text responses. It emphasizes the importance of context and of the system's intended use, and explains how each component of the system contributed to its accuracy.

Matthias von Davier, Brian Clauser

Essays on Contemporary Psychometrics: Pages 163-180


This paper shows that using non-linear functions for equating and score transformations leads to consequences that are incompatible with classical test theory (CTT). More specifically, a well-known result (Jensen's inequality) shows that the expected value of a non-linearly transformed variable does not, in general, equal the transformed expected value of that variable.
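A brief worked instance of that result (the specific transformation below is an assumed toy example, not one from the paper):

```latex
\[
  \mathbb{E}\!\left[g(X)\right] \neq g\!\left(\mathbb{E}[X]\right)
  \quad \text{for non-linear } g \text{ in general.}
\]
% Concrete check: g(x) = x^2 with P(X=0) = P(X=2) = 1/2
\[
  \mathbb{E}\!\left[X^{2}\right] = \tfrac{0 + 4}{2} = 2,
  \qquad
  \left(\mathbb{E}[X]\right)^{2} = 1^{2} = 1,
  \qquad
  2 \neq 1.
\]
```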

Christopher Runyon, Polina Harik, Michael Barone

Diagnosis: Volume 10, Issue 1, Pages 54-60


This op-ed discusses the advantages of leveraging natural language processing (NLP) in the assessment of clinical reasoning. It also provides an overview of INCITE, the Intelligent Clinical Text Evaluator, a scalable NLP-based computer-assisted scoring system that was developed to measure clinical reasoning ability as assessed in the written documentation portion of the now-discontinued USMLE Step 2 Clinical Skills examination. 

Hanin Rashid, Christopher Runyon, Jesse Burk-Rafel, Monica M. Cuddy, Liselotte Dyrbye, Katie Arnhart, Ulana Luciw-Dubas, Hilit F. Mechaber, Steve Lieberman, Miguel Paniagua

Academic Medicine: Volume 97, Issue 11S, Page S176


As Step 1 transitions to pass/fail score reporting, the impact of score goals on student wellness warrants consideration. This study examines the relationship between goal score, gender, and students' self-reported anxiety, stress, and overall distress immediately following their completion of Step 1.