
RESEARCH LIBRARY

Showing 21 - 30 of 40 Research Library Publications
Posted: June 1, 2018 | P. Harik, B. E. Clauser, I. Grabovsky, P. Baldwin, M. Margolis, D. Bucak, M. Jodoin, W. Walsh, S. Haist

Journal of Educational Measurement, 55: 308-327


The widespread move to computerized test delivery has led to the development of new approaches to evaluating how examinees use testing time and to new metrics designed to provide evidence about the extent to which time limits impact performance. Much of the existing research is based on these types of observational metrics; relatively few studies use randomized experiments to evaluate the impact of time limits on scores. Of those studies that do report on randomized experiments, none directly compare the experimental results to evidence from observational metrics to evaluate the extent to which these metrics are able to sensitively identify conditions in which time constraints actually impact scores. The present study provides such evidence based on data from a medical licensing examination.

Posted: June 1, 2018 | S. D. Stites, K. Harkins, J. D. Rubright, J. Karlawish

Alzheimer Disease & Associated Disorders: October–December 2018 - Volume 32 - Issue 4 - p 276-283


The purpose of this study is to examine the relationship between self-reports of cognitive complaints and quality of life (QOL) in persons with varying degrees of cognitive impairment.

Posted: June 1, 2018 | M. von Davier, J. H. Shin, L. Khorramdel, L. Stankov

Applied Psychological Measurement, Volume 42, Issue 4, pp. 291-306


The research presented in this article combines mathematical derivations and empirical results to investigate effects of the nonparametric anchoring vignette approach proposed by King, Murray, Salomon, and Tandon on the reliability and validity of rating data. The anchoring vignette approach aims to correct rating data for response styles to improve comparability across individuals and groups.

Posted: May 7, 2018 | D. Jurich, L. M. Duhigg, T. J. Plumb, S. A. Haist, J. L. Hawley, R. S. Lipner, L. Smith, S. M. Norby

CJASN May 2018, 13 (5) 710-717


Medical specialty and subspecialty fellowship programs administer subject-specific in-training examinations to provide feedback about level of medical knowledge to fellows preparing for subsequent board certification. This study evaluated the association between the American Society of Nephrology In-Training Examination and the American Board of Internal Medicine Nephrology Certification Examination in terms of scores and passing status.

Posted: May 1, 2018 | K. Short, S. D. Bucak, F. Rosenthal, M. R. Raymond

Academic Medicine: May 2018 - Volume 93 - Issue 5 - p 781-785


In 2007, the United States Medical Licensing Examination embedded multimedia simulations of heart sounds into multiple-choice questions. This study investigated changes in item difficulty as determined by examinee performance over time. The data reflect outcomes obtained following initial use of multimedia items from 2007 through 2012, after which an interface change occurred.

Posted: April 3, 2018 | Z. Jiang, M.R. Raymond

Applied Psychological Measurement, Volume 42, Issue 8, pp. 595-612


Conventional methods for evaluating the utility of subscores rely on reliability and correlation coefficients. However, correlations can overlook a notable source of variability: variation in subtest means/difficulties. Brennan introduced a reliability index for score profiles based on multivariate generalizability theory, designated as G, which is sensitive to variation in subtest difficulty. Yet there has been little, if any, research evaluating the properties of this index. A series of simulation experiments, as well as analyses of real data, were conducted to investigate G under various conditions of subtest reliability, subtest correlations, and variability in subtest means.

Posted: April 3, 2018 | I. Kirsch, W. Thorn, M. von Davier

Quality Assurance in Education, Vol. 26 No. 2, pp. 150-152


An introduction to a special issue of Quality Assurance in Education featuring papers based on presentations at a two-day international seminar on managing the quality of data collection in large-scale assessments.

Posted: April 3, 2018 | M. von Davier

Quality Assurance in Education, Vol. 26 No. 2, pp. 243-262


Surveys that include skill measures may suffer from additional sources of error compared to those containing questionnaires alone. Examples are distractions such as noise or interruptions of testing sessions, as well as fatigue or lack of motivation to succeed. This paper aims to provide a review of statistical tools based on latent variable modeling approaches extended by explanatory variables that allow detection of survey errors in skill surveys.

Posted: April 1, 2018 | R. A. Feinberg, D. P. Jurich, L. M. Foster

Academic Medicine: April 2018 - Volume 93 - Issue 4 - p 636-641


Increasing criticism of maintenance of certification (MOC) examinations has prompted certifying boards to explore alternative assessment formats. The purpose of this study was to examine the effect of allowing test takers to access reference material while completing their MOC Part III standardized examination.

Posted: April 1, 2018 | R. A. Feinberg, D. P. Jurich, L. M. Foster

Academic Medicine: November 2018 - Volume 93 - Issue 11S - p S14-S20


An important goal of medical education is to teach students to use an electronic health record (EHR) safely and effectively. The purpose of this study is to examine medical student accounts of EHR use during their core inpatient clinical clerkships using a national sample. Paper health records (PHRs) are similarly examined.