TY - JOUR
T1 - Evaluation of an intervention to improve quality of single-best answer multiple-choice questions
AU - Scott, Kevin R.
AU - King, Andrew M.
AU - Estes, Molly K.
AU - Conlon, Lauren W.
AU - Jones, Jonathan S.
AU - Phillips, Andrew W.
N1 - Publisher Copyright:
© 2019 Scott et al.
PY - 2019
Y1 - 2019
N2 - Introduction: Despite the ubiquity of single-best answer multiple-choice questions (MCQ) in assessments throughout medical education, question writers often receive little to no formal training, potentially decreasing the validity of assessments. While lengthy training opportunities in item writing exist, the availability of brief interventions is limited. Methods: We developed and performed an initial validation of an item-quality assessment tool and measured the impact of a brief educational intervention on the quality of single-best answer MCQs. Results: The item-quality assessment tool demonstrated moderate internal structure evidence when applied to the 20 practice questions (κ = 0.671, p < 0.001) and excellent internal structure when applied to the true dataset (κ = 0.904, p < 0.001). Quality scale scores for pre-intervention questions ranged from 2-6 with a mean ± standard deviation (SD) of 3.79 ± 1.23, while post-intervention scores ranged from 4-6 with a mean ± SD of 5.42 ± 0.69. The post-intervention scores were significantly higher than the pre-intervention scores, χ2(1) = 38, p < 0.001. Conclusion: Our study demonstrated short-term improvement in single-best answer MCQ writing quality after a brief, open-access lecture, as measured by a simple, novel grading rubric with reasonable validity evidence.
AB - Introduction: Despite the ubiquity of single-best answer multiple-choice questions (MCQ) in assessments throughout medical education, question writers often receive little to no formal training, potentially decreasing the validity of assessments. While lengthy training opportunities in item writing exist, the availability of brief interventions is limited. Methods: We developed and performed an initial validation of an item-quality assessment tool and measured the impact of a brief educational intervention on the quality of single-best answer MCQs. Results: The item-quality assessment tool demonstrated moderate internal structure evidence when applied to the 20 practice questions (κ = 0.671, p < 0.001) and excellent internal structure when applied to the true dataset (κ = 0.904, p < 0.001). Quality scale scores for pre-intervention questions ranged from 2-6 with a mean ± standard deviation (SD) of 3.79 ± 1.23, while post-intervention scores ranged from 4-6 with a mean ± SD of 5.42 ± 0.69. The post-intervention scores were significantly higher than the pre-intervention scores, χ2(1) = 38, p < 0.001. Conclusion: Our study demonstrated short-term improvement in single-best answer MCQ writing quality after a brief, open-access lecture, as measured by a simple, novel grading rubric with reasonable validity evidence.
UR - http://www.scopus.com/inward/record.url?scp=85059883383&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85059883383&partnerID=8YFLogxK
U2 - 10.5811/westjem.2018.11.39805
DO - 10.5811/westjem.2018.11.39805
M3 - Article
C2 - 30643595
AN - SCOPUS:85059883383
SN - 1936-900X
VL - 20
SP - 11
EP - 14
JO - Western Journal of Emergency Medicine
JF - Western Journal of Emergency Medicine
IS - 1
ER -