TY - JOUR
T1 - Debriefing Assessment for Simulation in Healthcare
T2 - Development and psychometric properties
AU - Brett-Fleegler, Marisa
AU - Rudolph, Jenny
AU - Eppich, Walter J
AU - Monuteaux, Michael
AU - Fleegler, Eric
AU - Cheng, Adam
AU - Simon, Robert
PY - 2012/10/1
Y1 - 2012/10/1
N2 - INTRODUCTION: This study examined the reliability of the scores of an assessment instrument, the Debriefing Assessment for Simulation in Healthcare (DASH), in evaluating the quality of health care simulation debriefings. The secondary objective was to evaluate whether the instrument's scores demonstrate evidence of validity. METHODS: Two aspects of reliability were examined: interrater reliability and internal consistency. To assess interrater reliability, intraclass correlations were calculated for 114 simulation instructors enrolled in webinar training courses in the use of the DASH. The instructors reviewed a series of 3 standardized debriefing sessions. To assess internal consistency, Cronbach α was calculated for this cohort. Finally, 1 measure of validity was examined by comparing the scores across 3 debriefings of different quality. RESULTS: Intraclass correlation coefficients for the individual elements were predominantly greater than 0.6. The overall intraclass correlation coefficient for the combined elements was 0.74. Cronbach α was 0.89 across the webinar raters. There were statistically significant differences among the ratings for the 3 standardized debriefings (P < 0.001). CONCLUSIONS: The DASH scores showed evidence of good reliability and preliminary evidence of validity. Additional work will be needed to assess the generalizability of the DASH based on the psychometrics of DASH data from other settings.
KW - Assessment
KW - Behaviorally anchored rating scale
KW - Debriefing
KW - Health care education
KW - Medical education
KW - Psychometrics
KW - Simulation
UR - http://www.scopus.com/inward/record.url?scp=84867204484&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84867204484&partnerID=8YFLogxK
U2 - 10.1097/SIH.0b013e3182620228
DO - 10.1097/SIH.0b013e3182620228
M3 - Article
C2 - 22902606
AN - SCOPUS:84867204484
VL - 7
SP - 288
EP - 294
JO - Simulation in Healthcare
JF - Simulation in Healthcare
SN - 1559-2332
IS - 5
ER -