TY - JOUR
T1 - Cognitive, Social and Environmental Sources of Bias in Clinical Performance Ratings
AU - Williams, Reed G.
AU - Klamen, Debra A.
AU - McGaghie, William C.
PY - 2003
AB - Background: Global ratings based on observing convenience samples of clinical performance form the primary basis for appraising the clinical competence of medical students, residents, and practicing physicians. This review explores cognitive, social, and environmental factors that contribute unwanted sources of score variation (bias) to clinical performance evaluations. Summary: Raters have a one- or two-dimensional conception of clinical performance and do not recall details. Good news is reported more quickly and fully than bad news, leading to overly generous performance evaluations. Training has little impact on the accuracy and reproducibility of clinical performance ratings. Conclusions: Clinical performance evaluation systems should assure broad, systematic sampling of clinical situations; keep rating instruments short; encourage immediate feedback for teaching and learning purposes; encourage maintenance of written performance notes to support delayed clinical performance ratings; give raters feedback about their ratings; supplement formal with unobtrusive observation; make promotion decisions via group review; supplement traditional observation with other clinical skills measures (e.g., Objective Structured Clinical Examination); encourage rating of specific performances rather than global ratings; and establish the meaning of ratings in the manner used to set normal limits for clinical diagnostic investigations.
UR - http://www.scopus.com/inward/record.url?scp=0242521591&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=0242521591&partnerID=8YFLogxK
DO - 10.1207/S15328015TLM1504_11
M3 - Review article
C2 - 14612262
AN - SCOPUS:0242521591
SN - 1040-1334
VL - 15
SP - 270
EP - 292
JO - Teaching and Learning in Medicine
JF - Teaching and Learning in Medicine
IS - 4
ER -