Reliability and validity of assessing subspecialty level of faculty anesthesiologists' supervision of anesthesiology residents

Gildasio S. De Oliveira Jr., Franklin Dexter*, Jane M. Bialek, Robert J. McCarthy

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

BACKGROUND: Supervision of anesthesiology residents is a major responsibility of faculty (academic) anesthesiologists. Supervision can be evaluated daily for individual anesthesiologists using a 9-question instrument. Faculty anesthesiologists with lesser individual scores contribute to lesser departmental (global) scores. Low (<3, "frequent") department-wide evaluations of supervision are associated with more mistakes with negative consequences to patients. With the long-term aim for residency programs to be evaluated partly based on the quality of their resident supervision, we assessed the 9-item instrument's reliability and validity when used to compare anesthesia programs' rotations nationwide.

METHODS: One thousand five hundred residents in the American Society of Anesthesiologists' directory of anesthesia trainees were randomly selected to be participants. Residents were contacted via e-mail and requested to complete a Web-based survey. Nonrespondents were mailed a paper version of the survey.

RESULTS: Internal consistency of the supervision scale was excellent, with Cronbach's α = 0.909 (95% CI, 0.896-0.922, n = 641 respondents). Discriminant validity was found based on absence of rank correlation of supervision score with characteristics of the respondents and programs (all P > 0.10): age, hours worked per week, female, year of anesthesia training, weeks in the current rotation, sequence of survey response, size of residency class, and number of survey respondents from the current rotation and program. Convergent validity was found based on significant positive correlation between supervision score and variables related to safety culture (all P < 0.0001): "Overall perceptions of patient safety," "Teamwork within units," "Nonpunitive response to errors," "Handoffs and transitions," "Feedback and communication about error," "Communication openness," and rotation's "overall grade on patient safety." Convergent validity was found also based on significant negative correlation with variables related to the individual resident's burnout (all P < 0.0001): "I feel burnout from my work," "I have become more callous toward people since I took this job," and numbers of "errors with potential negative consequences to patients [that you have] made and/or witnessed." Usefulness was shown by supervision being predicted by the same 1 variable for each of 3 regression tree criteria: "Teamwork within [the rotation]" (e.g., "When one area in this rotation gets busy, others help out").

CONCLUSIONS: Evaluation of the overall quality of supervision of residents by faculty anesthesiologists depends on the reliability and validity of the instrument. Our results show that the 9-item de Oliveira Filho et al. supervision scale can be applied for overall (department, rotation) assessment of anesthesia training programs.

Original language: English (US)
Pages (from-to): 209-213
Number of pages: 5
Journal: Anesthesia and Analgesia
Volume: 120
Issue number: 1
State: Published - Jan 1 2015

ASJC Scopus subject areas

  • Anesthesiology and Pain Medicine

