TY - JOUR
T1 - Evaluation of ChatGPT-Generated Educational Patient Pamphlets for Common Interventional Radiology Procedures
AU - Kooraki, Soheil
AU - Hosseiny, Melina
AU - Jalili, Mohammad H.
AU - Rahsepar, Amir Ali
AU - Imanzadeh, Amir
AU - Kim, Grace Hyun
AU - Hassani, Cameron
AU - Abtin, Fereidoun
AU - Moriarty, John M.
AU - Bedayat, Arash
N1 - Publisher Copyright:
© 2024 The Association of University Radiologists
PY - 2024/11
Y1 - 2024/11
N2 - Rationale and Objectives: This study aimed to evaluate the accuracy and reliability of educational patient pamphlets created by ChatGPT, a large language model, for common interventional radiology (IR) procedures. Methods and Materials: Twenty frequently performed IR procedures were selected, and five users were tasked to independently request ChatGPT to generate educational patient pamphlets for each procedure using identical commands. Subsequently, two independent radiologists assessed the content, quality, and accuracy of the pamphlets. The review focused on identifying potential errors and inaccuracies, as well as the consistency of the pamphlets. Results: In a thorough analysis of the educational pamphlets, we identified shortcomings in 30% (30/100) of pamphlets, with a total of 34 specific inaccuracies, including missing information about procedural sedation (10/34) and inaccuracies related to specific procedure-related complications (8/34). A keyword co-occurrence network showed consistent themes within each group of pamphlets, while a line-by-line comparison at the level of users and across different procedures showed statistically significant inconsistencies (P < 0.001). Conclusion: ChatGPT-generated educational pamphlets demonstrated potential clinical relevance and fairly consistent terminology; however, the pamphlets were not entirely accurate and exhibited some shortcomings and inter-user structural variability. To ensure patient safety, future improvements and refinements in large language models are warranted, while maintaining human supervision and expert validation.
AB - Rationale and Objectives: This study aimed to evaluate the accuracy and reliability of educational patient pamphlets created by ChatGPT, a large language model, for common interventional radiology (IR) procedures. Methods and Materials: Twenty frequently performed IR procedures were selected, and five users were tasked to independently request ChatGPT to generate educational patient pamphlets for each procedure using identical commands. Subsequently, two independent radiologists assessed the content, quality, and accuracy of the pamphlets. The review focused on identifying potential errors and inaccuracies, as well as the consistency of the pamphlets. Results: In a thorough analysis of the educational pamphlets, we identified shortcomings in 30% (30/100) of pamphlets, with a total of 34 specific inaccuracies, including missing information about procedural sedation (10/34) and inaccuracies related to specific procedure-related complications (8/34). A keyword co-occurrence network showed consistent themes within each group of pamphlets, while a line-by-line comparison at the level of users and across different procedures showed statistically significant inconsistencies (P < 0.001). Conclusion: ChatGPT-generated educational pamphlets demonstrated potential clinical relevance and fairly consistent terminology; however, the pamphlets were not entirely accurate and exhibited some shortcomings and inter-user structural variability. To ensure patient safety, future improvements and refinements in large language models are warranted, while maintaining human supervision and expert validation.
KW - ChatGPT
KW - Co-occurrence network graph
KW - Education
KW - Interventional radiology
KW - Large language models
UR - http://www.scopus.com/inward/record.url?scp=85195098892&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85195098892&partnerID=8YFLogxK
U2 - 10.1016/j.acra.2024.05.024
DO - 10.1016/j.acra.2024.05.024
M3 - Article
C2 - 38839458
AN - SCOPUS:85195098892
SN - 1076-6332
VL - 31
SP - 4548
EP - 4553
JO - Academic Radiology
JF - Academic Radiology
IS - 11
ER -