Exploring Diagnostic Precision and Triage Proficiency: A Comparative Study of GPT-4 and Bard in Addressing Common Ophthalmic Complaints

Roya Zandi, Joseph D. Fahey, Michael Drakopoulos, John M. Bryan, Siyuan Dong, Paul J. Bryar, Ann E. Bidwell, R. Chris Bowen, Jeremy A. Lavine, Rukhsana G. Mirza*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

18 Scopus citations

Abstract

In the modern era, patients often resort to the internet for answers to their health-related concerns, and clinics face challenges in providing timely responses to patient concerns. This has created a need to investigate the capabilities of AI chatbots for ophthalmic diagnosis and triage. In this in silico study, 80 simulated patient complaints in ophthalmology with varying urgency levels and clinical descriptors were entered into both ChatGPT (GPT-4) and Bard in a systematic 3-step submission process asking the chatbots to triage, diagnose, and evaluate urgency. Three ophthalmologists graded the chatbot responses. Chatbots were significantly better at ophthalmic triage than at diagnosis (90.0% appropriate triage vs. 48.8% correct leading diagnosis; p < 0.001), and GPT-4 outperformed Bard in appropriate triage recommendations (96.3% vs. 83.8%; p = 0.008) and grader satisfaction for patient use (81.3% vs. 55.0%; p < 0.001), with a lower rate of potentially harmful responses (6.3% vs. 20.0%; p = 0.010). Including more clinical descriptors improved diagnostic accuracy for both GPT-4 and Bard. These results indicate that chatbots may not need to recognize the correct diagnosis to provide appropriate ophthalmic triage, and that these tools may be useful in aiding patients or triage staff; however, they are not a replacement for professional ophthalmic evaluation or advice.
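The head-to-head percentages above amount to comparisons of proportions over the same 80 standardized prompts per chatbot. As a rough illustration only, the sketch below back-calculates counts from the reported percentages and applies a two-sided Fisher's exact test; the test choice and the reconstructed counts are assumptions, not a description of the paper's actual statistical methods.

    # Illustrative sketch only: counts are back-calculated from the reported
    # percentages (80 prompts per chatbot) and Fisher's exact test is assumed;
    # the study's actual statistical methodology may differ.
    from scipy.stats import fisher_exact

    N = 80  # simulated patient complaints submitted to each chatbot

    comparisons = {
        # outcome: (GPT-4 successes, Bard successes), per the abstract's percentages
        "appropriate triage": (77, 67),                 # 96.3% vs. 83.8%
        "grader satisfaction (patient use)": (65, 44),  # 81.3% vs. 55.0%
        "potentially harmful responses": (5, 16),       # 6.3% vs. 20.0%
    }

    for name, (gpt4, bard) in comparisons.items():
        table = [[gpt4, N - gpt4], [bard, N - bard]]
        _, p = fisher_exact(table, alternative="two-sided")
        print(f"{name}: GPT-4 {gpt4}/{N} vs. Bard {bard}/{N}, p = {p:.3f}")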

Original language: English (US)
Article number: 120
Journal: Bioengineering
Volume: 11
Issue number: 2
State: Published - Feb 2024

Funding

This work was funded in part by an unrestricted departmental grant from Research to Prevent Blindness. J.A.L. was supported by NIH grants K08 EY030923 and R01 EY034486 and by the Research to Prevent Blindness Sybil B. Harrington Career Development Award for Macular Degeneration. The funding agency had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. J.A.L. is a consultant for Genentech, Inc. R.C.B. is a cofounder of Stream Dx, Inc. R.G.M. has received research support from Google Inc. No party had any role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Keywords

  • ChatGPT
  • artificial intelligence
  • Bard
  • chatbots
  • large language models
  • ophthalmology
  • triage

ASJC Scopus subject areas

  • Bioengineering
