Investigating the impact of innovative AI chatbot on post‐pandemic medical education and clinical assistance: a comprehensive analysis

Bibliographic Details
Published in: ANZ Journal of Surgery, Vol. 94, No. 1-2, pp. 68-77
Main Authors: Xie, Yi; Seth, Ishith; Hunter‐Smith, David J.; Rozen, Warren M.; Seifman, Marc A.
Format: Journal Article
Language: English
Published: Melbourne: John Wiley & Sons Australia, Ltd (Blackwell Publishing Ltd), 01-02-2024
Description
Summary:
Background: The COVID‐19 pandemic has significantly disrupted the clinical experience and exposure of medical students and junior doctors. The integration of artificial intelligence (AI) into medical education has the potential to enhance learning and improve patient care. This study aimed to evaluate the effectiveness of three popular large language models (LLMs) as clinical decision‐making support tools for junior doctors.
Methods: A series of increasingly complex clinical scenarios was presented to ChatGPT, Google's Bard, and Bing's AI. Their responses were evaluated against standard guidelines; readability was assessed with the Flesch Reading Ease Score, the Flesch–Kincaid Grade Level, and the Coleman–Liau Index, and suitability with the modified DISCERN score. Lastly, the LLMs' outputs were rated on a Likert scale for accuracy, informativeness, and accessibility by three experienced specialists.
Results: In terms of readability and reliability, ChatGPT stood out among the three LLMs, recording the highest scores on the Flesch Reading Ease (31.2 ± 3.5), Flesch–Kincaid Grade Level (13.5 ± 0.7), Coleman–Liau Index (13), and DISCERN (62 ± 4.4). These results suggest significantly greater comprehensibility and closer alignment with clinical guidelines in the medical advice given by ChatGPT. Bard followed closely behind, with BingAI trailing in all categories. The only non‐significant differences (P > 0.05) were between the readability indices of ChatGPT and Bard, and between the Flesch Reading Ease scores of ChatGPT/Bard and BingAI.
Conclusion: This study demonstrates the potential utility of LLMs in fostering self‐directed and personalized learning, as well as in bolstering clinical decision‐making support for junior doctors. However, further development is needed before their integration into education.
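The three readability indices named in the Methods follow standard published formulas, so they can be reproduced from raw text. The Python sketch below is illustrative only: the regex tokenization and vowel-group syllable heuristic are simplifying assumptions, not the study's actual scoring pipeline, which may have used an established tool with dictionary-based syllable counts.

import re

def count_syllables(word: str) -> int:
    # Naive estimate: each run of consecutive vowels counts as one syllable.
    # Dictionary-based counters used by real readability tools are more accurate.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability_indices(text: str) -> dict:
    # Flesch Reading Ease, Flesch-Kincaid Grade Level, and Coleman-Liau Index
    # computed from the standard formulas over crude sentence/word splits.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    if not sentences or not words:
        raise ValueError("text must contain at least one sentence and one word")
    letters = sum(len(w) for w in words)
    syllables = sum(count_syllables(w) for w in words)

    wps = len(words) / len(sentences)      # words per sentence
    spw = syllables / len(words)           # syllables per word
    L = letters / len(words) * 100         # average letters per 100 words
    S = len(sentences) / len(words) * 100  # average sentences per 100 words

    return {
        "flesch_reading_ease": 206.835 - 1.015 * wps - 84.6 * spw,
        "flesch_kincaid_grade": 0.39 * wps + 11.8 * spw - 15.59,
        "coleman_liau_index": 0.0588 * L - 0.296 * S - 15.8,
    }

if __name__ == "__main__":
    sample = ("The patient presented with acute abdominal pain. "
              "Initial assessment suggested appendicitis.")
    print(readability_indices(sample))

Running this on a model's response yields all three scores in one pass; because of the simplified syllable heuristic, the numbers will differ slightly from those produced by established scoring libraries.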
Bibliography: Yi Xie and Ishith Seth have contributed equally as first authors.
ISSN: 1445-1433
1445-2197
DOI: 10.1111/ans.18666