Evaluation of the Appropriateness and Readability of ChatGPT-4 Responses to Patient Queries on Uveitis
Published in: Ophthalmology Science (Online), Vol. 5, No. 1, p. 100594
Format: Journal Article
Language: English
Published: Netherlands: Elsevier Inc., 01-01-2025
Summary: To compare the utility of ChatGPT-4 as an online uveitis patient education resource with that of existing patient education websites.
Evaluation of technology.
Not applicable.
The term “uveitis” was entered into the Google search engine, and the first 8 nonsponsored websites were selected for inclusion in the study. Information regarding uveitis for patients was extracted from the Healthline, Mayo Clinic, WebMD, National Eye Institute, Ocular Uveitis and Immunology Foundation, American Academy of Ophthalmology, Cleveland Clinic, and National Health Service websites. ChatGPT-4 was then prompted to generate responses about uveitis in both standard and simplified formats. To generate the simplified response, the following request was added to the prompt: “Please provide a response suitable for the average American adult, at a sixth-grade comprehension level.” Three dual fellowship-trained specialists, all masked to the sources, graded the appropriateness of the contents (extracted from the existing websites) and responses (generated by ChatGPT-4) in terms of personal preference, comprehensiveness, and accuracy. Additionally, 5 readability indices, including the Flesch Reading Ease, Flesch–Kincaid Grade Level, Gunning Fog Index, Coleman–Liau Index, and Simple Measure of Gobbledygook (SMOG) index, were calculated using an online calculator (Readable.com) to assess the ease of comprehension of each answer.
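As an aside, three of the readability indices named above have well-known published formulas, so the kind of score Readable.com reports can be sketched directly. The sketch below is an illustration only, not the study's actual tooling: the formula constants are the standard published ones, but the vowel-group syllable counter is a rough heuristic of my own (commercial calculators use dictionaries and more careful rules), so exact scores will differ from Readable.com's.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic (an assumption, not Readable.com's method):
    # count vowel groups, then drop a trailing silent "e".
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def readability(text: str) -> dict:
    # Sentence and word segmentation are deliberately simple here.
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z]+", text)
    n_words = max(len(words), 1)
    syllables = sum(count_syllables(w) for w in words)
    complex_words = sum(1 for w in words if count_syllables(w) >= 3)

    wps = n_words / sentences      # words per sentence
    spw = syllables / n_words      # syllables per word

    return {
        # Standard published formulas:
        "flesch_reading_ease": 206.835 - 1.015 * wps - 84.6 * spw,
        "flesch_kincaid_grade": 0.39 * wps + 11.8 * spw - 15.59,
        "gunning_fog": 0.4 * (wps + 100 * complex_words / n_words),
    }
```

Note the direction of the scales: Flesch Reading Ease is higher for easier text, whereas the grade-level indices (Flesch–Kincaid, Gunning Fog) are higher for harder text, which is why the study reports that a "higher educational level" was demanded by the standard ChatGPT-4 responses.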
Personal preference, accuracy, comprehensiveness, and readability of contents and responses about uveitis.
A total of 497 contents and responses, including 71 contents from existing websites, 213 standard responses, and 213 simplified responses from ChatGPT-4, were recorded and graded. Standard ChatGPT-4 responses were preferred and perceived to be more comprehensive by the dually trained (uveitis and retina) specialist ophthalmologists, while maintaining a similar accuracy level compared with the existing websites. Moreover, simplified ChatGPT-4 responses matched almost all existing websites in terms of personal preference, accuracy, and comprehensiveness. Notably, almost all readability indices suggested that standard ChatGPT-4 responses demand a higher educational level for comprehension, whereas simplified responses required a lower level of education compared with the existing websites.
This study shows that ChatGPT-4 can provide patients with an avenue to access comprehensive and accurate information about uveitis, tailored to their educational level.
The author(s) have no proprietary or commercial interest in any materials discussed in this article.
ISSN: 2666-9145
DOI: 10.1016/j.xops.2024.100594