Performance of trauma-trained large language models on surgical assessment questions: A new approach in resource identification
Published in: Surgery
Main Authors: , , , , , ,
Format: Journal Article
Language: English
Published: United States: Elsevier Inc, 23-09-2024
Summary: Large language models have successfully navigated simulated medical board examination questions. However, whether and how language models can be used in surgical education is less understood. Our study evaluates the efficacy of domain-specific large language models in curating study materials for surgical board-style questions.
We developed EAST-GPT and ACS-GPT, custom large language models with domain-specific knowledge drawn from published guidelines of the Eastern Association for the Surgery of Trauma and the American College of Surgeons Trauma Quality Programs. The performance of EAST-GPT, ACS-GPT, and an untrained GPT-4 was assessed on trauma-related questions from the Surgical Education and Self-Assessment Program (18th edition). The large language models were asked to choose answers and provide answer rationales. Rationales were assessed against an educational framework with 5 domains: accuracy, relevance, comprehensiveness, evidence-base, and clarity.
EAST-GPT was trained on 90 guidelines and ACS-GPT on 10. All large language models were tested on 62 trauma questions. EAST-GPT correctly answered 76%, whereas ACS-GPT answered 68% correctly. Both models outperformed ChatGPT-4 (P < .05), which answered 45% correctly. For reasoning, EAST-GPT achieved the greatest mean scores across all 5 educational framework metrics. ACS-GPT scored lower than ChatGPT-4 in comprehensiveness and evidence-base; however, these differences were not statistically significant.
Our study presents a novel methodology for identifying test-preparation resources by training a large language model to answer board-style multiple-choice questions. Both trained models outperformed ChatGPT-4, demonstrating that their answers were accurate, relevant, and evidence-based. Potential implications of such AI integration into surgical education must be explored.
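For readers who want to sanity-check the headline comparison, the sketch below shows one plausible way the reported accuracies could be compared against untrained ChatGPT-4. The abstract does not state which statistical test the authors used, and the correct-answer counts here are reconstructed from the reported percentages of the 62 questions, so both the test choice and the counts are assumptions rather than the study's actual analysis.

```python
# Minimal sketch (not the authors' code): compare each trained model's accuracy
# on the 62 SESAP trauma questions against untrained ChatGPT-4 using a
# chi-square test on a 2x2 contingency table. Correct-answer counts are
# reconstructed from the reported percentages (76%, 68%, 45% of 62) and are
# therefore approximate.
from scipy.stats import chi2_contingency

N_QUESTIONS = 62
correct_counts = {                               # approximate counts of correct answers
    "EAST-GPT": round(0.76 * N_QUESTIONS),       # ~47
    "ACS-GPT": round(0.68 * N_QUESTIONS),        # ~42
    "ChatGPT-4": round(0.45 * N_QUESTIONS),      # ~28
}

baseline = correct_counts["ChatGPT-4"]
for model in ("EAST-GPT", "ACS-GPT"):
    correct = correct_counts[model]
    table = [
        [correct, N_QUESTIONS - correct],        # trained model: correct / incorrect
        [baseline, N_QUESTIONS - baseline],      # untrained ChatGPT-4: correct / incorrect
    ]
    chi2, p, _, _ = chi2_contingency(table)
    print(f"{model} vs ChatGPT-4: chi2={chi2:.2f}, p={p:.4f}")
```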
ISSN: 0039-6060, 1532-7361
DOI: 10.1016/j.surg.2024.08.026