Assessing the Generalizability of a Clinical Machine Learning Model Across Multiple Emergency Departments

Bibliographic Details
Published in: Mayo Clinic Proceedings: Innovations, Quality & Outcomes, Vol. 6, No. 3, pp. 193-199
Main Authors: Ryu, Alexander J., Romero-Brufau, Santiago, Qian, Ray, Heaton, Heather A., Nestler, David M., Ayanian, Shant, Kingsley, Thomas C.
Format: Journal Article
Language: English
Published: Netherlands: Elsevier Inc., 01-06-2022
Description
Summary: To assess the generalizability of a clinical machine learning algorithm across multiple emergency departments (EDs). We obtained data on all ED visits at our health care system’s largest ED from May 5, 2018, to December 31, 2019. We also obtained data from 3 satellite EDs and 1 distant-hub ED from May 1, 2018, to December 31, 2018. A gradient-boosted machine model was trained on pooled data from the included EDs. To control for the effect of differing training set sizes, the data were randomly downsampled to match the size of our smallest ED’s data set, and a second model was trained on this downsampled, pooled data. Model performance was compared using the area under the receiver operating characteristic curve (AUC). Finally, site-specific models were trained and tested across all the sites, and feature importances were examined to understand the reasons for differing generalizability. The training data sets contained 1918 to 64,161 ED visits. The AUC for the pooled model ranged from 0.84 to 0.94 across the sites; performance decreased slightly when sample sizes were downsampled to match that of our smallest ED site. When site-specific models were trained and tested across all the sites, the AUCs ranged more widely, from 0.71 to 0.93. Within a single ED site, the performance of the 5 site-specific models was most variable for our largest and smallest EDs. Finally, several features were important in all site-specific models; however, the weights of these features differed across sites. A machine learning model for predicting hospital admission from the ED will generalize fairly well within a health care system but will still show significant differences in AUC performance across sites because of site-specific factors.
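
The evaluation design described above lends itself to a compact sketch. The following Python code is a minimal illustration, not the authors' actual pipeline: it assumes a visit-level pandas DataFrame with hypothetical "site" and "admitted" columns, and it uses scikit-learn's GradientBoostingClassifier as a stand-in for the study's gradient-boosted machine. It shows downsampling each site to the smallest site's N, per-site model training with a held-out split, the cross-site AUC comparison, and the per-site feature-importance comparison.

    # Hedged sketch of the study's cross-site evaluation; all column names and
    # helper functions here are hypothetical, chosen only for illustration.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    def downsample_to_smallest(df, site_col="site", random_state=0):
        """Randomly downsample each site's visits to the smallest site's N."""
        n_min = df[site_col].value_counts().min()
        return (df.groupby(site_col, group_keys=False)
                  .apply(lambda g: g.sample(n=n_min, random_state=random_state)))

    def train_site_models(df, feature_cols, label_col="admitted", site_col="site"):
        """Fit one gradient-boosted model per ED site, keeping a held-out test split."""
        models = {}
        for site, site_df in df.groupby(site_col):
            X_train, X_test, y_train, y_test = train_test_split(
                site_df[feature_cols], site_df[label_col],
                test_size=0.2, random_state=0, stratify=site_df[label_col])
            model = GradientBoostingClassifier().fit(X_train, y_train)
            models[site] = (model, X_test, y_test)
        return models

    def cross_site_auc(models):
        """AUC of every site-specific model on every site's held-out visits."""
        rows = []
        for train_site, (model, _, _) in models.items():
            for test_site, (_, X_test, y_test) in models.items():
                auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
                rows.append({"trained_on": train_site,
                             "tested_on": test_site, "auc": auc})
        return pd.DataFrame(rows)

    def feature_importance_by_site(models, feature_cols):
        """Per-site feature importances, for comparing feature weights across sites."""
        return pd.DataFrame({site: model.feature_importances_
                             for site, (model, _, _) in models.items()},
                            index=feature_cols)

Under these assumptions, the study's pooled model corresponds to fitting the same classifier once on the full (or downsampled) pooled DataFrame rather than per site, then scoring it against each site's held-out visits.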
ISSN: 2542-4548
DOI: 10.1016/j.mayocpiqo.2022.03.003