Methodology to Create Analysis-Naive Holdout Records as well as Train and Test Records for Machine Learning Analyses in Healthcare
| Main Authors: | |
|---|---|
| Format: | Journal Article |
| Language: | English |
| Published: | 08-05-2022 |
| Subjects: | |
| Summary: | It is common for researchers to hold out data from a study pool for external validation as well as for future research, and the same holds for those conducting machine learning modeling research. For this discussion, the purpose of the holdout sample is to preserve data for research studies that will be analysis-naive and randomly selected from the full dataset. Analysis-naive records are records that are not used for testing or training machine learning (ML) models and that do not participate in any aspect of the current machine learning study. The methodology suggested for creating holdouts is a modification of k-fold cross-validation that takes randomization into account and efficiently produces a three-way split (holdout, test, and training) as part of the method itself. The paper also provides a working example using a set of automated functions in Python and some scenarios for applicability in healthcare. |
| DOI: | 10.48550/arxiv.2205.03987 |
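
This record does not include the paper's actual functions, but the general idea in the summary can be sketched with standard tooling. The snippet below is a minimal illustration only, assuming scikit-learn's `KFold` and `train_test_split`; the function name `three_way_split` and its parameters are hypothetical and are not taken from the paper.

```python
# Minimal sketch (not the paper's code): reserve one k-fold partition as an
# analysis-naive holdout, then split the remaining records into train and test.
import numpy as np
from sklearn.model_selection import KFold, train_test_split

def three_way_split(records, n_folds=5, holdout_fold=0, test_size=0.25, seed=42):
    """Return (holdout_idx, train_idx, test_idx) index arrays.

    One of the k folds is set aside as the analysis-naive holdout; the
    remaining records are split into train and test sets.
    """
    records = np.asarray(records)
    kf = KFold(n_splits=n_folds, shuffle=True, random_state=seed)
    # kf.split yields (train_index, fold_index); the fold indices are disjoint
    # and together cover every record exactly once.
    folds = [fold_idx for _, fold_idx in kf.split(records)]

    holdout_idx = folds[holdout_fold]  # never used for training or testing
    remaining_idx = np.concatenate(
        [f for i, f in enumerate(folds) if i != holdout_fold]
    )
    train_idx, test_idx = train_test_split(
        remaining_idx, test_size=test_size, random_state=seed, shuffle=True
    )
    return holdout_idx, train_idx, test_idx

if __name__ == "__main__":
    # Toy example: 20 synthetic record identifiers
    record_ids = np.arange(20)
    holdout, train, test = three_way_split(record_ids, n_folds=5)
    print("holdout:", sorted(record_ids[holdout]))
    print("train:  ", sorted(record_ids[train]))
    print("test:   ", sorted(record_ids[test]))
```

In this sketch the shuffled fold assignment with a fixed seed keeps the split random but reproducible, and the holdout fold is returned separately so it can be archived untouched for later studies while only the remaining records enter the current train/test workflow.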