Block size estimation for data partitioning in HPC applications using machine learning techniques
Journal of Big Data, vol. 11, no. 19, 2024
Main Authors:
Format: Journal Article
Language: English
Published: 31-01-2024
Summary: The extensive use of HPC infrastructures and frameworks for running
data-intensive applications has led to a growing interest in data partitioning
techniques and strategies. In fact, application performance can be heavily
affected by how data are partitioned, which in turn depends on the selected
size for data blocks, i.e., the block size. Therefore, finding an effective
partitioning, i.e., a suitable block size, is a key strategy to speed up
parallel data-intensive applications and increase scalability. This paper
describes a methodology, namely BLEST-ML (BLock size ESTimation through Machine
Learning), for block size estimation that relies on supervised machine learning
techniques. The proposed methodology was evaluated by designing an
implementation tailored to dislib, a distributed computing library highly
focused on machine learning algorithms built on top of the PyCOMPSs framework.
We assessed the effectiveness of the provided implementation through an
extensive experimental evaluation considering different algorithms from dislib,
datasets, and infrastructures, including the MareNostrum 4 supercomputer. The
results obtained show the ability of BLEST-ML to efficiently determine a
suitable way to split a given dataset, demonstrating its applicability to
enabling the efficient execution of data-parallel applications in
high-performance environments.
DOI: 10.48550/arxiv.2211.10819
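
For context on the block size parameter discussed in the abstract, the following is a minimal, hypothetical sketch of how a block size choice drives data partitioning in dislib. It assumes dislib's public ds-array constructor, dislib.array(x, block_size), and a running PyCOMPSs runtime; the block size shown is purely illustrative and is not a value produced by BLEST-ML.

```python
# Minimal sketch: partitioning a dataset into blocks with dislib.
# Assumes dislib's ds-array API (dislib.array, dislib.cluster.KMeans) and a
# PyCOMPSs runtime; the block_size below is illustrative, not a BLEST-ML output.
import numpy as np
import dislib as ds
from dislib.cluster import KMeans

x = np.random.random((10_000, 100))          # in-memory dataset: 10k samples, 100 features

# block_size = (rows per block, columns per block): this is the parameter
# whose value BLEST-ML aims to estimate for a given dataset and infrastructure.
x_ds = ds.array(x, block_size=(2_000, 100))  # 5 row-blocks of 2000 x 100

km = KMeans(n_clusters=8)
km.fit(x_ds)                                 # blocks are processed as parallel tasks
```

As the abstract notes, this partitioning choice can heavily affect performance: too few blocks limit the available parallelism, while too many small blocks add scheduling overhead, which is why estimating a suitable block size automatically is valuable.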