Deep and reinforcement learning for automated task scheduling in large‐scale cloud computing systems

Bibliographic Details
Published in: Concurrency and Computation: Practice and Experience, Vol. 33, No. 23
Main Authors: Rjoub, Gaith; Bentahar, Jamal; Abdel Wahab, Omar; Saleh Bataineh, Ahmed
Format: Journal Article
Language:English
Published: Hoboken: Wiley Subscription Services, Inc., 10-12-2021
Description
Summary: Cloud computing is undeniably becoming the main computing and storage platform for today's major workloads. From Internet of Things and Industry 4.0 workloads to big data analytics and decision‐making jobs, cloud systems receive a massive number of tasks daily that need to be mapped simultaneously and efficiently onto cloud resources. Deriving a task scheduling mechanism that minimizes both task execution delay and cloud resource utilization is therefore of prime importance. Recently, the concept of cloud automation has emerged to reduce manual intervention and improve resource management in large‐scale cloud computing workloads. In this article, we capitalize on this concept and propose four deep and reinforcement learning‐based scheduling approaches that automate the scheduling of large‐scale workloads onto cloud computing resources while reducing both resource consumption and task waiting time. These approaches are: reinforcement learning (RL), deep Q networks (DQN), recurrent neural network long short‐term memory (RNN‐LSTM), and deep reinforcement learning combined with LSTM (DRL‐LSTM). Experiments conducted on real‐world datasets from the Google Cloud Platform revealed that DRL‐LSTM outperforms the other three approaches. The experiments also showed that DRL‐LSTM reduces CPU usage cost by up to 67% compared with shortest job first (SJF), and by up to 35% compared with both round robin (RR) and improved particle swarm optimization (PSO). Moreover, the DRL‐LSTM solution decreases RAM usage cost by up to 72% compared with SJF, by up to 65% compared with RR, and by up to 31.25% compared with improved PSO.
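
To make the DRL‐LSTM idea concrete, below is a minimal, hypothetical sketch, not the authors' implementation: an LSTM summarizes a short history of task and virtual machine observations, and a linear head outputs one Q-value per candidate VM, from which the scheduler greedily picks a placement. The LSTMQNetwork name, the observation features, and all dimensions are assumptions made for illustration.

import torch
import torch.nn as nn

# Illustrative sketch only: an LSTM encodes a history of observations
# (e.g., task size, queue length, per-VM CPU/RAM utilization), and a
# linear layer scores each candidate VM with a Q-value.
class LSTMQNetwork(nn.Module):
    def __init__(self, obs_dim: int, hidden_dim: int, num_vms: int):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden_dim, batch_first=True)
        self.q_head = nn.Linear(hidden_dim, num_vms)

    def forward(self, obs_seq: torch.Tensor) -> torch.Tensor:
        # obs_seq: (batch, seq_len, obs_dim)
        out, _ = self.lstm(obs_seq)
        # Use the last hidden state to score each VM for the incoming task.
        return self.q_head(out[:, -1, :])

# Greedy placement for one task (exploration and training loop omitted).
net = LSTMQNetwork(obs_dim=8, hidden_dim=64, num_vms=4)
history = torch.randn(1, 10, 8)         # last 10 observations (dummy data)
vm = net(history).argmax(dim=1).item()  # VM with the highest Q-value
print(f"schedule task on VM {vm}")

In a full agent, the CPU/RAM cost and task waiting time resulting from each placement would feed a reward signal used to update the Q-network, which is the mechanism behind the cost reductions the article reports.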
Bibliography: Funding information
Defence Research and Development Canada (DRDC), Innovation for Defence Excellence and Security (IDEaS); Natural Sciences and Engineering Research Council of Canada (NSERC), Discovery grant
ISSN: 1532-0626
EISSN: 1532-0634
DOI: 10.1002/cpe.5919