Latency Aware Intelligent Task Offloading Scheme for Edge-Fog-Cloud Computing – a Review

Bibliographic Details
Published in: Informatika i avtomatizaciâ (Online), Vol. 23, No. 1, pp. 284–318
Main Authors: Swapna, B.; Divya, V.
Format: Journal Article
Language: English
Published: Russian Academy of Sciences, St. Petersburg Federal Research Center, 11-01-2024
Summary: The huge volume of data produced by IoT devices requires the processing power and storage capacity provided by cloud, edge, and fog computing systems. Each of these computing paradigms has benefits as well as drawbacks. Cloud computing provides greater storage and computational capability but increases connection delay. Edge computing and fog computing offer similar advantages with lower latency, but they have limited storage, capacity, and coverage. Initially, optimization techniques were employed to address the traffic offloading problem. However, conventional optimization cannot keep up with the tight latency requirements of decision-making in complex systems, which range from milliseconds down to sub-millisecond scales. As a result, machine learning algorithms, particularly reinforcement learning, are gaining popularity because they can quickly handle offloading problems in dynamic situations with partially unknown data. We conduct an analysis of the literature to examine the different techniques used in latency-aware intelligent task offloading schemes for cloud, edge, and fog computing. The lessons learned from these surveys are then presented in this report. Lastly, we identify additional avenues for study and the problems that must be overcome to achieve the lowest latency in a task offloading system.
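To illustrate the reinforcement-learning approach to offloading mentioned in the abstract, the sketch below shows a minimal tabular Q-learning agent that learns where to execute a task (locally, at the edge, in the fog, or in the cloud) from observed latencies. The state space, latency model, and all parameter values are illustrative assumptions and are not taken from the reviewed article or any specific scheme it surveys.

```python
# Minimal Q-learning sketch for latency-aware task offloading.
# The targets, load states, and latency model below are hypothetical.
import random

TARGETS = ["local", "edge", "fog", "cloud"]          # candidate offloading destinations
LOADS = ["low", "high"]                              # coarse network-load states

# Q-table: (network_load, target) -> expected reward (negative latency)
Q = {(load, t): 0.0 for load in LOADS for t in TARGETS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2                # learning rate, discount, exploration rate

def observed_latency_ms(load, target):
    """Hypothetical latency model: cloud is slow, edge/fog degrade under high load."""
    base = {"local": 60, "edge": 15, "fog": 25, "cloud": 120}[target]
    penalty = 40 if (load == "high" and target in ("edge", "fog")) else 0
    return base + penalty + random.uniform(-5, 5)

def choose_target(load):
    """Epsilon-greedy selection over offloading targets."""
    if random.random() < epsilon:
        return random.choice(TARGETS)
    return max(TARGETS, key=lambda t: Q[(load, t)])

for step in range(5000):
    load = random.choice(LOADS)                      # environment state for this task
    target = choose_target(load)
    reward = -observed_latency_ms(load, target)      # lower latency means higher reward
    next_load = random.choice(LOADS)
    best_next = max(Q[(next_load, t)] for t in TARGETS)
    # Standard Q-learning update
    Q[(load, target)] += alpha * (reward + gamma * best_next - Q[(load, target)])

# Inspect the learned offloading policy per load state
for load in LOADS:
    print(load, "->", max(TARGETS, key=lambda t: Q[(load, t)]))
```

Under these assumptions the agent converges to offloading to the edge or fog when the network load is low and to alternatives when congestion makes them slower, which is the kind of adaptive decision-making the abstract attributes to reinforcement-learning-based schemes.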
ISSN: 2713-3192; 2713-3206
DOI: 10.15622/ia.23.1.10