Multiobjective Offloading Optimization in Fog Computing Using Deep Reinforcement Learning


Bibliographic Details
Published in: Journal of Computer Networks and Communications, Vol. 2024, No. 1
Main Authors: Mashal, Hojjat; Rezvani, Mohammad Hossein
Format: Journal Article
Language: English
Published: New York: Hindawi Limited, 26-09-2024
Description
Summary: Edge computing allows IoT tasks to be processed by devices with spare processing capacity at the network's edge, close to the IoT devices, instead of being sent to cloud servers. Moreover, 5G-enabled architectures such as the Fog Radio Access Network (F-RAN) use smart devices to bring the delay down to even a few milliseconds, which matters especially in latency-sensitive applications such as online digital games. However, a trade-off must be made between delay and energy consumption: if too many tasks are processed locally on edge devices or fog servers, energy consumption increases, because mobile devices such as smartphones and tablets have limited battery capacity. This paper proposes a Deep Reinforcement Learning (DRL) method for offloading optimization. In designing the states, we consider three critical components: memory consumption, the number of required CPU cycles, and the network mode. This makes the model aware of each task's workload and keeps it close to real-world requirements. For each mobile device that submits a task to the system, we define a reward that combines the total task delay and the energy consumption. The output of our DRL model specifies to which edge, fog, or cloud device each task should be offloaded. The results show that the DRL technique wastes fewer resources than conventional RL when the number of tasks is very high. In addition, DRL consumes 30% fewer network resources than the FIFO method. As a result, DRL provides a better trade-off between offloading and local execution than the other methods.
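
The short Python sketch below illustrates the state/action/reward structure described in the summary: the state carries memory consumption, required CPU cycles, and network mode; the action selects a local, fog, or cloud execution target; and the reward is a negative weighted sum of total delay and energy consumption. It is not the authors' implementation; the node names, numeric constants, toy delay/energy model, and the use of a tabular Q-function (a simplified stand-in for the paper's deep network) are assumptions made to keep the example self-contained.

    # Hypothetical sketch (not the paper's code) of the state/action/reward design.
    import random
    from dataclasses import dataclass

    @dataclass
    class TaskState:
        memory_mb: float    # memory the task consumes (assumed unit)
        cpu_cycles: float   # CPU cycles the task requires, in megacycles (assumed unit)
        network_mode: int   # e.g. 0 = WiFi, 1 = 4G, 2 = 5G (assumed encoding)

    # Candidate execution targets: local device, fog nodes, cloud (assumed set).
    ACTIONS = ["local", "fog_0", "fog_1", "cloud"]

    def reward(delay_s: float, energy_j: float,
               w_delay: float = 0.5, w_energy: float = 0.5) -> float:
        """Negative weighted sum of total delay and energy, so lower cost => higher reward."""
        return -(w_delay * delay_s + w_energy * energy_j)

    def simulate_offload(state: TaskState, action: str) -> tuple[float, float]:
        """Toy delay/energy model for illustration only (all constants assumed)."""
        if action == "local":
            delay = state.cpu_cycles / 500.0            # slower, battery-powered CPU
            energy = 0.8 * state.cpu_cycles / 500.0
        elif action.startswith("fog"):
            delay = state.cpu_cycles / 2000.0 + 0.005   # faster CPU plus a short network hop
            energy = 0.1                                # mainly transmission energy
        else:  # cloud
            delay = state.cpu_cycles / 8000.0 + 0.05    # fastest CPU plus WAN latency
            energy = 0.15
        return delay, energy

    # Epsilon-greedy action selection over a (here untrained) tabular Q-function.
    q_table: dict[tuple, list[float]] = {}

    def choose_action(state: TaskState, epsilon: float = 0.1) -> int:
        key = (round(state.memory_mb), round(state.cpu_cycles), state.network_mode)
        q = q_table.setdefault(key, [0.0] * len(ACTIONS))
        if random.random() < epsilon:
            return random.randrange(len(ACTIONS))
        return max(range(len(ACTIONS)), key=lambda a: q[a])

    if __name__ == "__main__":
        s = TaskState(memory_mb=128, cpu_cycles=1000, network_mode=2)
        a = choose_action(s)
        d, e = simulate_offload(s, ACTIONS[a])
        print(ACTIONS[a], reward(d, e))

In this layout, training a deep Q-network would replace the dictionary lookup with a learned mapping from the three state components to per-action values, while the reward and action set stay as sketched.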
ISSN: 2090-7141; 2090-715X
DOI: 10.1155/2024/6255511