Task Offloading Optimization in Digital Twin Assisted MEC-Enabled Air-Ground IIoT 6G Networks

Bibliographic Details
Published in: IEEE Transactions on Vehicular Technology, Vol. 73, No. 11, pp. 17527-17542
Main Authors: Hevesli, Muhammet; Seid, Abegaz Mohammed; Erbad, Aiman; Abdallah, Mohamed
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01-11-2024
Description
Summary: The upcoming 6G paradigm leverages the digital twin (DT) to create a virtual replica of the network topology, facilitating real-time control in complex environments. In intelligent manufacturing, where ultra-low latency is essential, the Industrial Internet of Things (IIoT) adopts mobile edge computing (MEC) for task offloading, compensating for the limited energy and computational resources of its devices. In areas with network congestion or limited coverage, unmanned aerial vehicles (UAVs) extend the network reach and relay tasks to central nodes. To address the challenges of resource allocation and computational offloading in 6G-enabled IIoT air-ground networks, this paper presents a novel architecture called the Dynamic DT Edge Air-Ground Network (D2TEAGN). Utilizing DT for real-time state prediction, the architecture enables more adaptive and efficient resource utilization. We formulate the resource allocation problem as a mixed-integer nonlinear programming (MINLP) optimization, termed Joint UAV Trajectory, IIoT Device Association, Task Offloading, and Resource Allocation (JUTIA-TORA). To solve this problem under dynamic conditions, we transform it into a Markov decision process (MDP) and apply a deep deterministic policy gradient (DDPG) algorithm. Our simulation results indicate that, compared to existing actor-critic (AC), full-offload, and greedy algorithms, the proposed DDPG-based solution achieves the greatest energy savings while adhering to the constraints of task delay and edge computing capability.
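The solution pipeline described in the abstract (MINLP, reformulated as an MDP, solved with DDPG) follows the standard deep-reinforcement-learning recipe. Below is a minimal, illustrative PyTorch sketch of one DDPG update step. The state and action dimensions, network widths, and hyperparameters are placeholder assumptions, not values from the paper; the paper's actual state (DT-predicted network conditions), action (UAV trajectory, device association, offloading, and resource-allocation decisions), and reward (energy consumption under delay constraints) are not reproduced here.

```python
# Minimal DDPG sketch (PyTorch). All dimensions and hyperparameters below are
# illustrative placeholders, not the paper's JUTIA-TORA settings.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 16, 4        # hypothetical state/action sizes
GAMMA, TAU = 0.99, 0.005             # discount factor, soft-update rate

class Actor(nn.Module):
    """Deterministic policy mu(s): maps a state to a continuous action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM), nn.Tanh())  # actions bounded in [-1, 1]
    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    """Action-value function Q(s, a): scores a state-action pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1))
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

actor, critic = Actor(), Critic()
actor_t, critic_t = Actor(), Critic()            # slowly-tracking target networks
actor_t.load_state_dict(actor.state_dict())
critic_t.load_state_dict(critic.state_dict())
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_update(s, a, r, s2, done):
    """One gradient step on a minibatch (s, a, r, s', done) from a replay buffer."""
    with torch.no_grad():                        # TD target from target networks
        y = r + GAMMA * (1 - done) * critic_t(s2, actor_t(s2)).squeeze(-1)
    q_loss = nn.functional.mse_loss(critic(s, a).squeeze(-1), y)
    opt_c.zero_grad(); q_loss.backward(); opt_c.step()

    pi_loss = -critic(s, actor(s)).mean()        # deterministic policy gradient
    opt_a.zero_grad(); pi_loss.backward(); opt_a.step()

    for net, tgt in ((actor, actor_t), (critic, critic_t)):  # Polyak averaging
        for p, pt in zip(net.parameters(), tgt.parameters()):
            pt.data.mul_(1 - TAU).add_(TAU * p.data)

# Example call on a random placeholder minibatch of 32 transitions.
s  = torch.randn(32, STATE_DIM); a = torch.randn(32, ACTION_DIM)
r  = torch.randn(32); s2 = torch.randn(32, STATE_DIM); done = torch.zeros(32)
ddpg_update(s, a, r, s2, done)
```

DDPG suits this setting because the action space (UAV trajectory points, offloading ratios, resource shares) is continuous; the target networks and soft updates stabilize learning, which value-based methods over discretized actions would handle poorly.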
ISSN: 0018-9545, 1939-9359
DOI: 10.1109/TVT.2024.3420876