Energy-Efficient and Accelerated Resource Allocation in O-RAN Slicing Using Deep Reinforcement Learning and Transfer Learning
Published in: Cybernetics and Information Technologies (CIT), Vol. 24, No. 3, pp. 132-150
Main Authors: , ,
Format: Journal Article
Language: English
Published: Sciendo, 01-09-2024
Summary: Next Generation Wireless Networks (NGWNs) have two main components: Network Slicing (NS) and Open Radio Access Networks (O-RAN). NS is needed to handle diverse Quality of Service (QoS) requirements, while O-RAN provides an open environment for network vendors and Mobile Network Operators (MNOs). In recent years, Deep Reinforcement Learning (DRL) approaches have been proposed to solve key issues in NGWNs. The primary obstacles preventing DRL deployment are slow convergence and instability. Additionally, training these algorithms consumes substantial energy, producing carbon emissions that negatively impact the climate. This paper tackles the dynamic allocation of O-RAN radio resources for better QoS, faster convergence, stability, lower energy and power consumption, and reduced carbon emissions. First, we develop an agent with a newly designed latency-based reward function and a top-k filtration mechanism for actions. Then, we propose a policy Transfer Learning approach to accelerate agent convergence. We compare our model against two baseline models.
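The top-k action filtration mentioned in the summary can be sketched roughly as follows; the function name, the use of Q-value estimates, and the masking scheme are assumptions for illustration, not the paper's actual mechanism. The idea is that the agent restricts its choice to the k actions with the highest estimated values, pruning poor resource allocations before selection:

```python
import numpy as np

def top_k_action_filter(q_values, k):
    """Keep only the k highest-valued actions; mask the rest with -inf.

    Hypothetical sketch of a top-k action filtration step for a
    DRL resource-allocation agent.
    """
    q = np.asarray(q_values, dtype=float)
    k = min(k, q.size)
    top_k_idx = np.argpartition(q, -k)[-k:]   # indices of the k best actions
    masked = np.full_like(q, -np.inf)         # disallow all other actions
    masked[top_k_idx] = q[top_k_idx]
    return masked

# Greedy selection now only considers the filtered actions.
q_estimates = [0.2, 1.5, -0.3, 0.9, 0.4]
filtered = top_k_action_filter(q_estimates, k=2)
action = int(np.argmax(filtered))  # selects action 1 (value 1.5)
```

Under this sketch, exploration strategies such as epsilon-greedy would sample only among the unmasked actions, which is one plausible way such filtration could speed up convergence.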
ISSN: 1314-4081
DOI: 10.2478/cait-2024-0029