Transfer learning-accelerated network slice management for next generation services
Published in: Computer Communications, Vol. 228, p. 107937
Main Authors:
Format: Journal Article
Language: English
Published: Elsevier B.V., 01-12-2024
Summary: The current trend in user services places an ever-growing demand for higher data rates, near-real-time latencies, and near-perfect quality of service. To meet such demands, fundamental changes were made to the traditional radio access network (RAN), introducing Open RAN (O-RAN). This new paradigm is based on a virtualized and intelligent RAN architecture. However, with the increased complexity of 5G applications, traditional application-specific placement techniques have reached a bottleneck. Our paper presents a Transfer Learning (TL) augmented Reinforcement Learning (RL) based network slicing (NS) solution targeting more effective placement and reduced downtime for prolonged slice deployments. To achieve this, we propose an approach based on creating a robust and dynamic repository of specialized RL agents and network slices geared towards popular user service types such as eMBB, URLLC, and mMTC. The proposed solution consists of a heuristic-controlled two-module ML Engine and a repository. The objective function is formulated to minimize the downtime incurred by the VNFs hosted on commercial off-the-shelf (COTS) servers. The performance of the proposed system is evaluated against traditional approaches using industry-standard 5G traffic datasets. The evaluation results show that the proposed solution consistently achieves lower downtime than traditional algorithms.
ISSN: 0140-3664
DOI: 10.1016/j.comcom.2024.107937
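
Illustration (not from the article): the summary above describes a repository of specialized RL agents per slice type (eMBB, URLLC, mMTC) that is reused, via transfer learning, to warm-start placement agents and thereby reduce VNF downtime. The sketch below shows one minimal way such a repository could look; the tabular Q-learning agent, the class names (TabularQAgent, AgentRepository), and the state/action sizes are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the paper's code): a repository of pre-trained RL
# policies keyed by slice type, cloned to warm-start a new placement agent
# instead of training it from scratch.
import copy
import random


class TabularQAgent:
    """Minimal tabular Q-learning agent for VNF-to-server placement."""

    def __init__(self, n_states, n_actions, lr=0.1, gamma=0.95, eps=0.1):
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.lr, self.gamma, self.eps = lr, gamma, eps

    def act(self, state):
        # Epsilon-greedy choice over candidate COTS servers.
        if random.random() < self.eps:
            return random.randrange(len(self.q[state]))
        row = self.q[state]
        return max(range(len(row)), key=row.__getitem__)

    def update(self, s, a, reward, s_next):
        # Standard Q-learning update toward the bootstrapped target.
        target = reward + self.gamma * max(self.q[s_next])
        self.q[s][a] += self.lr * (target - self.q[s][a])


class AgentRepository:
    """Stores specialized agents per slice type and clones them on demand."""

    def __init__(self):
        self._agents = {}

    def store(self, slice_type, agent):
        self._agents[slice_type] = agent

    def warm_start(self, slice_type, n_states, n_actions):
        # Transfer-learning step: copy the stored specialist's Q-table so the
        # new agent starts from learned placement knowledge; fall back to a
        # fresh agent when no specialist exists for this slice type.
        if slice_type in self._agents:
            return copy.deepcopy(self._agents[slice_type])
        return TabularQAgent(n_states, n_actions)


if __name__ == "__main__":
    repo = AgentRepository()
    repo.store("URLLC", TabularQAgent(n_states=16, n_actions=4))
    # New URLLC slice request: reuse the stored specialist rather than
    # retraining it from scratch.
    agent = repo.warm_start("URLLC", n_states=16, n_actions=4)
    print(agent.act(state=0))
```

Reusing a trained specialist in this way, rather than learning each placement policy from zero, is the kind of warm start the summary associates with faster slice placement and lower downtime.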