TCP Congestion Management Using Deep Reinforcement Trained Agent for RED


Bibliographic Details
Published in: Concurrency and Computation
Main Authors: Ali, Majid Hamid, Öztürk, Serkan
Format: Journal Article
Language: English
Published: 14-10-2024
Summary: Increasing data transmission volumes are causing more frequent and more severe network congestion. To absorb spikes in network traffic, substantially larger buffers have been incorporated into network devices. The resulting bufferbloat exacerbates network congestion. Combining the Transmission Control Protocol (TCP) congestion-control strategy with active queue management (AQM) can address this issue, but as congestion grows it becomes increasingly difficult to predict and fine-tune dynamic AQM/TCP systems to achieve acceptable performance. To shed new light on AQM, we apply deep reinforcement learning (DRL) techniques. With a model-free technique such as DRL-AQM, the queue can learn an appropriate drop policy from experience, much as people learn from trial and error. After training in a simple network scenario, DRL-AQM recognizes complex patterns in the data traffic model and applies them to improve performance across a wide variety of scenarios. In our approach, offline training precedes deployment; in many cases the model requires no further parameter tuning after training, and it remains effective regardless of network complexity. Minimizing buffer occupancy is a key goal of DRL-AQM, which adjusts automatically and continually to changes in network conditions.
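The core mechanism the abstract describes, an agent that maps observed queue occupancy to a packet-drop probability, can be sketched as follows. This is a minimal illustrative toy, not the authors' implementation: the class name `DRLAQMSketch`, the five occupancy bins, and the lookup-table "policy" (standing in for a trained DRL network) are all hypothetical choices made for this sketch.

```python
import random

class DRLAQMSketch:
    """Toy AQM agent: maps queue occupancy to a drop probability.

    The `policy` table below is a hypothetical stand-in for a trained
    DRL network; a real agent would learn this mapping offline, as the
    abstract describes, rather than hard-code it.
    """

    def __init__(self, capacity=100):
        self.capacity = capacity
        # Hypothetical learned policy: occupancy bin -> drop probability.
        # Fuller queues get aggressively higher drop probabilities,
        # keeping buffer occupancy (and thus queuing delay) low.
        self.policy = {0: 0.0, 1: 0.02, 2: 0.10, 3: 0.50, 4: 1.0}

    def state(self, queue_len):
        # Discretize queue occupancy into 5 coarse bins (0..4).
        return min(4, queue_len * 5 // self.capacity)

    def drop_probability(self, queue_len):
        # Look up the drop probability for the current state.
        return self.policy[self.state(queue_len)]

    def admit(self, queue_len, rng=random.random):
        # Admit the arriving packet unless a random draw falls
        # below the policy's drop probability.
        return rng() >= self.drop_probability(queue_len)
```

An empty queue admits every packet, while a nearly full one drops everything; the intermediate bins apply probabilistic early dropping in the spirit of RED, with the probabilities chosen by the (here simulated) learned policy instead of RED's fixed min/max thresholds.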
ISSN: 1532-0626, 1532-0634
DOI: 10.1002/cpe.8300