Learning-driven Evolutionary Optimization for the Traveling Salesman Problem

Bibliographic Details
Published in: 2024 10th International Conference on Control, Decision and Information Technologies (CoDIT), pp. 1347-1351
Main Authors: Mejri, Imen; Layeb, Safa Bhar; Benslimane, Maryem
Format: Conference Proceeding
Language: English
Published: IEEE, 01-07-2024
Description
Summary: Deep Reinforcement Learning (DRL) has achieved remarkable results across various domains, such as image recognition and automation. Nevertheless, its potential in logistics and transportation, particularly for routing problems, remains largely untapped. In contrast, Evolutionary Algorithms (EAs) have been widely adopted for solving combinatorial optimization problems. Surprisingly, the combination of EA and DRL methods for combinatorial optimization has not been extensively explored in the literature. Motivated by these research gaps, this study presents a novel method called Evolutionary Reinforcement Learning (ERL) for addressing the Traveling Salesman Problem (TSP). To enhance the policy produced by a deep neural network, we exploit the collaborative potential of the EA and DRL frameworks. Notably, the weights of the actor component play a crucial role, especially in approaches that are not primarily policy-based. By harnessing the capabilities of the EA, we build a population of actor weights and integrate it into the DRL framework to substantially improve TSP solutions. Employing the Genetic Algorithm (GA) as the EA, we introduce a novel ERL-based approach, ERL-GA. Computational experiments show that ERL-GA outperforms the baseline DRL framework.
ISSN:2576-3555
DOI:10.1109/CoDIT62066.2024.10708338
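
Illustration (not part of the record): the summary describes evolving a population of actor weights with a genetic algorithm inside a DRL pipeline for the TSP. The paper's exact architecture is not given here, so the Python sketch below is only a hedged approximation of that idea: a toy greedy "actor" parameterized by a small weight vector builds tours, and a GA (elitism, tournament selection, blend crossover, Gaussian mutation) evolves those weights with negative tour length as fitness. All names (actor_tour, evolve, the two hand-picked features) are hypothetical and do not come from the paper; in particular, the authors' deep neural actor and the DRL training loop are replaced here by a linear scorer so the example stays self-contained.

import numpy as np

rng = np.random.default_rng(0)

def tour_length(tour, dist):
    # Total length of the closed tour under the distance matrix.
    return sum(dist[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def actor_tour(weights, dist):
    # Toy greedy "actor": scores each unvisited city with a linear model over two
    # simple features (distance from the current city, distance from the depot)
    # and always moves to the lowest-scoring city.
    unvisited = list(range(1, dist.shape[0]))
    tour, current = [0], 0
    while unvisited:
        feats = np.array([[dist[current, j], dist[0, j]] for j in unvisited])
        current = unvisited.pop(int(np.argmin(feats @ weights)))
        tour.append(current)
    return tour

def fitness(weights, dist):
    # Higher is better: negative length of the tour the actor builds.
    return -tour_length(actor_tour(weights, dist), dist)

def evolve(coords, pop_size=30, generations=50, sigma=0.1):
    # GA over actor weight vectors: elitism, tournament selection,
    # blend crossover, and Gaussian mutation.
    dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    pop = rng.normal(size=(pop_size, 2))
    for _ in range(generations):
        fit = np.array([fitness(w, dist) for w in pop])
        new_pop = [pop[int(np.argmax(fit))]]  # keep the current best individual
        while len(new_pop) < pop_size:
            # Tournament selection of two parents among three random candidates each.
            p1 = pop[max(rng.choice(pop_size, 3, replace=False), key=lambda i: fit[i])]
            p2 = pop[max(rng.choice(pop_size, 3, replace=False), key=lambda i: fit[i])]
            alpha = rng.random()
            child = alpha * p1 + (1 - alpha) * p2                # blend crossover
            child += rng.normal(scale=sigma, size=child.shape)   # Gaussian mutation
            new_pop.append(child)
        pop = np.array(new_pop)
    best = max(pop, key=lambda w: fitness(w, dist))
    return best, actor_tour(best, dist), dist

if __name__ == "__main__":
    coords = rng.random((20, 2))  # random 20-city instance in the unit square
    weights, tour, dist = evolve(coords)
    print("best tour length:", round(tour_length(tour, dist), 3))

In an ERL-GA setup along the lines the summary suggests, the evolved individuals would instead be the weights of the DRL actor network, and GA generations would be interleaved with gradient-based policy updates; this sketch only shows the evolutionary half of that loop.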