Fast Pareto set approximation for multi-objective flexible job shop scheduling via parallel preference-conditioned graph reinforcement learning


Bibliographic Details
Published in: Swarm and Evolutionary Computation, Vol. 88, p. 101605
Main Authors: Su, Chupeng, Zhang, Cong, Wang, Chuang, Cen, Weihong, Chen, Gang, Xie, Longhan
Format: Journal Article
Language: English
Published: Elsevier B.V., 01-07-2024
Description
Summary: The Multi-Objective Flexible Job Shop Scheduling Problem (MOFJSP) is a complex challenge in manufacturing, requiring balancing multiple, often conflicting objectives. Traditional methods, such as Multi-Objective Evolutionary Algorithms (MOEAs), can be time-consuming and unsuitable for real-time applications. This paper introduces a novel Graph Reinforcement Learning (GRL) approach, named Preference-Conditioned GRL, which efficiently approximates the Pareto set for MOFJSP in a parallelized manner. By decomposing the MOFJSP into distinct sub-problems based on preferences and leveraging a parallel multi-objective training algorithm, the method efficiently produces high-quality Pareto sets, significantly outperforming MOEA methods in both solution quality and speed, especially for large-scale problems. Extensive experiments demonstrate the superiority of the approach, with remarkable results on large instances, showcasing its potential for real-time scheduling in dynamic manufacturing environments. Notably, for large instances (50 × 20), the approach outperforms MOEA baselines with remarkably shorter computation time (less than 1% of that of MOEA baselines). The robust generalization performance across various instances also highlights the practical value of the method for decision-makers seeking optimized production resource utilization.
Highlights:
• We propose a DRL method to solve the Multi-Objective Flexible Job Shop Scheduling Problem.
• Its model is preference-conditioned, ensuring the resolution of various sub-problems in MOFJSP.
• The DRL model is trained end-to-end with novel parallel Evolution Strategies.
• A preference-parallel inference algorithm is developed to generate the Pareto set efficiently.
• Extensive experiments indicate superiority regarding solution speed and quality.
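The decomposition idea in the summary can be illustrated with a minimal sketch: each preference (weight) vector defines a scalarized sub-problem, and solving one sub-problem per preference yields a set of solutions whose non-dominated subset approximates the Pareto set. The candidate schedules and objective values below are hypothetical stand-ins; in the paper, the per-preference step is a rollout of the preference-conditioned GRL policy, not a lookup over enumerated candidates.

```python
def scalarize(objectives, preference):
    """Weighted-sum scalarization of a tuple of objective values."""
    return sum(w * f for w, f in zip(preference, objectives))

def non_dominated(points):
    """Keep only Pareto-optimal points (minimization in all objectives)."""
    front = []
    for p in points:
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p))) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Illustrative candidate solutions with two objectives,
# e.g. (makespan, total workload); smaller is better for both.
candidates = [(10, 7), (8, 9), (12, 5), (9, 8), (11, 6)]

# Evenly spaced preference vectors decompose the problem into sub-problems.
preferences = [(w, 1 - w) for w in (0.0, 0.25, 0.5, 0.75, 1.0)]

# Solve each scalarized sub-problem (here: pick the best candidate).
solutions = {min(candidates, key=lambda c: scalarize(c, p)) for p in preferences}

# Filter to the non-dominated subset: the approximate Pareto set.
pareto_set = sorted(non_dominated(list(solutions)))
print(pareto_set)
```

Because the sub-problems are independent given their preference vectors, this loop over preferences is trivially parallelizable, which is the basis for the parallel inference the abstract describes.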
ISSN: 2210-6502
DOI: 10.1016/j.swevo.2024.101605