An automatic kriging machine learning method to calibrate meta-heuristic algorithms for solving optimization problems
Published in: Engineering Applications of Artificial Intelligence, Vol. 113, p. 104940
Main Authors:
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01-08-2022
Subjects:
Summary: For years, meta-heuristic algorithms have been widely studied and many improved versions have been developed, from the evolution of swarm topologies in the Particle Swarm Optimization algorithm to the application of machine learning to Differential Evolution algorithms. However, the tuning of the fundamental meta-heuristic parameters has received less attention, even though it can lead to significant improvements in the convergence accuracy of these algorithms. This paper develops an automated methodology to calibrate the parameters of population-based meta-heuristic algorithms for optimization problems. Based on a kriging estimation of the best combination of parameters, the Automated parameter tuning of Meta-heuristics (AptM) methodology provides the optimal algorithm setup for each considered problem, leading to better convergence accuracy. The proposed AptM methodology is used to tune three different meta-heuristic algorithms, each applied to twelve unimodal or multimodal mathematical objective functions. AptM performance is assessed by comparison with classical setups commonly used in the literature. The numerical results show that the AptM methodology significantly improves the convergence accuracy of meta-heuristics, with average improvements of 62.02%, 69.12% and 64.94% on optimization problems of dimensions 10, 30 and 50, respectively. An experimental criterion based on the convergence accuracy of the AptM methodology relative to the classical setups is defined to assess AptM performance and to compare the methodology over the base-set. The AptM methodology yields a significant improvement of algorithm performance on 97.2% of the tested problems.
ISSN: 0952-1976, 1873-6769
DOI: 10.1016/j.engappai.2022.104940
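
The summary above describes the core idea of AptM only at a high level: evaluate a few parameter setups of a meta-heuristic, build a kriging (Gaussian process) surrogate of the achieved convergence accuracy over the parameter space, and select the predicted-best setup. The following is a minimal sketch of that idea, not the authors' AptM implementation; the simplified PSO routine, the (w, c) parameter ranges, the sphere objective, and the use of scikit-learn's GaussianProcessRegressor are all illustrative assumptions.

```python
# Minimal sketch (not the authors' AptM implementation): tune two PSO
# parameters (inertia weight w and a shared acceleration coefficient c)
# with a kriging / Gaussian process surrogate of the convergence accuracy.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def sphere(x):
    """Unimodal benchmark objective (illustrative choice)."""
    return float(np.sum(x**2))

def run_pso(w, c, dim=10, n_particles=20, n_iter=200):
    """Very small PSO whose final accuracy depends on the setup (w, c)."""
    pos = rng.uniform(-5, 5, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([sphere(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c * r1 * (pbest - pos) + c * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([sphere(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return float(pbest_val.min())  # best objective value reached

# 1) Sample a few parameter setups and measure the accuracy each achieves.
setups = rng.uniform([0.2, 0.5], [0.9, 2.5], size=(15, 2))   # (w, c) samples
scores = np.array([run_pso(w, c) for w, c in setups])

# 2) Fit a kriging (Gaussian process) surrogate over the parameter space.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(setups, np.log10(scores + 1e-12))                     # log scale for stability

# 3) Return the setup the surrogate predicts to be best on a dense grid.
w_grid, c_grid = np.meshgrid(np.linspace(0.2, 0.9, 40), np.linspace(0.5, 2.5, 40))
candidates = np.column_stack([w_grid.ravel(), c_grid.ravel()])
best = candidates[np.argmin(gp.predict(candidates))]
print("predicted-best setup (w, c):", best)
```

In an actual tuning study, step 3 would feed the selected setup back into full-length runs of the meta-heuristic on each benchmark problem, which is the kind of per-problem calibration the abstract attributes to AptM.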