Benchmarking metaheuristic algorithms: a comprehensive review of test functions, real-world problems, and evaluation metrics
Keywords:
Metaheuristics, Benchmarking, Test Functions, CEC, Performance Evaluation

Abstract
The proliferation of metaheuristic algorithms for solving complex optimization problems has necessitated robust and standardized benchmarking practices. A fair and comprehensive evaluation is crucial for validating an algorithm's performance, understanding its strengths and weaknesses, and guiding future algorithmic development. This paper provides a comprehensive review of the landscape of benchmarking in metaheuristic optimization. We systematically categorize and detail the primary types of benchmark problems, including the classic set of 23 mathematical test functions, real-world engineering problems, specialized CEC benchmark suites, combinatorial optimization benchmarks, and multi-objective problems. For each category, we present the fundamental mathematical formulations and discuss their specific characteristics and challenges. Furthermore, we outline the standard evaluation metrics used to quantify algorithmic performance. Finally, we discuss current challenges in benchmarking, such as the No Free Lunch theorem, the issue of overfitting to test suites, and the need for more diverse and real-world benchmarks. This review serves as a foundational guide for researchers and practitioners in selecting appropriate benchmarks and conducting rigorous, reproducible evaluations of metaheuristic algorithms.
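
For readers unfamiliar with what such a benchmarking setup looks like in practice, the short sketch below illustrates two of the classic test functions (the unimodal Sphere and the multimodal Rastrigin) and the standard practice of reporting the mean and standard deviation of the best objective value over independent runs. This is an illustrative sketch only, not code from the paper: the random_search routine stands in for an arbitrary metaheuristic, and the dimensionality, search bounds, and run count are assumed values.

import numpy as np

def sphere(x):
    # Unimodal Sphere function: f(x) = sum(x_i^2), global minimum 0 at the origin.
    return np.sum(x ** 2)

def rastrigin(x):
    # Multimodal Rastrigin function: f(x) = 10n + sum(x_i^2 - 10 cos(2 pi x_i)).
    return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

def random_search(f, dim=30, bounds=(-5.12, 5.12), evals=10_000, rng=None):
    # Placeholder optimizer (assumption, not a reviewed algorithm): samples
    # uniformly within the bounds and returns the best objective value found.
    rng = np.random.default_rng() if rng is None else rng
    samples = rng.uniform(bounds[0], bounds[1], size=(evals, dim))
    return min(f(x) for x in samples)

# Typical reporting convention: mean and standard deviation of the best value
# over many independent runs (here 30) for each benchmark function.
if __name__ == "__main__":
    for name, f in (("Sphere", sphere), ("Rastrigin", rastrigin)):
        results = [random_search(f, rng=np.random.default_rng(seed)) for seed in range(30)]
        print(f"{name:10s} mean={np.mean(results):.4e} std={np.std(results):.4e}")

In practice the placeholder optimizer would be replaced by the metaheuristic under study, and the same per-function statistics would then feed into the comparative tables and statistical tests discussed in the review.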
Copyright (c) 2025 Authors

This work is licensed under a Creative Commons Attribution 4.0 International License.