Toward Transparent Optimization: A Systematic Review of Explainable AI in Decision-Making Systems
DOI:
https://doi.org/10.29020/nybg.ejpam.v18i4.6707

Keywords:
Explainable Artificial Intelligence (XAI), Optimization, Metaheuristics

Abstract
The increasing reliance on artificial intelligence (AI) for high-stakes decision-making has heightened the need for systems that prioritize not only accuracy but also interpretability and transparency. Although optimization techniques—such as metaheuristics, mathematical programming, and reinforcement learning—have significantly propelled the development of intelligent systems, their inherent black-box characteristics often hinder trust, accountability, and effective human-AI interaction. This article presents a comprehensive systematic review of the emerging intersection between explainable AI (XAI) and optimization. We explore how interpretability is being systematically incorporated into optimization-driven decision-making pipelines across a variety of application domains. The study offers a critical analysis and classification of existing research, focusing on the integration of XAI methods (e.g., SHAP, LIME, saliency maps) with optimization strategies (e.g., genetic algorithms, simulated annealing, mixed-integer linear programming, and reinforcement learning-based methods). These integrations are examined across sectors such as healthcare, finance, logistics, and energy systems. A structured taxonomy is introduced to categorize hybrid approaches according to their level of explainability, optimization complexity, and domain specificity. In addition, the review highlights key challenges in the field, including the trade-off between performance and interpretability, the absence of standardized benchmarks, and issues related to model scalability. Finally, we outline promising research directions such as the development of explainable hyper-heuristics, domain-adaptable interpretable solvers, and AI frameworks aligned with regulatory standards. By synthesizing this evolving body of knowledge, the article aims to serve as a foundational resource for researchers and practitioners striving to build transparent, trustworthy, and effective optimization-based AI systems.
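To make the kind of integration surveyed here concrete, the following is a minimal, self-contained sketch of one hybrid pattern the abstract describes: a metaheuristic (a simple hill-climbing local search, standing in for a genetic algorithm) produces a solution, and an exact Shapley-value computation (the game-theoretic quantity that SHAP approximates) attributes the final objective value to each decision variable. The objective function, weights, and interaction bonus below are hypothetical toy choices for illustration, not drawn from the article.

```python
from itertools import combinations
from math import factorial
import random

# Hypothetical toy objective over binary decision variables:
# a weighted sum plus one interaction bonus (items 0 and 3 together).
WEIGHTS = [4.0, 2.0, 1.0, 3.0]

def objective(x):
    """Value of a binary solution x (list of 0/1)."""
    base = sum(w * xi for w, xi in zip(WEIGHTS, x))
    return base + (2.0 if x[0] and x[3] else 0.0)

def hill_climb(n, iters=200, seed=0):
    """Simple local search: flip one random bit, keep non-worsening moves."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(iters):
        i = rng.randrange(n)
        y = x[:]
        y[i] ^= 1
        if objective(y) >= objective(x):
            x = y
    return x

def shapley_values(x):
    """Exact Shapley attribution of objective(x) to each variable.
    A variable 'joins a coalition' by taking its value in x; variables
    outside the coalition are set to 0. Feasible only for small n."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for S in combinations(others, r):
                with_i = [x[j] if (j in S or j == i) else 0 for j in range(n)]
                without = [x[j] if j in S else 0 for j in range(n)]
                w = factorial(r) * factorial(n - r - 1) / factorial(n)
                phi[i] += w * (objective(with_i) - objective(without))
    return phi

if __name__ == "__main__":
    best = hill_climb(len(WEIGHTS))
    print("best solution:", best, "objective:", objective(best))
    print("Shapley attributions:", shapley_values(best))
```

By the efficiency property, the attributions sum to the optimized objective value, so each variable's contribution (including its share of the interaction bonus, split between variables 0 and 3) is transparent to a decision-maker. In practice, surveyed works replace the exact computation with sampling-based SHAP estimators, since the exact sum is exponential in the number of variables.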
License
Copyright (c) 2025 Kassem Danach, Wael Hosny Fouad Aly, Abbas Tarhini, Saad Laouadi

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Upon acceptance of an article by the European Journal of Pure and Applied Mathematics, the author(s) retain the copyright to the article. By submitting your work, however, you agree that the article will be published under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). This license allows others to copy, distribute, and adapt your work, provided proper attribution is given to the original author(s) and source; the work may not be used for commercial purposes.
By agreeing to this statement, you acknowledge that:
- You retain full copyright over your work.
- The European Journal of Pure and Applied Mathematics will publish your work under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).
- This license allows others to use and share your work for non-commercial purposes, provided they give appropriate credit to the original author(s) and source.