
A Systematic Review of Explainable Artificial Intelligence (XAI) Techniques for Mitigating Trustworthiness Issues in Medical Image Models

The adoption of Artificial Intelligence (AI) in healthcare, particularly in medical imaging, has shown great potential for enhancing diagnostic accuracy and efficiency. However, the lack of transparency in AI decision-making brings significant ethical, legal, and trust-related challenges. This systematic review investigates the role of Explainable Artificial Intelligence (XAI) techniques in addressing these issues within medical image classification systems. Using the PRISMA methodology, a total of 1,186 articles were initially identified across four major academic databases. After applying inclusion and exclusion criteria, 110 relevant papers were selected for in-depth analysis. The review examines the most common trustworthiness challenges, the XAI methods employed to mitigate them, and how these methods have been integrated into clinical workflows. Findings highlight the critical role of XAI in enhancing model interpretability, supporting clinical validation, and facilitating the responsible deployment of AI in healthcare.

Rafael Porcinio
ISEP
Portugal

Goreti Marreiros
ISEP/GECAD
Portugal