A novel explainable AI framework for medical image classification integrating statistical, visual, and rule-based methods
Summary
This summary is machine-generated. This study introduces a new explainable artificial intelligence (AI) method for medical image analysis that improves the transparency of deep learning models by integrating statistical, visual, and rule-based explanations.
Area Of Science
- Medical Imaging
- Artificial Intelligence
- Deep Learning
Background
- Deep learning models excel in healthcare data analysis but lack transparency due to their "black-box" nature.
- Existing explainable AI (XAI) methods that rely on visualization or rule-based systems alone offer only limited interpretability.
- Interpreting AI decisions is crucial for high-stakes medical applications.
Purpose Of The Study
- To develop a novel XAI method for medical image analysis that integrates statistical, visual, and rule-based explanations.
- To enhance the transparency and interpretability of deep learning models in medical image classification.
- To provide clinicians with deeper insights into AI-driven diagnostic processes.
Main Methods
- A custom MobileNetV2 model extracts deep features from medical images.
- A two-step feature selection process (zero-based filtering followed by mutual importance selection) refines the extracted features.
- Decision tree and RuleFit models generate human-readable rules, complemented by a novel statistical feature-map overlay visualization (mean, skewness, entropy); illustrative sketches of these steps follow this list.
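A minimal sketch of the feature-extraction step, assuming a stock TensorFlow/Keras MobileNetV2 backbone with global average pooling; the paper's custom architecture is not reproduced here, and the 224x224 input size and ImageNet weights are assumptions.

```python
# Hypothetical sketch: deep feature extraction with a MobileNetV2 backbone.
import numpy as np
import tensorflow as tf

# Global-average-pooled MobileNetV2 features (1280 values per image);
# the paper's custom model may differ from this stock backbone.
backbone = tf.keras.applications.MobileNetV2(
    include_top=False, weights="imagenet", pooling="avg", input_shape=(224, 224, 3)
)

def extract_features(images: np.ndarray) -> np.ndarray:
    """images: float array of shape (n, 224, 224, 3) with values in [0, 255]."""
    x = tf.keras.applications.mobilenet_v2.preprocess_input(images)
    return backbone.predict(x, verbose=0)  # shape (n, 1280)
```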
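The two-step selection could look roughly as follows, assuming "zero-based filtering" means dropping feature columns that are zero for every sample and interpreting "mutual importance selection" as mutual-information ranking via scikit-learn; the top_k cutoff is a placeholder, not a value from the paper.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def select_features(X: np.ndarray, y: np.ndarray, top_k: int = 100):
    # Step 1: zero-based filtering -- drop features that are zero for all samples.
    nonzero = np.any(X != 0, axis=0)
    X_nz = X[:, nonzero]
    kept = np.flatnonzero(nonzero)

    # Step 2: rank the remaining features by mutual information with the labels
    # and keep the top_k (one plausible reading of "mutual importance selection").
    mi = mutual_info_classif(X_nz, y, random_state=0)
    order = np.argsort(mi)[::-1][:top_k]
    return X_nz[:, order], kept[order]
```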
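For the rule-based and statistical components, a sketch using a shallow scikit-learn decision tree (a RuleFit model would be handled analogously) and simple per-patch statistics; the patch size, histogram bins, and overlay styling are assumptions rather than the paper's exact visualization.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import skew, entropy
from sklearn.tree import DecisionTreeClassifier, export_text

def tree_rules(X_sel, y, feature_names, max_depth=3):
    # Shallow decision tree on the selected deep features; export_text prints
    # its decision paths as human-readable if/then rules.
    tree = DecisionTreeClassifier(max_depth=max_depth, random_state=0).fit(X_sel, y)
    return export_text(tree, feature_names=list(feature_names))

def stat_overlay(image, patch=16):
    # Per-patch mean, skewness, and entropy maps for a grayscale image,
    # which can be upsampled and overlaid on the image as heatmaps.
    h, w = image.shape
    rows, cols = h // patch, w // patch
    maps = {k: np.zeros((rows, cols)) for k in ("mean", "skewness", "entropy")}
    for i in range(rows):
        for j in range(cols):
            block = image[i * patch:(i + 1) * patch,
                          j * patch:(j + 1) * patch].ravel()
            maps["mean"][i, j] = block.mean()
            maps["skewness"][i, j] = skew(block)
            hist, _ = np.histogram(block, bins=32, density=True)
            maps["entropy"][i, j] = entropy(hist + 1e-12)
    return maps

# Example overlay of one statistic map on top of the image:
# maps = stat_overlay(image)
# plt.imshow(image, cmap="gray")
# plt.imshow(np.kron(maps["entropy"], np.ones((16, 16))), cmap="jet", alpha=0.4)
```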
Main Results
- The proposed XAI method was validated across five diverse medical imaging datasets (COVID-19, breast cancer, brain tumors, lung/colon cancer, glaucoma).
- The integrated approach provided localized and quantifiable visual explanations, enhancing model transparency.
- Results were confirmed by medical experts, indicating practical utility.
Conclusions
- The novel XAI method significantly improves the interpretability of deep learning models in medical image classification.
- The integration of statistical, visual, and rule-based explanations offers a more comprehensive understanding of AI decisions.
- This approach holds promise for increasing trust and adoption of AI in clinical settings.

