A Novel Hybrid XAI Solution for Autonomous Vehicles: Real-Time Interpretability Through LIME-SHAP Integration
View abstract on PubMed
Summary
This summary is machine-generated. The study introduces a hybrid explainable AI (XAI) framework for autonomous vehicles (AVs) that combines LIME and SHAP for transparent AI decision-making, improving model interpretability and efficiency in real-time AV applications.
Area Of Science
- Artificial Intelligence
- Computer Vision
- Robotics
Background
- Advancements in autonomous vehicles (AVs) and artificial intelligence (AI) necessitate transparent decision-making processes.
- Existing explainable AI (XAI) methods involve tradeoffs among local precision, global understanding, and computational efficiency.
- There is a critical need for robust XAI solutions suitable for onboard deployment in safety-critical AV systems.
Purpose Of The Study
- To propose and evaluate a novel hybrid explainable AI (XAI) framework for autonomous vehicles (AVs).
- To combine the strengths of Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP) for enhanced transparency and efficiency.
- To provide a balanced approach for onboard deployment in safety-critical AV applications.
Main Methods
- A hybrid XAI framework integrating LIME and SHAP was developed.
- The framework was evaluated on state-of-the-art models: ResNet-18, ResNet-50, and SegNet-50.
- Performance was assessed using the KITTI dataset, focusing on fidelity, interpretability, and consistency metrics.
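The integration described above can be illustrated with a minimal sketch. This is not the paper's implementation: the linear "model", the finite-difference LIME-style estimate, the zero baseline, and the simple averaging fusion rule are all illustrative assumptions standing in for the real perception network and the `lime`/`shap` libraries.

```python
# Hedged sketch of a hybrid LIME-SHAP attribution (illustrative only).
W = [0.6, -0.3, 0.9]  # hypothetical weights of a toy linear scorer

def model(x):
    """Toy stand-in for a perception model's confidence score."""
    return sum(w * xi for w, xi in zip(W, x))

def shap_values(x, baseline):
    """Exact Shapley values for a linear model: w_i * (x_i - b_i)."""
    return [w * (xi - bi) for w, xi, bi in zip(W, x, baseline)]

def lime_style_values(x, baseline, h=1e-4):
    """LIME-style local contributions: a finite-difference slope at x,
    scaled by each feature's offset from the baseline."""
    fx, out = model(x), []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += h
        slope = (model(xp) - fx) / h
        out.append(slope * (x[i] - baseline[i]))
    return out

def hybrid_explanation(x, baseline):
    """Fuse the two attribution vectors; averaging is one simple rule."""
    s = shap_values(x, baseline)
    l = lime_style_values(x, baseline)
    return [(si + li) / 2 for si, li in zip(s, l)]

x, baseline = [1.0, 2.0, 0.5], [0.0, 0.0, 0.0]
print(hybrid_explanation(x, baseline))  # per-feature attributions
```

For the linear toy model both attributions coincide, so the fusion is easy to sanity-check; on a real network the two methods disagree, which is where a fusion rule earns its keep.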
Main Results
- The hybrid XAI framework achieved a fidelity rate exceeding 85%, an interpretability factor over 80%, and consistency above 70%.
- Inference times were recorded as 0.28s (ResNet-18), 0.571s (ResNet-50), and 3.889s (SegNet), demonstrating suitability for onboard computation.
- The proposed approach consistently outperformed traditional XAI methods in key performance indicators.
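A fidelity rate like the one reported above is commonly computed as the fraction of perturbed inputs on which a local surrogate agrees with the black-box model. The sketch below shows that generic recipe; the perturbation scheme, tolerance, and toy models are illustrative assumptions, not the paper's definitions.

```python
import random

def fidelity_rate(model, surrogate, x, n=200, sigma=0.2, tol=0.1, seed=0):
    """Fraction of Gaussian-perturbed inputs where the surrogate's
    prediction matches the model's within a tolerance (illustrative)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        xp = [xi + rng.gauss(0.0, sigma) for xi in x]
        if abs(model(xp) - surrogate(xp)) <= tol:
            hits += 1
    return hits / n

# Example: a linear "black box" and a slightly biased surrogate.
model = lambda x: 0.6 * x[0] - 0.3 * x[1]
surrogate = lambda x: 0.6 * x[0] - 0.3 * x[1] + 0.02
print(fidelity_rate(model, surrogate, [1.0, 2.0]))  # prints 1.0
```

Here the surrogate's constant 0.02 bias stays within the 0.1 tolerance everywhere, so the rate is 1.0; a poorer surrogate or tighter tolerance would drive it down.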
Conclusions
- The hybrid LIME-SHAP framework offers a balanced solution for XAI in AVs, optimizing transparency and computational performance.
- This research provides a strong foundation for deploying explainable AI in safety-critical autonomous driving systems.
- The developed framework facilitates real-time decision-making by addressing the tradeoffs between model precision and interpretability.

