A literature review of artificial intelligence (AI) for medical image segmentation: from AI and explainable AI to trustworthy AI
Summary
This summary is machine-generated. Explainable AI (XAI) and trustworthy AI (TAI) are advancing medical image segmentation. TAI offers a more reliable and safer approach for clinical applications, with the potential to improve patient outcomes.
Area Of Science
- Medical image analysis
- Artificial intelligence in healthcare
- Computer-assisted diagnosis
Background
- Traditional medical image segmentation is labor-intensive and subjective.
- Conventional AI improves efficiency but lacks transparency and predictability.
- Explainable AI (XAI) and Trustworthy AI (TAI) address AI limitations in clinical settings.
Purpose Of The Study
- To review the evolution of AI in medical image segmentation.
- To highlight the development and impact of XAI and TAI.
- To examine how XAI and TAI enhance AI systems for clinical use.
Main Methods
- Literature synthesis of traditional, AI-based, XAI, and TAI segmentation methods.
- Analysis of XAI principles for transparency and interpretability.
- Examination of TAI's role in improving AI reliability, safety, and accountability.
Main Results
- XAI enhances AI transparency but faces challenges in safety and robustness.
- TAI builds upon XAI, offering a more reliable framework for medical image segmentation.
- TAI integrates XAI principles with enhanced safety and dependability for clinical settings.
Conclusions
- TAI points to a promising future for medical image segmentation.
- TAI offers improved reliability and safety over conventional AI.
- TAI can lead to better clinical outcomes and advance medical image processing.

