Evaluating Biases and Quality Issues in Intermodality Image Translation Studies for Neuroradiology: A Systematic Review
Summary
This summary is machine-generated. Artificial intelligence (AI) models for brain image translation face critical issues that hinder clinical use. Closer collaboration between the medical and engineering fields is essential to improve how these models are reported and validated for clinical application.
Area Of Science
- Medical Imaging
- Artificial Intelligence
- Radiology
Background
- Intermodality image-to-image translation is an AI technique for generating one image modality from another.
- AI models are increasingly used in medical imaging for tasks like brain image translation.
Purpose Of The Study
- To systematically identify and quantify biases and quality issues in AI models for intermodality brain image translation.
- To assess factors preventing the clinical application of these AI models.
Main Methods
- Searched PubMed, Scopus, and IEEE Xplore for AI-based intermodality translation models of radiologic brain images (April 2017 to August 2023).
- Evaluated 102 studies for reporting quality using the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) and for risk of bias using the Prediction Model Risk of Bias Assessment Tool (PROBAST).
- Compared adherence to CLAIM and PROBAST criteria between medically focused and engineering-focused articles.
Main Results
- Median adherence to CLAIM was 69%, and to PROBAST was 38%.
- Medically focused articles showed higher overall CLAIM adherence (73%) than engineering-focused articles (65%).
- Engineering-focused studies adhered better to model-description criteria but had lower overall adherence than medically focused studies.
Conclusions
- Most studies have critical issues preventing clinical application of AI brain image translation.
- Engineering-focused AI studies, while strong in technical description, lag behind medically focused studies in overall adherence.
- Improved reporting and collaboration between medical and engineering fields are crucial for advancing clinical AI applications.

