AttCo: Attention-based co-Learning fusion of deep feature representation for medical image segmentation using multimodality
View abstract on PubMed
Summary
This summary is machine-generated. AttCo, a new multimodal network, improves 3D medical image segmentation by fusing features from multiple imaging modalities. Its attention-based co-learning approach improves the accuracy of abnormal-tissue identification, supporting better clinical outcomes.
Area Of Science
- Medical Imaging
- Computer Vision
- Artificial Intelligence
Background
- Accurate tissue segmentation is vital for disease prediction and treatment planning.
- Current 3D medical image segmentation methods struggle with multimodal data fusion and complex structures.
Purpose Of The Study
- To introduce AttCo, a novel multimodal semantic segmentation network.
- To enhance the learning of complementary information from multiple imaging modalities for 3D segmentation.
Main Methods
- Developed AttCo, an attention-based co-learning fusion network for multimodal 3D semantic segmentation.
- Employed multiple encoder branches for unimodal 3D representation extraction.
- Integrated intra-modality (SEAT) and inter-modality (OSCAT) feature learning for comprehensive fusion.
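The exact SEAT and OSCAT modules are not specified in this summary. Purely as an illustrative sketch, the toy NumPy code below shows the general pattern such a design implies: per-modality encoders produce 3D feature volumes, an SE-style gate reweights channels within each modality, and a cross-modality weighting mixes the two streams. The function names and the simplified weighting scheme are assumptions, not the authors' implementation.

```python
import numpy as np

def se_gate(feat):
    # Intra-modality channel attention (Squeeze-and-Excitation style, sketch):
    # global-average-pool each channel, map to (0, 1), rescale channels.
    # feat: (C, D, H, W) volume from one modality's encoder branch.
    z = feat.mean(axis=(1, 2, 3))              # squeeze: one scalar per channel
    w = 1.0 / (1.0 + np.exp(-z))               # excitation (FC layers omitted)
    return feat * w[:, None, None, None]       # broadcast gate over D, H, W

def cross_modal_fuse(feat_a, feat_b):
    # Inter-modality fusion (sketch): weight each channel by the agreement
    # between the two modalities' pooled descriptors, then blend the gated
    # streams so complementary channels dominate the fused volume.
    za = feat_a.mean(axis=(1, 2, 3))
    zb = feat_b.mean(axis=(1, 2, 3))
    attn = 1.0 / (1.0 + np.exp(-(za * zb)))    # per-channel cross weights
    attn = attn[:, None, None, None]
    return se_gate(feat_a) * attn + se_gate(feat_b) * (1.0 - attn)
```

In a real network the pooled descriptors would pass through learned projections rather than a fixed sigmoid, but the data flow, unimodal gating followed by cross-modality mixing, matches the intra/inter split described above.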
Main Results
- AttCo significantly outperforms existing methods in Dice score across various datasets.
- The network effectively extracts robust unimodal 3D representations.
- Demonstrated effective exploitation of inter- and intra-modality feature interactions.
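The Dice score used above is a standard overlap metric for segmentation masks: twice the intersection of prediction and ground truth divided by the sum of their sizes, ranging from 0 (no overlap) to 1 (perfect overlap). A minimal NumPy version:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    # Dice = 2|P ∩ G| / (|P| + |G|) for binary masks P (prediction)
    # and G (ground truth); eps guards against empty masks.
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

Identical masks score 1.0; disjoint masks score near 0, which is why the metric is a common headline number for comparing segmentation methods.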
Conclusions
- AttCo offers a superior approach to multimodal 3D medical image segmentation.
- The proposed attention-based fusion effectively captures complex feature interactions.
- This method advances clinical analysis through precise abnormal tissue identification.

