Style mixup enhanced disentanglement learning for unsupervised domain adaptation in medical image segmentation
Summary
This summary is machine-generated. This study introduces Style Mixup Enhanced Disentanglement Learning (SMEDL) for unsupervised domain adaptation (UDA) in medical image segmentation. SMEDL improves model generalization and domain-invariant representation learning without relying on image translation, and outperforms existing methods on cardiac and brain segmentation tasks.
Area Of Science
- Medical Image Analysis
- Computer Vision
- Machine Learning
Background
- Unsupervised domain adaptation (UDA) is crucial for medical image segmentation across different modalities.
- Existing UDA methods often rely on image translation, which can compromise semantic consistency and domain-invariant representation.
- There is a need for UDA approaches that enhance generalizability and domain invariance without explicit image translation.
Purpose Of The Study
- To propose a novel method, Style Mixup Enhanced Disentanglement Learning (SMEDL), for UDA in medical image segmentation.
- To improve model generalization and domain-invariant learning capabilities.
- To overcome limitations of existing UDA methods dependent on image translation.
Main Methods
- Employs disentangled style mixup to implicitly generate diverse style-mixed domains in the feature space.
- Introduces pixel-wise consistency regularization to enforce consistent predictions between the original and style-mixed domains.
- Utilizes dual-level domain-invariant learning: intra-domain contrastive learning and inter-domain adversarial learning.
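The core idea of generating style-mixed domains in feature space can be illustrated with an AdaIN-style statistic-mixing sketch. This is a hypothetical reconstruction under common assumptions (channel-wise mean/std as the "style", linear interpolation of statistics, mean-squared pixel-wise consistency); the paper's exact formulation may differ.

```python
import numpy as np

def style_mixup(f_src, f_tgt, lam, eps=1e-6):
    """Mix channel-wise style statistics of two feature maps.

    f_src, f_tgt: feature maps of shape (C, H, W).
    lam in [0, 1]: mixing coefficient (lam=1 keeps the source style).
    Assumed AdaIN-style formulation: mean/std per channel act as "style".
    """
    mu_s = f_src.mean(axis=(1, 2), keepdims=True)
    sd_s = f_src.std(axis=(1, 2), keepdims=True) + eps
    mu_t = f_tgt.mean(axis=(1, 2), keepdims=True)
    sd_t = f_tgt.std(axis=(1, 2), keepdims=True) + eps
    # Interpolated statistics implicitly define a new "style-mixed" domain,
    # so no image-level translation is needed.
    mu_mix = lam * mu_s + (1 - lam) * mu_t
    sd_mix = lam * sd_s + (1 - lam) * sd_t
    # Normalize out the source style, then re-style with mixed statistics;
    # the semantic content of f_src is preserved.
    return sd_mix * (f_src - mu_s) / sd_s + mu_mix

def pixelwise_consistency(p_orig, p_mixed):
    """Assumed pixel-wise consistency loss: mean squared difference between
    segmentation predictions on original vs. style-mixed features."""
    return float(np.mean((p_orig - p_mixed) ** 2))
```

With `lam=1.0` the mixed statistics equal the source statistics, so `style_mixup` returns `f_src` unchanged; sampling `lam` (e.g. from a Beta distribution, a common choice in mixup variants) yields a continuum of intermediate-style domains for the consistency loss to regularize over.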
Main Results
- Demonstrates superior performance on cardiac and brain medical image segmentation tasks.
- Achieves improved domain generalization and domain-invariant learning.
- Outperforms state-of-the-art methods in UDA medical image segmentation.
Conclusions
- SMEDL offers an effective approach for UDA medical image segmentation by avoiding image translation.
- The proposed method enhances model generalization and domain-invariant representation learning.
- SMEDL shows significant potential for improving cross-modality medical image segmentation.

