LIU Yusi 1,2,3 , QI Liangce 1,2,3 , DIAO Zhaoheng 1,2,3 , FENG Guanyuan 1,2,3 , LI Yuqin 1,2,3 , JIANG Zhengang 1,2,3
  • 1. School of Computer Science and Technology, Changchun University of Science and Technology, Changchun 130022, P. R. China;
  • 2. Key Laboratory of Medical Imaging Intelligence Technology of Jilin Province, Changchun University of Science and Technology, Changchun 130022, P. R. China;
  • 3. Jilin Cross-regional Science and Technology Innovation Center of Medical Intelligent Technology and Precision Diagnosis and Treatment Equipment, Changchun University of Science and Technology, Changchun 130022, P. R. China;
JIANG Zhengang, Email: jiangzhengang@cust.edu.cn

Cross-modal unsupervised domain adaptation (UDA) aims to transfer segmentation models trained on a labeled source modality to an unlabeled target modality. However, existing methods often fail to fully exploit shape priors and intermediate feature representations, limiting the model's generalization in cross-modal transfer tasks. To address this challenge, we propose a segmentation model based on shape-aware adaptive weighting (SAWS) that enhances the model's ability to perceive the target area and to capture both global and local information. Specifically, we design a multi-angle strip-shaped shape perception (MSSP) module that captures shape features from multiple orientations through an angular pooling strategy, improving structural modeling under cross-modal settings. In addition, an adaptive weighted hierarchical contrastive (AWHC) loss is introduced to fully leverage intermediate features and to improve segmentation accuracy for small target structures. The proposed method is evaluated on the multi-modality whole heart segmentation (MMWHS) dataset. Experimental results demonstrate that SAWS achieves superior performance in cross-modal cardiac segmentation tasks, with a Dice score (Dice) of 70.1% and an average symmetric surface distance (ASSD) of 4.0 for the computed tomography (CT)→magnetic resonance imaging (MRI) task, and a Dice of 83.8% and an ASSD of 3.7 for the MRI→CT task, outperforming existing state-of-the-art methods. Overall, this study proposes a shape-aware cross-modal medical image segmentation method that effectively improves the structure-aware ability and generalization performance of the UDA model.
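The abstract does not define the angular pooling operation in detail; the sketch below is only an illustrative interpretation of multi-orientation strip pooling, not the paper's MSSP implementation. It averages a 2D feature map along horizontal, vertical, and diagonal strips, so each orientation yields a 1D shape descriptor; the function name `strip_pool` and the choice of four angles (0°, 45°, 90°, 135°) are assumptions for illustration.

```python
import numpy as np

def strip_pool(feat, angle):
    """Average-pool a 2D feature map along one orientation.

    angle: 0 = horizontal strips (one value per row),
           90 = vertical strips (one value per column),
           45 / 135 = diagonal / anti-diagonal strips.
    Returns a 1D vector of per-strip means (a shape descriptor).
    NOTE: illustrative sketch only; the paper's MSSP module is not public.
    """
    if angle == 0:
        return feat.mean(axis=1)
    if angle == 90:
        return feat.mean(axis=0)
    h, w = feat.shape
    # For 135 degrees, flip left-right so anti-diagonals become diagonals.
    grid = np.fliplr(feat) if angle == 135 else feat
    # k indexes the diagonals from bottom-left (-(h-1)) to top-right (w-1).
    return np.array([np.diag(grid, k).mean() for k in range(-(h - 1), w)])

# Pool a toy 4x4 feature map from all four orientations.
feat = np.arange(16, dtype=float).reshape(4, 4)
descriptors = {a: strip_pool(feat, a) for a in (0, 45, 90, 135)}
```

In a segmentation backbone, the per-orientation descriptors would typically be fused (e.g., broadcast back to the feature map and summed) so that elongated anatomical structures are emphasized from several directions at once.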

Copyright © the editorial department of Journal of Biomedical Engineering of West China Medical Publisher. All rights reserved