Colorectal cancer (CRC) is a common malignant tumor that seriously threatens human health, and its indistinct boundaries make accurate identification a formidable challenge. With the widespread adoption of convolutional neural networks (CNNs) in image processing, leveraging CNNs for automatic classification and segmentation holds great potential for improving the efficiency of CRC recognition and reducing treatment costs. This paper examines the clinical need for applying CNNs to CRC diagnosis and provides a detailed overview of research advances in CNNs and their improved variants for CRC classification and segmentation. It further summarizes the ideas and common methods for optimizing network performance, and discusses the challenges CNNs face and the future development trends of their application to CRC classification and segmentation, with the aim of promoting their adoption in clinical diagnosis.
Retinopathy of prematurity (ROP) is a major cause of vision loss and blindness among premature infants. Timely screening, diagnosis, and intervention can effectively prevent the deterioration of ROP. However, ROP diagnosis faces several challenges globally, including high subjectivity, low screening efficiency, regional disparities in screening coverage, and a severe shortage of pediatric ophthalmologists. Applying artificial intelligence (AI), whether as an assistive tool or as a fully automated method for ROP diagnosis, can improve diagnostic efficiency and objectivity, expand screening coverage, and enable automated screening with quantified diagnostic results. In a global environment that emphasizes the development and application of medical imaging AI, developing more accurate diagnostic networks, exploring more effective AI-assisted diagnosis methods, and enhancing the interpretability of AI-assisted diagnosis can accelerate the refinement of AI policies for ROP and the deployment of AI products, thereby advancing ROP diagnosis and treatment.
To address the problems of missing important features, inconspicuous details, and unclear textures in multimodal medical image fusion, this paper proposes a method for fusing computed tomography (CT) and magnetic resonance imaging (MRI) images that combines a generative adversarial network (GAN) and a convolutional neural network (CNN) under image enhancement. The generator targeted the high-frequency feature images, while dual discriminators targeted the fused images after the inverse transform; the high-frequency feature images were then fused by the trained GAN model, and the low-frequency feature images were fused by a pre-trained CNN model based on transfer learning. Experimental results showed that, compared with current advanced fusion algorithms, the proposed method produced richer texture details and clearer contour edge information in subjective evaluation. In the objective evaluation, the edge information transfer factor (Q^AB/F), information entropy (IE), spatial frequency (SF), structural similarity (SSIM), mutual information (MI), and visual information fidelity for fusion (VIFF) were 2.0%, 6.3%, 7.0%, 5.5%, 9.0%, and 3.3% higher, respectively, than the best results of the compared methods. The fused images can be effectively applied to medical diagnosis to further improve diagnostic efficiency.
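To make the decomposition-then-fusion pipeline concrete, the following is a minimal sketch of frequency-band fusion using a discrete wavelet transform. The abstract does not specify the transform used, and the GAN and CNN fusion models are replaced here by simple max-absolute and averaging rules, so everything beyond the high-/low-frequency split is an illustrative assumption.

```python
# Minimal frequency-band fusion sketch (illustrative stand-in for the
# paper's GAN/CNN fusion of high-/low-frequency components).
import numpy as np
import pywt  # PyWavelets

def fuse_ct_mri(ct: np.ndarray, mri: np.ndarray) -> np.ndarray:
    # One-level 2-D DWT splits each image into a low-frequency band (cA)
    # and three high-frequency bands (cH, cV, cD).
    cA1, (cH1, cV1, cD1) = pywt.dwt2(ct, "db1")
    cA2, (cH2, cV2, cD2) = pywt.dwt2(mri, "db1")

    # Low-frequency bands: averaging (the paper uses a transfer-learned CNN here).
    cA = (cA1 + cA2) / 2.0

    # High-frequency bands: max-absolute rule (the paper uses a trained GAN here).
    def max_abs(a, b):
        return np.where(np.abs(a) >= np.abs(b), a, b)

    bands = tuple(max_abs(a, b) for a, b in ((cH1, cH2), (cV1, cV2), (cD1, cD2)))
    return pywt.idwt2((cA, bands), "db1")  # inverse transform -> fused image
```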
To address issues such as loss of detailed information, blurred target boundaries, and unclear structural hierarchy in medical image fusion, this paper proposes an adaptive feature medical image fusion network based on a full-scale diffusion model. First, a region-level feature map is generated using a kernel-based saliency map to enhance local features and boundary details. Then, a full-scale diffusion feature extraction network is employed for global feature extraction, alongside a multi-scale denoising U-shaped network designed to fully capture cross-layer information. A multi-scale feature integration module is introduced to reinforce the texture details and structural information extracted by the encoder. Finally, an adaptive fusion scheme is applied to progressively fuse region-level features, global features, and source images layer by layer, enhancing the preservation of detail information. To validate its effectiveness, the proposed model is evaluated on the publicly available Harvard dataset and an abdominal dataset. Compared with nine representative image fusion methods, the proposed approach achieved improvements across seven evaluation metrics. The results demonstrate that the proposed method effectively extracts both global and local features of medical images, enhances texture details and target boundary clarity, and generates fused images with high contrast and rich information, providing more reliable support for subsequent clinical diagnosis.
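As a rough illustration of the region-level weighting idea, the sketch below builds a saliency map from local contrast and uses it to blend two source images. The abstract does not disclose the actual kernel or fusion weights, so the Laplacian kernel, Gaussian smoothing, and normalized weighting here are all assumptions.

```python
# Illustrative region-level fusion via a kernel-based saliency map.
# The specific kernel and weighting are assumptions, not the paper's design.
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def saliency(img: np.ndarray) -> np.ndarray:
    # Local contrast: absolute Laplacian response, smoothed into regions.
    return gaussian_filter(np.abs(convolve(img, LAPLACIAN)), sigma=3)

def region_level_fuse(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    sa, sb = saliency(a), saliency(b)
    w = sa / (sa + sb + 1e-8)  # per-pixel weight favoring the more salient source
    return w * a + (1.0 - w) * b
```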
Medical image segmentation based on deep learning has become a powerful tool in medical image processing. Due to the special nature of medical images, deep learning segmentation algorithms face problems such as sample imbalance, blurred edges, false positives, and false negatives. To address these problems, researchers mostly improve the network structure, but rarely make improvements from non-structural aspects. The loss function is an important component of deep learning segmentation methods. Improving the loss function can improve a network's segmentation performance at its root, and because the loss function is independent of the network structure, it can be applied in a plug-and-play manner across network models and segmentation tasks. Starting from the difficulties in medical image segmentation, this paper first introduces loss functions and improvement strategies for addressing sample imbalance, blurred edges, false positives, and false negatives. It then analyzes the difficulties encountered in current loss function improvements. Finally, future research directions are discussed. This paper provides a reference for the reasonable selection, improvement, or design of loss functions, and points the way for follow-up research on loss functions.
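For concreteness, the following is a minimal PyTorch sketch of the soft Dice loss, one of the standard loss functions used against sample imbalance in this line of work; it plugs into any segmentation network unchanged, which is exactly the plug-and-play property the abstract highlights. The smoothing constant is a common but arbitrary choice.

```python
# Soft Dice loss: a standard remedy for foreground/background imbalance
# in medical image segmentation; network-agnostic and plug-and-play.
import torch

def dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """pred: sigmoid probabilities, target: binary mask, both shaped (N, 1, H, W)."""
    inter = (pred * target).sum(dim=(1, 2, 3))
    union = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = (2 * inter + eps) / (union + eps)
    return 1.0 - dice.mean()  # lower is better; 0 means perfect overlap
```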
To address the challenges faced by current brain midline segmentation techniques, such as insufficient accuracy and poor segmentation continuity, this paper proposes a deep learning network model based on a two-stage framework. In the first stage, prior knowledge of the feature consistency of adjacent brain midline slices under normal and pathological conditions is utilized: associated midline slices are selected through slice similarity analysis, and a novel feature weighting strategy collaboratively fuses the overall change characteristics and spatial information of these associated slices, thereby enhancing the feature representation of the brain midline in the intracranial region. In the second stage, an optimal path search strategy for the brain midline is applied to the network's output probability map, which effectively addresses the problem of discontinuous midline segmentation. The proposed method achieved satisfactory results on the CQ500 dataset provided by the Centre for Advanced Research in Imaging, Neurosciences and Genomics, New Delhi, India. The Dice similarity coefficient (DSC), Hausdorff distance (HD), average symmetric surface distance (ASSD), and normalized surface Dice (NSD) were 67.38 ± 10.49, 24.22 ± 24.84, 1.33 ± 1.83, and 0.82 ± 0.09, respectively. The experimental results demonstrate that the proposed method can fully utilize the prior knowledge of medical images to achieve accurate segmentation of the brain midline, providing valuable assistance for subsequent identification of the brain midline by clinicians.
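The optimal path search in the second stage can be read as a classic dynamic program over the probability map: each row picks one midline column, adjacent rows may shift by at most one column, and the path maximizes accumulated probability, which guarantees continuity by construction. The sketch below implements that generic formulation; the paper's exact transition constraints and scoring are not specified, so they are assumptions.

```python
# Generic dynamic-programming path search over a midline probability map.
# Rows are vertical positions, columns are candidate midline x-coordinates;
# the +/-1 column transition constraint is an assumed smoothness prior.
import numpy as np

def optimal_midline_path(prob: np.ndarray) -> np.ndarray:
    h, w = prob.shape
    score = prob.copy()
    back = np.zeros((h, w), dtype=int)
    for r in range(1, h):
        for c in range(w):
            lo, hi = max(0, c - 1), min(w, c + 2)
            prev = lo + int(np.argmax(score[r - 1, lo:hi]))
            score[r, c] += score[r - 1, prev]
            back[r, c] = prev
    # Backtrack from the best final-row column to recover a continuous path.
    path = np.empty(h, dtype=int)
    path[-1] = int(np.argmax(score[-1]))
    for r in range(h - 1, 0, -1):
        path[r - 1] = back[r, path[r]]
    return path  # one midline column index per row
```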
Colorectal polyps are important early markers of colorectal cancer, and their early detection is crucial for cancer prevention. Although existing polyp segmentation models have achieved certain results, they still face challenges such as diverse polyp morphology, blurred boundaries, and insufficient feature extraction. To address these issues, this study proposes a parallel coordinate fusion network (PCFNet), aiming to improve the accuracy and robustness of polyp segmentation. PCFNet integrates parallel convolutional modules and a coordinate attention mechanism, preserving global feature information while precisely capturing detailed features, and thereby effectively segmenting polyps with complex boundaries. Experimental results on Kvasir-SEG and CVC-ClinicDB demonstrate the strong performance of PCFNet across multiple metrics. Specifically, on the Kvasir-SEG dataset, PCFNet achieved an F1-score of 0.8974 and a mean intersection over union (mIoU) of 0.8358; on the CVC-ClinicDB dataset, it attained an F1-score of 0.9398 and an mIoU of 0.8923. Compared with other methods, PCFNet shows significant improvements across all performance metrics, particularly in multi-scale feature fusion and spatial information capture. The proposed method provides a more reliable AI-assisted diagnostic tool for early colorectal cancer screening.
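The coordinate attention mechanism PCFNet integrates is, in the widely used formulation of Hou et al. (2021), a pair of direction-aware poolings whose outputs gate the feature map along height and width. The sketch below follows that published formulation; whether PCFNet modifies it, and the reduction ratio used, are assumptions.

```python
# Coordinate attention (after Hou et al., 2021): pool along each spatial
# axis separately, then re-weight features with direction-aware gates.
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # aggregate along width
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # aggregate along height
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        xh = self.pool_h(x)                      # (n, c, h, 1)
        xw = self.pool_w(x).permute(0, 1, 3, 2)  # (n, c, w, 1)
        y = self.act(self.bn(self.conv1(torch.cat([xh, xw], dim=2))))
        yh, yw = torch.split(y, [h, w], dim=2)
        ah = torch.sigmoid(self.conv_h(yh))                      # height gate
        aw = torch.sigmoid(self.conv_w(yw.permute(0, 1, 3, 2)))  # width gate
        return x * ah * aw
```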
Computer-aided diagnosis (CAD) systems play a very important role in modern medical diagnosis and treatment, but their performance is limited by the available training samples. Training samples are constrained by factors such as imaging cost, labeling cost, and patient privacy, resulting in insufficient diversity of training images and difficulty in data acquisition. Therefore, how to efficiently and cost-effectively augment existing medical image datasets has become a research hotspot. This paper reviews the research progress on medical image dataset expansion methods based on the relevant domestic and international literature. First, expansion methods based on geometric transformations and on generative adversarial networks are compared and analyzed; then improvements to augmentation methods based on generative adversarial networks are highlighted. Finally, pressing problems in medical image dataset expansion are discussed and future development trends are outlined.
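As a concrete example of the geometric-transformation family of methods, the snippet below composes standard flips, rotations, and affine jitter with torchvision; the specific transform choices and parameter ranges are illustrative assumptions, not recommendations from the reviewed literature.

```python
# Geometric-transformation augmentation with torchvision: each epoch sees
# randomly flipped, rotated, and shifted variants of every training image.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=10),
    transforms.RandomAffine(degrees=0, translate=(0.05, 0.05), scale=(0.95, 1.05)),
    transforms.ToTensor(),
])
# Usage: pass `augment` as the `transform` argument of a torchvision dataset,
# e.g. datasets.ImageFolder("train/", transform=augment).
```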
Intelligent medical image segmentation methods have developed rapidly and are widely applied, yet a significant challenge is domain shift: segmentation performance degrades due to distribution differences between the source domain and the target domain. This paper proposed an unsupervised end-to-end domain adaptation method for medical image segmentation based on a generative adversarial network (GAN). A network training and adjustment model was designed, consisting of a segmentation network and a discriminator network. In the segmentation network, the residual module was used as the basic building block to increase feature reusability and reduce the difficulty of model optimization. With the help of the discriminator network and a combination of segmentation loss and adversarial loss, the model learned cross-domain features at the image feature level. The discriminator was a convolutional neural network that, using labels from the source domain, distinguished whether a segmentation result produced by the segmentation network came from the source domain or the target domain. The whole training process was unsupervised. The proposed method was tested on a public dataset of knee magnetic resonance (MR) images and a clinical dataset from our cooperative hospital. With our method, the mean Dice similarity coefficient (DSC) of the segmentation results increased by 2.52% and 6.10% compared with classical feature-level and image-level domain adaptation methods, respectively. The proposed method effectively improves the domain adaptation ability of the segmentation model, significantly improves the segmentation accuracy of the tibia and femur, and better addresses the domain shift problem in MR image segmentation.
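The joint objective can be illustrated with a minimal adversarial training step: a supervised segmentation loss on source images plus an adversarial term that pushes target-domain predictions to look source-like to the discriminator. The network definitions, loss weight, and optimizer settings below are placeholders, not the paper's configuration.

```python
# One training step of adversarial domain adaptation for segmentation.
# `seg_net` and `disc` are assumed nn.Module instances; lambda_adv is a placeholder.
import torch
import torch.nn.functional as F

def train_step(seg_net, disc, opt_seg, opt_disc,
               src_img, src_mask, tgt_img, lambda_adv: float = 0.01):
    # 1) Update segmentation network: supervised on source, adversarial on target.
    opt_seg.zero_grad()
    src_pred = seg_net(src_img)
    tgt_pred = seg_net(tgt_img)
    loss_seg = F.cross_entropy(src_pred, src_mask)  # src_mask: (N, H, W) class indices
    d_out = disc(F.softmax(tgt_pred, dim=1))
    # Fool the discriminator: make target predictions look like source ones.
    loss_adv = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
    (loss_seg + lambda_adv * loss_adv).backward()
    opt_seg.step()

    # 2) Update discriminator: source predictions -> 1, target predictions -> 0.
    opt_disc.zero_grad()
    d_src = disc(F.softmax(src_pred.detach(), dim=1))
    d_tgt = disc(F.softmax(tgt_pred.detach(), dim=1))
    loss_d = (F.binary_cross_entropy_with_logits(d_src, torch.ones_like(d_src)) +
              F.binary_cross_entropy_with_logits(d_tgt, torch.zeros_like(d_tgt)))
    loss_d.backward()
    opt_disc.step()
```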
To address the problems of low accuracy and large boundary-distance errors in anterior cruciate ligament (ACL) segmentation of knee joint images, this paper proposes an ACL image segmentation model that fuses dilated convolutions with a residual hybrid attention U-shaped network (DRH-UNet). The model builds upon the U-shaped network (U-Net) by incorporating dilated convolutions to expand the receptive field, enabling a better understanding of the contextual relationships within the image. Additionally, a residual hybrid attention block is designed in the skip connections to enhance the expression of critical features in key regions and reduce the semantic gap, thereby improving the representation of the ACL area. This study constructs an enhanced annotated ACL dataset based on the publicly available Magnetic Resonance Imaging Network (MRNet) dataset. The proposed method is validated on this dataset, and the experimental results demonstrate that DRH-UNet achieves a Dice similarity coefficient (DSC) of (88.01 ± 1.57)% and a Hausdorff distance (HD) of 5.16 ± 0.85, outperforming other ACL segmentation methods. The proposed approach further improves the segmentation accuracy of the ACL, providing valuable assistance for subsequent clinical diagnosis by physicians.
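The receptive-field idea can be illustrated with a small residual block that stacks dilated convolutions: each increase in dilation rate widens the spatial context a 3×3 kernel sees without additional downsampling. The channel counts and dilation rates below are illustrative, and the paper's residual hybrid attention block is not reproduced here.

```python
# Residual block with stacked dilated convolutions: rates (1, 2, 4) widen
# the receptive field without extra downsampling. Rates are illustrative.
import torch
import torch.nn as nn

class DilatedResBlock(nn.Module):
    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        layers = []
        for d in dilations:
            layers += [
                nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            ]
        self.body = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)  # residual connection eases optimization
```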