To address the large number of parameters and the heavy floating-point workload of deep learning networks applied to image segmentation for cardiac magnetic resonance imaging (MRI), this paper proposes a lightweight dilated parallel convolution U-Net (DPU-Net) that reduces both the parameter count and the number of floating-point operations. Additionally, a multi-scale adaptation vector knowledge distillation (MAVKD) training strategy is employed to extract latent knowledge from a teacher network, thereby enhancing the segmentation accuracy of DPU-Net. The proposed network adopts a distinctive convolutional channel-variation scheme to reduce the number of parameters and combines it with residual blocks and dilated convolutions to alleviate the gradient explosion and spatial information loss that the reduction in parameters might cause. The research findings indicate that the network achieves considerable improvements in parameter count and floating-point efficiency. On the public dataset of the automatic cardiac diagnosis challenge (ACDC), the Dice coefficient reaches 91.26%. These results validate the effectiveness of the proposed lightweight network and knowledge distillation strategy, providing a reliable lightweighting approach for deep learning in medical image segmentation.
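The distillation strategy can be illustrated with a generic teacher-student loss that mixes hard-label cross-entropy with a temperature-softened divergence between teacher and student outputs. This is a minimal plain-Python sketch of standard knowledge distillation; the temperature `t`, mixing weight `alpha`, and per-sample logits are illustrative assumptions, not the paper's exact MAVKD objective.

```python
import math

def softmax(logits, t=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(x / t) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distill_loss(student_logits, teacher_logits, label, t=2.0, alpha=0.5):
    """Mix hard-label cross-entropy with a soft teacher-student KL term.
    The t*t factor keeps the soft term's gradient scale comparable."""
    p_s = softmax(student_logits)
    ce = -math.log(p_s[label])                    # hard-label cross-entropy
    p_s_t = softmax(student_logits, t)            # softened student
    p_t_t = softmax(teacher_logits, t)            # softened teacher
    kl = sum(pt * math.log(pt / ps) for pt, ps in zip(p_t_t, p_s_t))
    return alpha * ce + (1 - alpha) * (t * t) * kl
```

With `alpha=0` and identical teacher and student logits the loss vanishes, since the KL term is zero when the two softened distributions coincide.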
To overcome the difficulty of lung parenchyma segmentation caused by factors such as lung disease and bronchial interference, a three-dimensional lung parenchyma segmentation algorithm is presented based on the integration of the surfacelet transform and a pulse coupled neural network (PCNN). First, the three-dimensional lung computed tomography volume is decomposed by the surfacelet transform to obtain multi-scale and multi-directional sub-band information. The edge features are then enhanced by filtering the sub-band coefficients with a local modified Laplacian operator. Second, the inverse surfacelet transform is applied and the reconstructed image is fed to the input of the PCNN. Finally, the PCNN is iterated to obtain the final segmentation result. The proposed algorithm is validated on samples from a public dataset. The experimental results demonstrate that it outperforms the three-dimensional surfacelet-transform edge detection algorithm, the three-dimensional region growing algorithm, and the three-dimensional U-Net. It effectively suppresses interference from lung lesions and bronchi and obtains a complete lung parenchyma structure.
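The PCNN iteration at the core of the method can be sketched in simplified form: each neuron receives its pixel intensity as stimulus, is modulated by linking input from neighbours that have already fired, and fires when the modulated activity exceeds a decaying threshold. The 4-neighbourhood, linking strength `beta`, and geometric threshold decay below are toy assumptions for a 2-D slice, not the paper's 3-D parameterization.

```python
def pcnn_segment(img, beta=0.2, theta0=1.0, decay=0.8, iters=5):
    """Simplified PCNN on a 2-D list of intensities in [0, 1].
    A neuron fires when stimulus * (1 + beta * linking) > theta,
    where linking counts fired 4-neighbours and theta decays each pass."""
    h, w = len(img), len(img[0])
    fired = [[0] * w for _ in range(h)]
    theta = theta0
    for _ in range(iters):
        new = [row[:] for row in fired]           # fired neurons stay fired
        for i in range(h):
            for j in range(w):
                link = sum(fired[i + di][j + dj]
                           for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                           if 0 <= i + di < h and 0 <= j + dj < w)
                if img[i][j] * (1 + beta * link) > theta:
                    new[i][j] = 1
        fired = new
        theta *= decay                            # threshold decay
    return fired
```

As the threshold decays, bright (parenchyma-like) pixels fire first and pull in similar neighbours through the linking term, while dark pixels never reach the threshold within the iteration budget.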
To address the challenges faced by current brain midline segmentation techniques, such as insufficient accuracy and poor segmentation continuity, this paper proposes a deep learning network model based on a two-stage framework. In the first stage, the model exploits prior knowledge of the feature consistency of adjacent brain midline slices under normal and pathological conditions. Associated midline slices are selected through slice similarity analysis, and a novel feature weighting strategy collaboratively fuses the overall change characteristics and spatial information of these associated slices, thereby enhancing the feature representation of the brain midline in the intracranial region. In the second stage, an optimal path search over the network's output probability map effectively addresses the problem of discontinuous midline segmentation. The proposed method achieved satisfactory results on the CQ500 dataset provided by the Center for Advanced Research in Imaging, Neurosciences and Genomics, New Delhi, India. The Dice similarity coefficient (DSC), Hausdorff distance (HD), average symmetric surface distance (ASSD), and normalized surface Dice (NSD) were 67.38 ± 10.49, 24.22 ± 24.84, 1.33 ± 1.83, and 0.82 ± 0.09, respectively. The experimental results demonstrate that the proposed method fully utilizes the prior knowledge of medical images to achieve accurate segmentation of the brain midline, providing valuable assistance for subsequent identification of the brain midline by clinicians.
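The second-stage optimal path search can be sketched as a seam-style dynamic program over the probability map: the path descends one row at a time, moving at most one column per step, and maximizes the summed midline probability, which guarantees a continuous result by construction. This is a minimal sketch under those assumptions; the abstract does not specify the paper's exact search strategy.

```python
def best_midline_path(prob):
    """Highest-probability top-to-bottom path through a 2-D probability
    map (list of rows), moving at most one column per row. Returns the
    column index chosen in each row."""
    h, w = len(prob), len(prob[0])
    score = [prob[0][:]]                  # best cumulative score per cell
    back = []                             # backpointers per row
    for i in range(1, h):
        row, brow = [], []
        for j in range(w):
            cands = [(score[-1][k], k) for k in (j - 1, j, j + 1) if 0 <= k < w]
            s, k = max(cands)             # best predecessor column
            row.append(s + prob[i][j])
            brow.append(k)
        score.append(row)
        back.append(brow)
    j = max(range(w), key=lambda c: score[-1][c])
    path = [j]
    for brow in reversed(back):           # trace back from the best endpoint
        j = brow[j]
        path.append(j)
    path.reverse()
    return path
```

Because every step connects adjacent columns, the recovered midline can never jump, which is exactly the continuity property the second stage is designed to enforce.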
To address the low accuracy and large boundary-distance deviations in anterior cruciate ligament (ACL) segmentation of knee joint images, this paper proposes an ACL segmentation model that fuses dilated convolution with a residual hybrid attention U-shaped network (DRH-UNet). The proposed model builds upon the U-shaped network (U-Net) by incorporating dilated convolutions to expand the receptive field, enabling a better understanding of the contextual relationships within the image. Additionally, a residual hybrid attention block is designed into the skip connections to enhance the expression of critical features in key regions and reduce the semantic gap, thereby improving the representation capability for the ACL area. This study constructs an enhanced annotated ACL dataset based on the publicly available Magnetic Resonance Imaging Network (MRNet) dataset. The proposed method is validated on this dataset, and the experimental results demonstrate that DRH-UNet achieves a Dice similarity coefficient (DSC) of (88.01±1.57)% and a Hausdorff distance (HD) of 5.16±0.85, outperforming other ACL segmentation methods. The proposed approach further improves ACL segmentation accuracy, providing valuable assistance for subsequent clinical diagnosis by physicians.
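The benefit of dilated convolutions claimed above is easy to quantify: the receptive field of a convolution stack grows with the dilation rate at no extra parameter cost. The helper below computes the receptive field for layers given as (kernel, dilation, stride) tuples; the configurations in the usage note are illustrative, not the DRH-UNet architecture.

```python
def receptive_field(layers):
    """Receptive field (in input pixels) of stacked convolutions.
    Each layer is a (kernel, dilation, stride) tuple; the effective
    kernel extent of a layer is 1 + (kernel - 1) * dilation."""
    rf, jump = 1, 1
    for k, d, s in layers:
        rf += (k - 1) * d * jump          # widen by the effective kernel
        jump *= s                         # stride compounds across layers
    return rf
```

For example, three stacked 3x3 convolutions see a 7-pixel extent, while the same stack with dilation 2 sees 13 pixels with an identical parameter count.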
In computer-aided medical diagnosis, labeled medical image data are expensive to obtain, while demand for model interpretability is high. However, most current deep learning models require large amounts of data and lack interpretability. To address these challenges, this paper proposes a novel data augmentation method for medical image segmentation. Its uniqueness and advantage lie in using gradient-weighted class activation mapping to extract data-efficient features, which are then fused with the original image. A new channel-weight feature extractor is then constructed to learn the weights between different channels. This approach achieves non-destructive data augmentation, enhancing the model's performance, data efficiency, and interpretability. Applied to the Hyper-Kvasir dataset, the method improved both the intersection over union (IoU) and the Dice coefficient of U-Net; on the ISIC-Archive dataset, it likewise improved the IoU and Dice of DeepLabV3+. Furthermore, even when the training data are reduced to 70%, the proposed method still reaches 95% of the performance achieved with the full dataset, indicating good data efficiency. Moreover, the data-efficient features carry built-in interpretable information, which enhances the interpretability of the model. The method is highly universal and plug-and-play: it applies to various segmentation methods without modifying the network structure, and is thus easy to integrate into existing medical image segmentation pipelines, enhancing the convenience of future research and applications.
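The gradient-weighted class activation mapping step can be sketched as follows: each activation channel is weighted by the spatial average of its gradients, the weighted channels are summed, and a ReLU keeps only positively contributing regions. This is the standard Grad-CAM computation in plain Python over nested lists; how the resulting map is fused with the original image is specific to the paper and not reproduced here.

```python
def grad_cam(activations, gradients):
    """Grad-CAM over channel-first feature maps (lists of 2-D lists).
    Channel weight = global average pool of that channel's gradients;
    the map is ReLU(sum of weighted activation channels)."""
    h, w = len(activations[0]), len(activations[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for act, grad in zip(activations, gradients):
        wgt = sum(sum(row) for row in grad) / (h * w)   # GAP of gradients
        for i in range(h):
            for j in range(w):
                cam[i][j] += wgt * act[i][j]
    return [[max(0.0, v) for v in row] for row in cam]  # ReLU
```

Channels whose gradients are negative on average are suppressed by the ReLU, so the map highlights only regions that support the target class.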
This paper presents an automatic white blood cell segmentation method based on information fusion in a corrected HSI colour space. First, the original cell image is converted to the HSI colour space. Because the piecewise function defining the H component is discontinuous, cytoplasm regions that appear visually uniform in the original image lose uniformity in this channel. We therefore modified the formula, then extracted the nucleus, cytoplasm, red blood cell, and background regions according to the distribution characteristics of the H, S, and I channels, and used information fusion to build fusion image I and fusion image II, the latter containing only cytoplasm and a small amount of interference, from which the nucleus and cytoplasm were extracted respectively. Finally, we marked the nucleus and cytoplasm regions and obtained the final segmentation result. Simulation results showed that the new white blood cell segmentation algorithm has high accuracy, robustness, and universality.
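For reference, the textbook RGB-to-HSI conversion reads as below; the abstract does not give the authors' corrected H formula, so this sketch shows the standard (discontinuity-prone) variant that they set out to modify.

```python
import math

def rgb_to_hsi(r, g, b):
    """Standard RGB -> HSI conversion for components in [0, 1].
    Returns (h, s, i) with h normalized to [0, 1). The acos-based H
    formula is the classic piecewise definition the paper corrects."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b)) or 1e-12  # avoid /0
    h = math.acos(max(-1.0, min(1.0, num / den)))
    if b > g:                          # piecewise branch: reflect the angle
        h = 2 * math.pi - h
    return h / (2 * math.pi), s, i
```

The `if b > g` branch is the piecewise jump the abstract refers to: pixels straddling that branch can receive very different H values even when they look uniform in RGB.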
Most current medical image segmentation models are primarily built upon the U-shaped network (U-Net) architecture, which has certain limitations in capturing both global contextual information and fine-grained details. To address this issue, this paper proposes a novel U-shaped network model, termed the Multi-View U-Net (MUNet), which integrates self-attention and multi-view attention mechanisms. Specifically, a newly designed multi-view attention module is introduced to aggregate semantic features from different perspectives, thereby enhancing the representation of fine details in images. Additionally, the MUNet model leverages a self-attention encoding block to extract global image features, and by fusing global and local features, it improves segmentation performance. Experimental results demonstrate that the proposed model achieves superior segmentation performance in coronary artery image segmentation tasks, significantly outperforming existing models. By incorporating self-attention and multi-view attention mechanisms, this study provides a novel and efficient modeling approach for medical image segmentation, contributing to the advancement of intelligent medical image analysis.
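The self-attention building block referred to above can be sketched for a single query in plain Python: attention weights are a softmax over scaled query-key dot products, and the output is the weighted sum of the value vectors. The multi-view attention module itself is not specified in the abstract, so only this standard component is shown.

```python
import math

def attention(query, keys, values):
    """Scaled dot-product attention for one query vector.
    keys and values are lists of equal-length vectors."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    m = max(scores)                               # stabilize the softmax
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]
```

A query that aligns strongly with one key receives nearly all of the attention mass, so the output approaches that key's value vector.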
The diagnosis of pancreatic cancer is very important, and the main diagnostic method is pathological analysis of microscopic images of Pap smear slides. Accurate segmentation and classification of these images are two key phases of the analysis. In this paper, we propose a new automatic segmentation and classification method for microscopic images of the pancreas. In the segmentation phase, a multi-feature mean-shift clustering algorithm (MFMS) is first applied to localize nuclei regions. Then, a chain splitting model (CSM) combining flexible mathematical morphology with curvature scale space corner detection is applied to split overlapping cells for better accuracy and robustness. In the classification phase, 4 shape-based features and 138 textural features based on colour spaces of cell nuclei are extracted. To select an optimal feature set and classify different cells, a chain-like agent genetic algorithm (CAGA) combined with a support vector machine (SVM) is proposed. The method was tested on 15 cytology images containing 461 cell nuclei. Experimental results showed that it can automatically segment and classify different types of microscopic images of pancreatic cells with effective results. The mean segmentation accuracy is 93.46%±7.24%. Classification of normal and malignant cells achieves 96.55%±0.99% accuracy, 96.10%±3.08% sensitivity, and 96.80%±1.48% specificity.
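The mean-shift step that localizes nuclei can be sketched in one dimension: each point is repeatedly replaced by the mean of the points within a bandwidth around it, so points converge toward density modes. This flat-kernel, scalar-intensity version is a toy sketch; the paper's MFMS clusters pixels over multiple features.

```python
def mean_shift_1d(points, bandwidth=1.0, iters=50):
    """Flat-kernel mean shift on scalar values: every point's estimate
    moves to the mean of the original points within the bandwidth,
    until estimates settle at density modes."""
    modes = list(points)
    for _ in range(iters):
        modes = [
            sum(p for p in points if abs(p - m) <= bandwidth) /
            max(1, sum(1 for p in points if abs(p - m) <= bandwidth))
            for m in modes
        ]
    return modes
```

Points that converge to the same mode form one cluster, so well-separated intensity groups (e.g. dark nuclei versus bright background) end up at distinct modes.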
For the evaluation of fundus image segmentation, a new evaluation method was proposed to make up for the insufficiency of the traditional evaluation approach, which only considers pixel overlap and neglects the topological structure of the retinal vessels. Mathematical morphology and a thinning algorithm were used to obtain the retinal vascular topology. Three vessel features, namely mutual information, correlation coefficient, and ratio of nodes, were then calculated. These features, computed on the thinned images representing the vessel topology, were used to evaluate retinal image segmentation. Manually labeled images from the STARE database and their eroded versions were used in the experiment. The results showed that mutual information, correlation coefficient, and ratio of nodes can evaluate the quality of retinal vessel segmentation on fundus images through topological structure, and the algorithm is simple. The method is a meaningful supplement to traditional evaluation of retinal vessel segmentation on fundus images.
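Of the three topology-aware features, mutual information is straightforward to sketch: for two flattened binary vessel masks it measures how much knowing one mask reduces uncertainty about the other. Below is a plain-Python version over 0/1 lists; the correlation coefficient and ratio of nodes would be computed analogously from the thinned masks.

```python
import math

def mutual_information(a, b):
    """Mutual information (in bits) between two equal-length binary
    masks, from the empirical joint and marginal distributions."""
    n = len(a)
    mi = 0.0
    for x in (0, 1):
        px = sum(1 for v in a if v == x) / n
        for y in (0, 1):
            py = sum(1 for v in b if v == y) / n
            pxy = sum(1 for u, v in zip(a, b) if u == x and v == y) / n
            if pxy > 0 and px > 0 and py > 0:
                mi += pxy * math.log2(pxy / (px * py))
    return mi
```

Identical masks yield the full entropy of the mask, while statistically independent masks yield zero, which is what makes the feature useful for comparing a segmented skeleton against the ground-truth topology.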
As modes of medical diagnosis and treatment change, medical image quality directly affects doctors' diagnosis and treatment of disease. Intelligent, computer-based image quality control can therefore greatly assist radiographers' imaging work. This paper describes research methods and applications of deep learning image segmentation and classification models, as well as traditional image processing algorithms, for medical image quality evaluation. The results demonstrate that deep learning algorithms are more accurate and efficient than traditional image processing algorithms when effectively trained on large volumes of medical image data, indicating the broad application prospects of deep learning in the medical field. We developed an intelligent quality control system for assisted radiography and successfully deployed it in the Radiology Department of West China Hospital and in other city and county hospitals, effectively verifying the system's feasibility and stability.