Objective To develop a deep learning-based neural network architecture to assist automatic segmentation of knee CT images, and to validate its accuracy. Methods A database of knee CT scans was established, and the bony structures were manually annotated. A deep learning neural network architecture was developed independently, and the labeled database was used to train and test the network. The Dice coefficient, average surface distance (ASD), and Hausdorff distance (HD) were calculated to evaluate the accuracy of the neural network. The time of automatic segmentation and manual segmentation was compared. Five orthopedic experts were invited to score the automatic and manual segmentation results on a Likert scale, and the scores of the two methods were compared. Results The automatic segmentation achieved high accuracy. The Dice coefficient, ASD, and HD of the femur were 0.953±0.037, (0.076±0.048) mm, and (3.101±0.726) mm, respectively; those of the tibia were 0.950±0.092, (0.083±0.101) mm, and (2.984±0.740) mm, respectively. The time of automatic segmentation was significantly shorter than that of manual segmentation [(2.46±0.45) minutes vs. (64.73±17.07) minutes; t=36.474, P<0.001]. The clinical scores of the femur were 4.3±0.3 in the automatic segmentation group and 4.4±0.2 in the manual segmentation group, and those of the tibia were 4.5±0.2 and 4.5±0.3, respectively, with no significant difference between the two groups (t=1.753, P=0.085; t=0.318, P=0.752). Conclusion Deep learning-based automatic segmentation of knee CT images has high accuracy and enables rapid segmentation and three-dimensional reconstruction. This method will promote the development of new technology-assisted techniques in total knee arthroplasty.
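The overlap metric reported above can be made concrete with a short sketch. The function name below is illustrative, not from the paper; it computes the standard Dice coefficient between two binary masks:

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice similarity between two binary masks (illustrative helper)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Toy example: two overlapping square "bone" masks
a = np.zeros((8, 8), dtype=np.uint8); a[2:6, 2:6] = 1
b = np.zeros((8, 8), dtype=np.uint8); b[3:7, 3:7] = 1
d = dice_coefficient(a, b)
```

ASD and HD are computed analogously but over the surface voxels of each mask rather than full volumes.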
For the evaluation of fundus image segmentation, a new evaluation method was proposed to remedy the insufficiency of the traditional evaluation method, which considers only pixel overlap and neglects the topology of the retinal vessels. Mathematical morphology and a thinning algorithm were used to obtain the retinal vascular topology. Three features of the retinal vessels were then calculated: mutual information, correlation coefficient, and ratio of nodes. These features of the thinned images, taken as the topology of the blood vessels, were used to evaluate retinal image segmentation. Manually labeled images from the STARE database and their eroded versions were used in the experiment. The results showed that these features could be used to evaluate the segmentation quality of retinal vessels on fundus images through their topology, and that the algorithm was simple. The method is a meaningful supplement to traditional segmentation evaluation of retinal vessels on fundus images.
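Two of the three features can be sketched directly; the exact definitions used in the paper may differ, so treat these numpy implementations as illustrative assumptions applied to already-thinned binary vessel images:

```python
import numpy as np

def mutual_information(img1, img2):
    """Mutual information (in bits) between two binary thinned vessel images,
    from their joint 2x2 histogram. A hedged sketch of the first feature."""
    joint = np.histogram2d(img1.ravel(), img2.ravel(), bins=2)[0]
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0  # avoid log(0) on empty cells
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def correlation(img1, img2):
    """Pearson correlation coefficient between two thinned images."""
    return float(np.corrcoef(img1.ravel(), img2.ravel())[0, 1])
```

Identical topologies give maximal mutual information and correlation 1; unrelated images drive both toward 0. The node ratio would additionally count skeleton branch points, which depends on the neighborhood definition chosen.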
In order to overcome the difficulty of lung parenchyma segmentation caused by factors such as lung disease and bronchial interference, a three-dimensional lung parenchyma segmentation algorithm is presented based on the integration of the surfacelet transform and a pulse coupled neural network (PCNN). First, the three-dimensional computed tomography image of the lungs is decomposed in the surfacelet transform domain to obtain multi-scale and multi-directional sub-band information. The edge features are then enhanced by filtering the sub-band coefficients with a local modified Laplacian operator. Second, the inverse surfacelet transform is applied and the reconstructed image is fed to the input of the PCNN. Finally, the PCNN is iterated to obtain the final segmentation result. The proposed algorithm is validated on samples from a public dataset. The experimental results demonstrate that the proposed algorithm outperforms the three-dimensional surfacelet transform edge detection algorithm, the three-dimensional region growing algorithm, and the three-dimensional U-Net algorithm. It can effectively suppress interference from lung lesions and bronchi, and obtain a complete lung parenchyma structure.
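The PCNN iteration at the core of the pipeline can be sketched in 2-D for intuition. This is a minimal simplified PCNN (feeding input, linking input, dynamic threshold) with illustrative parameter values; the paper's surfacelet-domain enhancement and 3-D formulation are omitted:

```python
import numpy as np

def pcnn_segment(img, beta=0.2, alpha_theta=0.2, v_theta=20.0, n_iter=10):
    """Minimal pulse coupled neural network (PCNN) iteration for binary
    segmentation. Bright neurons fire first as the dynamic threshold decays,
    and firing spreads to similar neighbours through the linking term."""
    F = img.astype(float)                    # feeding input = pixel intensity
    theta = np.full(F.shape, 255.0)          # dynamic threshold
    Y = np.zeros_like(F)                     # pulse output
    fired = np.zeros(F.shape, dtype=bool)
    kernel = np.array([[0.5, 1, 0.5], [1, 0, 1], [0.5, 1, 0.5]])
    for _ in range(n_iter):
        # linking input: weighted sum of neighbouring pulses (zero padding)
        Yp = np.pad(Y, 1)
        L = sum(kernel[i, j] * Yp[i:i + F.shape[0], j:j + F.shape[1]]
                for i in range(3) for j in range(3))
        U = F * (1.0 + beta * L)             # internal activity
        Y = (U > theta).astype(float)        # neurons fire when U exceeds theta
        fired |= Y.astype(bool)
        theta = theta * np.exp(-alpha_theta) + v_theta * Y  # decay + refractory bump
    return fired
```

On a synthetic image with a bright square on a dark background, only the square's neurons fire within the iteration budget, yielding the segmentation mask.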
Medical image segmentation based on deep learning has become a powerful tool in the field of medical image processing. Due to the special nature of medical images, deep learning-based segmentation algorithms face problems such as sample imbalance, blurred edges, false positives, and false negatives. To address these problems, researchers mostly improve the network structure, but rarely improve the non-structural components. The loss function is an important part of deep learning-based segmentation methods. Improving the loss function can improve the segmentation performance of the network at the root, and because the loss function is independent of the network structure, it can be used in a plug-and-play manner across network models and segmentation tasks. Starting from the difficulties in medical image segmentation, this paper first introduces loss functions and improvement strategies that address sample imbalance, blurred edges, false positives, and false negatives. It then analyzes the difficulties encountered in current loss function improvement. Finally, future research directions are discussed. This paper provides a reference for the reasonable selection, improvement, or design of loss functions, and guides follow-up research on loss functions.
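Two loss functions commonly cited for the problems named above can be sketched in numpy (the survey covers many more variants; these generic forms are shown only as examples, not as any specific paper's formulation):

```python
import numpy as np

def dice_loss(prob, target, eps=1e-6):
    """Soft Dice loss: robust to sample imbalance because it is normalised
    by foreground size rather than image size."""
    inter = (prob * target).sum()
    return 1.0 - (2.0 * inter + eps) / (prob.sum() + target.sum() + eps)

def focal_loss(prob, target, gamma=2.0):
    """Focal loss: down-weights easy pixels by (1 - p_t)^gamma so training
    focuses on hard pixels, often those near blurred edges."""
    p_t = np.where(target == 1, prob, 1.0 - prob)
    p_t = np.clip(p_t, 1e-6, 1.0)
    return float((-((1.0 - p_t) ** gamma) * np.log(p_t)).mean())
```

Both reach zero for a perfect prediction and grow as predictions degrade, and neither depends on the network architecture, which is the plug-and-play property discussed above.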
To address the challenges faced by current brain midline segmentation techniques, such as insufficient accuracy and poor segmentation continuity, this paper proposes a deep learning network model based on a two-stage framework. In the first stage, prior knowledge of the feature consistency of adjacent brain midline slices under normal and pathological conditions is utilized. Associated midline slices are selected through slice similarity analysis, and a novel feature weighting strategy is adopted to collaboratively fuse the overall change characteristics and spatial information of these associated slices, thereby enhancing the feature representation of the brain midline in the intracranial region. In the second stage, an optimal path search strategy for the brain midline is applied to the network output probability map, which effectively addresses the problem of discontinuous midline segmentation. The proposed method achieved satisfactory results on the CQ500 dataset provided by the Center for Advanced Research in Imaging, Neurosciences and Genomics, New Delhi, India. The Dice similarity coefficient (DSC), Hausdorff distance (HD), average symmetric surface distance (ASSD), and normalized surface Dice (NSD) were 67.38 ± 10.49, 24.22 ± 24.84, 1.33 ± 1.83, and 0.82 ± 0.09, respectively. The experimental results demonstrate that the proposed method can fully utilize prior knowledge of medical images to achieve accurate segmentation of the brain midline, providing valuable assistance for subsequent identification of the brain midline by clinicians.
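The second-stage idea, finding a continuous path through a probability map, is typically solved by dynamic programming. The sketch below assumes a per-row column choice with moves limited to one column per row, which guarantees continuity; the paper's exact search may differ:

```python
import numpy as np

def optimal_midline_path(prob_map):
    """Dynamic-programming search for a continuous top-to-bottom path
    maximising summed probability; column moves are limited to +/-1 per row.
    An illustrative sketch of the optimal path search strategy."""
    H, W = prob_map.shape
    score = prob_map.astype(float).copy()
    back = np.zeros((H, W), dtype=int)
    for r in range(1, H):
        for c in range(W):
            lo, hi = max(0, c - 1), min(W, c + 2)
            prev = int(np.argmax(score[r - 1, lo:hi])) + lo
            back[r, c] = prev                 # remember best predecessor
            score[r, c] += score[r - 1, prev]
    # backtrack from the best end column to recover the path
    path = [int(np.argmax(score[-1]))]
    for r in range(H - 1, 0, -1):
        path.append(int(back[r, path[-1]]))
    return path[::-1]                         # column index for each row
```

Because every step is constrained to adjacent columns, the recovered midline can never break, which is exactly the continuity property the second stage targets.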
Colorectal polyps are important early markers of colorectal cancer, and their early detection is crucial for cancer prevention. Although existing polyp segmentation models have achieved certain results, they still face challenges such as diverse polyp morphology, blurred boundaries, and insufficient feature extraction. To address these issues, this study proposes a parallel coordinate fusion network (PCFNet), aiming to improve the accuracy and robustness of polyp segmentation. PCFNet integrates parallel convolutional modules and a coordinate attention mechanism, preserving global feature information while precisely capturing detailed features, thereby effectively segmenting polyps with complex boundaries. Experimental results on Kvasir-SEG and CVC-ClinicDB demonstrate the outstanding performance of PCFNet across multiple metrics. Specifically, on the Kvasir-SEG dataset, PCFNet achieved an F1-score of 0.8974 and a mean intersection over union (mIoU) of 0.8358; on the CVC-ClinicDB dataset, it attained an F1-score of 0.9398 and an mIoU of 0.8923. Compared with other methods, PCFNet shows improvements across all performance metrics, particularly in multi-scale feature fusion and spatial information capture, demonstrating its novelty. The proposed method provides a more reliable AI-assisted diagnostic tool for early colorectal cancer screening.
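The coordinate attention idea can be illustrated without a deep learning framework. The sketch below keeps only the direction-wise pooling and reweighting; the learned 1×1 convolutions of the full module (and anything specific to PCFNet) are deliberately omitted, so this is a structural illustration only:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def coordinate_attention(x):
    """Simplified coordinate attention over a feature map x of shape
    (C, H, W): pool separately along each spatial axis so the attention
    weights retain positional (row/column) information, then reweight."""
    pool_h = x.mean(axis=2, keepdims=True)   # (C, H, 1): per-row descriptor
    pool_w = x.mean(axis=1, keepdims=True)   # (C, 1, W): per-column descriptor
    a_h = sigmoid(pool_h)                    # row attention
    a_w = sigmoid(pool_w)                    # column attention
    return x * a_h * a_w                     # broadcast reweighting
```

In contrast to global average pooling, which collapses all spatial information into one scalar per channel, the two directional descriptors let the attention map point at where along each axis the salient features lie, which is what helps with blurred polyp boundaries.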
To address the issue of a large number of network parameters and substantial floating-point operations in deep learning networks applied to image segmentation for cardiac magnetic resonance imaging (MRI), this paper proposes a lightweight dilated parallel convolution U-Net (DPU-Net) to decrease the quantity of network parameters and the number of floating-point operations. Additionally, a multi-scale adaptation vector knowledge distillation (MAVKD) training strategy is employed to extract latent knowledge from the teacher network, thereby enhancing the segmentation accuracy of DPU-Net. The proposed network adopts a distinctive scheme of convolutional channel variation to reduce the number of parameters, combined with residual blocks and dilated convolutions to alleviate the gradient explosion and spatial information loss that the parameter reduction might cause. The research findings indicate that this network achieves considerable improvements in reducing the number of parameters and improving floating-point efficiency. When applied to the public dataset of the automatic cardiac diagnosis challenge (ACDC), the Dice coefficient reaches 91.26%. The results validate the effectiveness of the proposed lightweight network and knowledge distillation strategy, providing a reliable approach to lightweight design for deep learning in the field of medical image segmentation.
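The generic distillation objective underlying such a strategy can be sketched as a temperature-softened KL divergence between teacher and student class distributions. Note that MAVKD adds multi-scale adaptation vectors on top of this; only the base objective is modelled here:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)    # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) with softened distributions; the T*T factor
    keeps gradient magnitudes comparable across temperatures."""
    p = softmax(teacher_logits, T)
    q = np.clip(softmax(student_logits, T), 1e-12, None)
    return float((p * np.log(p / q)).sum(axis=-1).mean() * T * T)
```

A higher temperature flattens the teacher's distribution, exposing the "dark knowledge" in the relative probabilities of non-target classes, which is what the student network learns from beyond the hard labels.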
This paper presents an automatic segmentation method for white blood cells based on information fusion in a corrected HSI space. First, the original cell image is converted to the HSI colour space. Because the piecewise transformation formula of the H component is discontinuous, visually uniform cytoplasm areas in the original image lose uniformity in this channel. We therefore modified the formula, and then extracted information on the nucleus, cytoplasm, red blood cells, and background region according to the distribution characteristics of the H, S, and I channels, using the theory and methods of information fusion to build fusion image Ⅰ and fusion image Ⅱ, which contained only the cytoplasm and a small amount of interference, and to extract the nucleus and cytoplasm respectively. Finally, we marked the nucleus and cytoplasm regions and obtained the final segmentation result. Simulation results showed that the new segmentation algorithm for white blood cells has high accuracy, robustness, and universality.
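The conversion underlying the method is the textbook RGB-to-HSI transform, whose H component is indeed piecewise (the arccos result is reflected when B > G, creating the discontinuity the paper corrects). The standard form, not the paper's corrected one, is sketched below:

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Textbook RGB -> HSI conversion for values in [0, 1].
    H in radians, S and I in [0, 1]. Note the piecewise branch on B > G,
    the source of the discontinuity discussed above."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0
    s = 1.0 - np.minimum(np.minimum(r, g), b) / np.clip(i, 1e-12, None)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b))
    theta = np.arccos(np.clip(num / np.clip(den, 1e-12, None), -1.0, 1.0))
    h = np.where(b > g, 2.0 * np.pi - theta, theta)
    return np.stack([h, s, i], axis=-1)
```

For pure red this yields H = 0, S = 1, I = 1/3; for achromatic pixels S collapses to 0 and H is undefined, which is why segmentation decisions in low-saturation regions rely on the I channel instead.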
Intelligent medical image segmentation methods have been rapidly developed and applied, but a significant challenge is domain shift: segmentation performance degrades due to distribution differences between the source domain and the target domain. This paper proposed an unsupervised end-to-end domain adaptation method for medical image segmentation based on the generative adversarial network (GAN). A network training and adjustment model was designed, consisting of a segmentation network and a discriminator network. In the segmentation network, the residual module was used as the basic module to increase feature reusability and reduce model optimization difficulty. The segmentation network further learned cross-domain features at the image feature level with the help of the discriminator network, trained with a combination of segmentation loss and adversarial loss. The discriminator network was a convolutional neural network that used source-domain labels to distinguish whether a segmentation result came from the source domain or the target domain. The whole training process was unsupervised with respect to the target domain. The proposed method was tested on a public dataset of knee magnetic resonance (MR) images and a clinical dataset from our cooperative hospital. With our method, the mean Dice similarity coefficient (DSC) of the segmentation results increased by 2.52% and 6.10% over the classical feature-level and image-level domain adaptation methods, respectively. The proposed method effectively improves the domain adaptation ability of the segmentation method, significantly improves the segmentation accuracy of the tibia and femur, and better solves the domain shift problem in MR image segmentation.
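The combined objective described above, a supervised segmentation loss plus an adversarial term, can be sketched abstractly. The function names and the trade-off weight `lam` are illustrative assumptions, not values from the paper:

```python
import numpy as np

def bce(pred, target, eps=1e-12):
    """Binary cross-entropy over probability maps."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(-(target * np.log(pred)
                   + (1.0 - target) * np.log(1.0 - pred)).mean())

def segmentation_network_loss(seg_prob_src, label_src, disc_out_tgt, lam=0.01):
    """Supervised segmentation loss on source-domain data plus an
    adversarial term that pushes target-domain outputs to fool the
    discriminator (i.e., to be classified as 'source', label 1)."""
    seg_loss = bce(seg_prob_src, label_src)
    adv_loss = bce(disc_out_tgt, np.ones_like(disc_out_tgt))
    return seg_loss + lam * adv_loss
```

Minimising the adversarial term drives the segmentation network to produce target-domain outputs indistinguishable from source-domain ones, which is how cross-domain features are learned without target labels.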
With the change of medical diagnosis and treatment modes, the quality of medical images directly affects doctors' diagnosis and treatment of disease. Therefore, realizing intelligent image quality control by computer will substantially assist radiographers' imaging work. This paper describes research methods and applications of deep learning image segmentation and image classification models, as well as traditional image processing algorithms, applied to medical image quality evaluation. The results demonstrate that deep learning algorithms are more accurate and efficient than traditional image processing algorithms when effectively trained on large medical image datasets, which indicates the broad application prospects of deep learning in the medical field. This work developed an intelligent quality control system for assisted filming and successfully applied it in the Radiology Department of West China Hospital and other city- and county-level hospitals, effectively verifying the feasibility and stability of the quality control system.