Lung segmentation is a prerequisite for computer-aided diagnosis of lung cancer. Traditional segmentation methods based on local low-level features cannot produce correct results when a tumor adjoins the pleura, because the two have similar computed tomography (CT) values. Moreover, a large tumor obscures a substantial part of the lung area, so traditional methods designed for lungs with juxta-pleural nodules less than 3 cm in diameter are not applicable. An active shape model (ASM) that combines a prior shape with low-level features might be appropriate. However, the search step of the conventional ASM is a least-squares optimization, which is sensitive to outlier marker points; as a result, the profile is updated toward the transition area between normal lung tissue and the tumor rather than toward the true lung contour. To solve this problem, we proposed an improved ASM algorithm. First, we identified the outlier marker points by distance and then assigned different search functions to the abnormal and normal marker points. In addition, the search was restricted to a volume of interest (VOI). We selected 30 lung images with juxta-pleural tumors and obtained an overlap rate of 93.6% with the gold standard. The experimental results showed that the improved ASM achieves good segmentation of lungs with juxta-pleural tumors, and its running time is acceptable for clinical use.
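As an illustration of the outlier handling described above, the following Python sketch (assuming NumPy; the distance threshold, damping factor, and function name are hypothetical, and the damped update is a stand-in for the paper's separate search functions) splits marker points into normal and abnormal ones by the distance between their current positions and the positions proposed by the profile search, then applies a different update rule to each group.

import numpy as np

def update_markers(current, proposed, dist_thresh=10.0, damping=0.2):
    # current, proposed: (N, 2) arrays of marker coordinates before and after
    # the profile search along each marker's normal direction.
    current = np.asarray(current, dtype=float)
    proposed = np.asarray(proposed, dtype=float)
    dist = np.linalg.norm(proposed - current, axis=1)
    outlier = dist > dist_thresh      # abnormal markers, e.g. near the tumor/pleura transition
    updated = proposed.copy()         # normal markers: accept the profile-search result
    # abnormal markers: damped update so the prior shape constraint dominates
    updated[outlier] = current[outlier] + damping * (proposed[outlier] - current[outlier])
    return updated, outlier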
Focusing on the variability in shape, location, and size of brain gliomas, a dual-channel three-dimensional (3D) densely connected network is proposed to automatically segment brain glioma on magnetic resonance images. Our method is built on a 3D convolutional neural network framework, and two convolution kernel sizes are adopted in each channel to extract multi-scale features from receptive fields of different sizes. Two densely connected blocks are then constructed in each pathway for feature learning and transmission. Finally, the concatenated features of the two pathways are fed into a classification layer that classifies the voxels of the central region, segmenting the brain tumor automatically. We trained and tested our model on an open brain tumor segmentation challenge dataset and compared our results with those of other models. The experimental results show that our algorithm segments different tumor lesions more accurately, which gives it important application value in the clinical diagnosis and treatment of brain tumors.
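The following is a minimal PyTorch sketch of the dual-pathway idea, assuming kernel sizes 3 and 5 assigned one per pathway and a single small densely connected block per pathway; layer counts, growth rate, and channel numbers are illustrative assumptions, not the paper's configuration.

import torch
import torch.nn as nn

class DenseBlock3D(nn.Module):
    """Small 3D densely connected block: each layer's output is concatenated
    with its input before being passed on."""
    def __init__(self, in_ch, growth, n_layers, k):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm3d(ch), nn.ReLU(inplace=True),
                nn.Conv3d(ch, growth, kernel_size=k, padding=k // 2)))
            ch += growth
        self.out_ch = ch

    def forward(self, x):
        for layer in self.layers:
            x = torch.cat([x, layer(x)], dim=1)
        return x

class DualPathDenseNet3D(nn.Module):
    """Two pathways with different kernel sizes capture receptive fields of
    different scales; their features are concatenated and fed to a voxel-wise
    classification layer."""
    def __init__(self, in_ch=4, n_classes=4):
        super().__init__()
        self.path_a = DenseBlock3D(in_ch, growth=12, n_layers=2, k=3)
        self.path_b = DenseBlock3D(in_ch, growth=12, n_layers=2, k=5)
        fused = self.path_a.out_ch + self.path_b.out_ch
        self.classifier = nn.Conv3d(fused, n_classes, kernel_size=1)

    def forward(self, x):
        return self.classifier(torch.cat([self.path_a(x), self.path_b(x)], dim=1))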
Objective: To systematically summarize recent advances in the application of artificial intelligence (AI) to key components of radiotherapy (RT), explore the integration of technical innovations with clinical practice, and identify current limitations in real-world implementation. Methods: A comprehensive analysis of representative studies from recent years was conducted, focusing on the technical implementation and clinical effectiveness of AI in image reconstruction, automatic delineation of target volumes and organs at risk, intelligent treatment planning, and prediction of RT-related toxicities. Particular attention was given to deep learning models, multimodal data integration, and their roles in enhancing decision-making. Results: AI-based low-dose image enhancement techniques have significantly improved image quality. Automated segmentation methods have increased the efficiency and consistency of contouring. Both knowledge-driven and data-driven planning systems have addressed the limitations of traditional experience-dependent approaches, contributing to higher quality and reproducibility of treatment plans. In addition, toxicity prediction models that incorporate multimodal data enable more accurate, personalized risk assessment, supporting safer and more effective individualized RT. Conclusions: RT is a fundamental modality in cancer treatment, but achieving precise tumor ablation while minimizing damage to surrounding healthy tissue remains a significant challenge. AI has demonstrated considerable value across multiple technical stages of RT, enhancing precision, efficiency, and personalization. Nevertheless, challenges such as limited model generalizability, lack of data standardization, and insufficient clinical validation persist. Future work should emphasize the alignment of algorithmic development with clinical needs to enable the standardized, reliable, and practical application of AI in RT.
Automatic and accurate segmentation of lung parenchyma is essential for computer-assisted diagnosis of lung cancer. In recent years, researchers in the deep learning field have proposed a number of improved lung parenchyma segmentation methods based on U-Net. However, existing segmentation methods ignore the complementary fusion of semantic information between feature maps at different levels and fail to distinguish the importance of different spatial positions and channels within a feature map. To solve this problem, this paper proposes the double scale parallel attention (DSPA) network (DSPA-Net) architecture and introduces a DSPA module and an atrous spatial pyramid pooling (ASPP) module into the encoder-decoder structure. The DSPA module aggregates the semantic information of feature maps at different levels while obtaining accurate spatial and channel information of the feature map with the help of cooperative attention (CA). The ASPP module applies multiple parallel convolution kernels with different dilation rates to obtain feature maps containing multi-scale information under different receptive fields. The two modules handle multi-scale information across feature maps at different levels and within feature maps at the same level, respectively. We conducted experimental verification on a Kaggle competition dataset. The results show that the architecture has clear advantages over current mainstream segmentation networks: the Dice similarity coefficient (DSC) and intersection over union (IoU) reached 0.972 ± 0.002 and 0.945 ± 0.004, respectively. This work achieves automatic and accurate segmentation of lung parenchyma and provides a reference for applying attention mechanisms and multi-scale information to lung parenchyma segmentation.
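A minimal sketch of the ASPP idea described above, assuming PyTorch and 2D convolutions; the dilation rates (1, 6, 12, 18) and channel sizes are illustrative assumptions, not the paper's settings.

import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Atrous spatial pyramid pooling: parallel 3x3 convolutions with different
    dilation rates are applied to the same feature map, and their outputs are
    concatenated and fused by a 1x1 convolution, giving multi-scale context
    from a single encoder level."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
            for r in rates])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))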
Segmentation of organs at risk is an important part of radiotherapy. Current manual segmentation depends on the knowledge and experience of physicians; it is very time-consuming, and it is difficult to ensure accuracy, consistency, and repeatability. Therefore, a deep convolutional neural network (DCNN) is proposed for automatic and accurate segmentation of head and neck organs at risk. Data from 496 patients with nasopharyngeal carcinoma were reviewed; 376 cases were randomly selected for the training set, 60 for the validation set, and 60 for the test set. Using a three-dimensional (3D) U-Net DCNN combined with two loss functions, Dice loss and generalized Dice loss, we trained the automatic segmentation model for the head and neck organs at risk. The evaluation metrics were the Dice similarity coefficient and the Jaccard distance. The average Dice similarity coefficient over the 19 organs at risk was 0.91, and the Jaccard distance was 0.15. The results demonstrate that the 3D U-Net DCNN combined with the Dice loss functions can be effectively applied to automatic segmentation of head and neck organs at risk.
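For reference, a compact PyTorch sketch of the two loss functions named above; the smoothing term and the inverse-squared-volume class weighting follow the commonly used formulations and may differ in detail from the paper's implementation.

import torch

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for one class: pred and target have the same shape,
    with pred in [0, 1] after a sigmoid/softmax."""
    inter = torch.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (torch.sum(pred) + torch.sum(target) + eps)

def generalized_dice_loss(pred, target, eps=1e-6):
    """Generalized Dice loss: per-class contributions are weighted by the
    inverse squared class volume, so small organs are not overwhelmed by
    large ones. pred and target are (N, C, ...) with one channel per class."""
    dims = tuple(range(2, pred.dim()))                  # spatial dimensions
    w = 1.0 / (torch.sum(target, dim=dims) ** 2 + eps)  # (N, C) class weights
    inter = torch.sum(pred * target, dim=dims)
    union = torch.sum(pred + target, dim=dims)
    gd = (2.0 * torch.sum(w * inter, dim=1) + eps) / (torch.sum(w * union, dim=1) + eps)
    return torch.mean(1.0 - gd)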
To achieve accurate localization and quantitative volume measurement of tumors in head and neck CT images, we proposed a level set method based on an augmented gradient. By introducing gradient information into the edge indicator function, the proposed level set model adapts to different intensity variations and achieves accurate tumor segmentation. The segmentation result was then used to calculate tumor volume. For large-volume tumors, the proposed method reduces manual intervention and improves segmentation accuracy, and the calculated tumor volumes are close to the gold standard. The experimental results show that the augmented-gradient-based level set method achieves accurate head and neck tumor segmentation and can provide useful information for computer-aided diagnosis.
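A minimal sketch, assuming NumPy/SciPy, of a commonly used gradient-based edge indicator and of volume computation from a binary mask; the alpha factor is an illustrative stand-in for the paper's augmented gradient term, not its actual formulation.

import numpy as np
from scipy import ndimage

def edge_indicator(image, sigma=1.5, alpha=1.0):
    """Typical edge indicator for level set evolution:
    g = 1 / (1 + alpha * |grad(G_sigma * I)|^2).
    The evolving contour slows down where g is small, i.e. where the smoothed
    image gradient is strong."""
    grad_mag = ndimage.gaussian_gradient_magnitude(image.astype(float), sigma=sigma)
    return 1.0 / (1.0 + alpha * grad_mag ** 2)

def tumor_volume_mm3(mask, spacing):
    """Quantitative volume from a binary segmentation mask and the CT voxel
    spacing (dz, dy, dx) in millimetres."""
    return float(np.count_nonzero(mask)) * float(np.prod(spacing))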
This paper presents an automatic white blood cell segmentation method based on information fusion in a corrected HSI color space. First, the original cell image is converted to the HSI color space. Because the piecewise function used to compute the H component is discontinuous, cytoplasm regions that look visually uniform in the original image become less uniform in this channel. We therefore modified the formula and then extracted information on the nucleus, cytoplasm, red blood cells, and background according to the distribution characteristics of the H, S, and I channels. Using information fusion, we built fusion image Ⅰ and fusion image Ⅱ, which contained only the cytoplasm and a small amount of interference, and extracted the nucleus and the cytoplasm from them, respectively. Finally, we marked the nucleus and cytoplasm regions to obtain the final segmentation result. The simulation results showed that the new white blood cell segmentation algorithm has high accuracy, robustness, and generality.
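For context, the standard RGB-to-HSI conversion with its piecewise H computation is sketched below in NumPy; only the conventional form whose discontinuity motivates the correction is shown, not the paper's corrected formula.

import numpy as np

def rgb_to_hsi(rgb, eps=1e-8):
    """Standard RGB -> HSI conversion. H is computed by a piecewise rule
    (theta vs. 2*pi - theta depending on whether B > G), which is the branch
    responsible for the discontinuity. 'rgb' is a float array in [0, 1] of
    shape (..., 3); returns an array of the same shape with channels H, S, I."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0
    s = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / (r + g + b + eps)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b > g, 2.0 * np.pi - theta, theta)   # piecewise branch
    return np.stack([h, s, i], axis=-1)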
For the evaluation of fundus image segmentation, a new evaluation method was proposed to compensate for the shortcomings of traditional methods, which consider only pixel overlap and neglect the topological structure of the retinal vessels. Mathematical morphology and a thinning algorithm were used to obtain the retinal vascular topology. Three features of the retinal vessels, namely mutual information, correlation coefficient, and ratio of nodes, were then calculated. These features, computed on the thinned images that represent the vessel topology, were used to evaluate retinal image segmentation. The manually labeled images of the STARE database and their eroded versions were used in the experiment. The results showed that these features (mutual information, correlation coefficient, and ratio of nodes) can be used to evaluate the quality of retinal vessel segmentation in fundus images through topological structure, and that the algorithm is simple. The method is a meaningful supplement to traditional evaluation of retinal vessel segmentation in fundus images.
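A sketch of how the three features could be computed on thinned vessel maps, assuming scikit-image, SciPy, and scikit-learn; interpreting "ratio of nodes" as the ratio of skeleton branch-point counts is an assumption about the paper's definition.

import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize
from sklearn.metrics import mutual_info_score

def topology_features(seg, ref):
    """Compute mutual information, correlation coefficient, and node ratio
    between the skeletons of a segmentation and a reference vessel map."""
    sk_seg = skeletonize(seg.astype(bool))
    sk_ref = skeletonize(ref.astype(bool))
    # mutual information and correlation between the two thinned images
    mi = mutual_info_score(sk_ref.ravel().astype(int), sk_seg.ravel().astype(int))
    cc = np.corrcoef(sk_ref.ravel().astype(float), sk_seg.ravel().astype(float))[0, 1]
    # branch points (nodes): skeleton pixels with three or more skeleton neighbours
    kernel = np.ones((3, 3), dtype=int)
    kernel[1, 1] = 0
    def n_nodes(sk):
        neighbours = ndimage.convolve(sk.astype(int), kernel, mode="constant")
        return int(np.count_nonzero(sk & (neighbours >= 3)))
    node_ratio = n_nodes(sk_seg) / max(n_nodes(sk_ref), 1)
    return mi, cc, node_ratio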
Existing retinal vessel segmentation algorithms suffer from several problems: the ends of the main vessels are easily broken, and the central macula and the optic disc boundary are prone to being segmented by mistake. To solve these problems, a novel retinal vessel segmentation algorithm is proposed in this paper that combines vessel contour information with conditional generative adversarial networks. First, non-uniform illumination removal and principal component analysis were used to process the fundus images, which enhanced the contrast between the blood vessels and the background and produced single-scale gray images with rich feature information. Second, dense blocks integrating depthwise separable convolution with offsets and squeeze-and-excitation (SE) blocks were applied in the encoder and decoder to alleviate gradient vanishing or explosion while allowing the network to focus on the feature information of the learning target. Third, a contour loss function was added to improve the network's ability to identify vessel and contour information. Finally, experiments were carried out on the DRIVE and STARE datasets. The area under the receiver operating characteristic curve reached 0.982 5 and 0.987 4, respectively, and the accuracy reached 0.967 7 and 0.975 6, respectively. The experimental results show that the algorithm can accurately distinguish contours from blood vessels and reduce vessel breaks, and it has application value in the diagnosis of clinical ophthalmic diseases.
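As an illustration of the contour loss idea, the following PyTorch sketch up-weights the cross-entropy on boundary pixels extracted from the ground truth with a morphological gradient; the weighting scheme and kernel size are assumptions, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def contour_map(mask, ksize=3):
    """Approximate contour of a binary vessel mask (N, 1, H, W) via a
    morphological gradient implemented with max pooling (dilation - erosion)."""
    pad = ksize // 2
    dil = F.max_pool2d(mask, ksize, stride=1, padding=pad)
    ero = -F.max_pool2d(-mask, ksize, stride=1, padding=pad)
    return (dil - ero).clamp(0, 1)

def contour_loss(pred, target, weight=2.0):
    """Per-pixel binary cross-entropy with extra weight on contour pixels of
    the ground truth, so the network pays more attention to vessel boundaries."""
    per_pixel = F.binary_cross_entropy(pred, target, reduction="none")
    w = 1.0 + weight * contour_map(target)   # up-weight boundary pixels
    return (w * per_pixel).mean()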
Accurate segmentation of pediatric echocardiograms is a challenging task, because the heart changes markedly in size with age and a faster heart rate leads to more blurred boundaries on cardiac ultrasound images than in adults. To address these problems, a dual-decoder network model combining channel attention and scale attention is proposed in this paper. First, an attention-guided decoder with a deep supervision strategy is used to obtain attention maps of the ventricular regions. The generated ventricular attention is then fed back to multiple layers of the network through skip connections to adjust the feature weights generated by the encoder and highlight the left and right ventricular areas. Finally, a scale attention module and a channel attention module are used to enhance the edge features of the left and right ventricles. The experimental results demonstrate that the proposed method achieves an average Dice coefficient of 90.63% on the acquired bilateral ventricular segmentation dataset, outperforming several conventional and state-of-the-art medical image segmentation methods; in particular, it segments the ventricular edges more accurately. The results provide a new solution for pediatric echocardiographic bilateral ventricular segmentation and for subsequent auxiliary diagnosis of congenital heart disease.
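A minimal PyTorch sketch of two ingredients described above: an SE-style channel attention module and the feedback of a decoder-generated ventricular attention map to an encoder feature through a skip connection; the residual (1 + A) fusion rule and the reduction ratio are assumptions rather than the paper's exact design.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """SE-style channel attention: global average pooling followed by a small
    bottleneck MLP produces per-channel weights in (0, 1)."""
    def __init__(self, ch, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(ch, ch // reduction), nn.ReLU(inplace=True),
            nn.Linear(ch // reduction, ch), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))            # (N, C) channel weights
        return x * w.unsqueeze(-1).unsqueeze(-1)   # re-weight feature channels

def attention_feedback(encoder_feat, attention_map):
    """Feed a ventricular attention map (N, 1, h, w) from the attention-guided
    decoder back to an encoder feature map via a skip connection: the map is
    resized to the feature's resolution and used to re-weight it, highlighting
    the left and right ventricular regions."""
    a = F.interpolate(attention_map, size=encoder_feat.shape[2:],
                      mode="bilinear", align_corners=False)
    return encoder_feat * (1.0 + a)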