To make adaptive-bandwidth mean shift segment brain tumors in magnetic resonance imaging (MRI) more accurately, this paper presents an improved mean shift method. First, we exploited the spatial characteristics of the brain image to eliminate the influence of the skull on segmentation; then, based on the spatial agglomeration of the different brain tissues (including tumor), we used edge points to obtain the optimal initial mean and a separately adapted bandwidth for each tissue, thereby improving the accuracy of tumor segmentation. Experimental results showed that, compared with the fixed-bandwidth mean shift method, the proposed method segments the tumor more accurately.
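The abstract above does not give the algorithmic details, but the core iteration it builds on is standard mean shift: repeatedly move an estimate toward the local mean of the samples inside a bandwidth window. A minimal 1-D sketch (all function names, starting values, and bandwidths here are illustrative assumptions, not the paper's actual procedure) shows why a bandwidth matched to each cluster's spread matters:

```python
import numpy as np

def mean_shift_mode(data, start, bandwidth, tol=1e-4, max_iter=100):
    """Shift `start` toward the nearest density mode of 1-D `data`
    using a flat (uniform) kernel of the given bandwidth."""
    m = start
    for _ in range(max_iter):
        window = data[np.abs(data - m) <= bandwidth]  # samples inside the kernel
        if window.size == 0:
            break
        new_m = window.mean()                         # the mean shift step
        if abs(new_m - m) < tol:
            return new_m
        m = new_m
    return m

# Two intensity clusters standing in for "normal tissue" and "tumor" pixels.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(50, 3, 500), rng.normal(120, 5, 200)])

# One wide fixed bandwidth can blur narrow clusters; choosing a bandwidth
# per cluster (the adaptive idea) lets each mode be located cleanly.
mode_tissue = mean_shift_mode(data, start=55, bandwidth=10)
mode_tumor = mean_shift_mode(data, start=110, bandwidth=15)
```

Each call converges to the mode of the cluster its start point falls in (near 50 and near 120 for this synthetic data); the paper's contribution is choosing the start points (from edge points) and the per-tissue bandwidths automatically.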
In the clinical diagnosis of brain tumors, accurate segmentation based on multimodal magnetic resonance imaging (MRI) is essential for determining tumor type, extent, and spatial boundaries. However, differences in imaging mechanisms, information emphasis, and feature distributions among multimodal MRI data pose significant challenges for precise tumor modeling and fusion-based segmentation. In recent years, fusion neural networks have provided effective strategies for integrating multimodal information and have become a major research focus in multimodal brain tumor segmentation. This review systematically summarized studies on fusion neural networks for multimodal brain tumor segmentation published since 2019. First, the fundamental concepts of multimodal data fusion and model fusion were introduced. Then, existing methods were categorized into three types according to fusion level: prediction fusion models, feature fusion models, and stage fusion models; their structural characteristics and segmentation performance were comparatively analyzed. Finally, current limitations were discussed, and potential development trends of fusion neural networks for multimodal MRI brain tumor segmentation were summarized. This review aims to provide a reference for the design and optimization of future multimodal brain tumor segmentation models.
The rapidly growing volume of high-resolution medical images provides a wealth of useful information for cancer diagnosis and plays an essential role in assisting radiologists in making more objective decisions. To utilize this information accurately and efficiently, researchers have focused on computer-aided diagnosis (CAD) in cancer imaging. In recent years, deep learning, a state-of-the-art machine learning technique, has driven great progress in this field. This review covers reports on deep-learning-based CAD systems in cancer imaging. We found that deep learning has outperformed conventional machine learning techniques in both tumor segmentation and classification, and that it may bring about a breakthrough in CAD of cancer, with great prospects for future clinical practice.
To achieve accurate localization and quantitative volume measurement of tumors in head and neck CT images, we propose a level set method based on an augmented gradient. By introducing gradient information into the edge indicator function, the proposed level set model adapts to different intensity variations and achieves accurate tumor segmentation. The segmentation results were used to calculate tumor volume. For large tumors, the proposed level set method reduces manual intervention and improves segmentation accuracy, and the calculated tumor volumes are close to the gold standard. The experimental results show that the augmented-gradient level set method achieves accurate head and neck tumor segmentation and can provide useful information for computer-aided diagnosis.
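For context on the "edge indicator function" this abstract modifies: in classical edge-based level set models, the indicator g is a decreasing function of the image gradient magnitude that slows the evolving contour near edges. The sketch below shows only that conventional form on a synthetic image (the weighting constant and the toy "tumor" disc are illustrative assumptions; the paper's augmented variant is not reproduced here):

```python
import numpy as np

def edge_indicator(image, weight=0.01):
    """Conventional edge-stopping function g = 1 / (1 + w * |grad I|^2):
    close to 1 in flat regions, close to 0 where the gradient is strong,
    so an edge-based level set flow stalls at object boundaries."""
    gy, gx = np.gradient(image.astype(float))
    grad_mag2 = gx**2 + gy**2
    return 1.0 / (1.0 + weight * grad_mag2)

# Synthetic "tumor": a bright disc of radius 15 on a dark 64x64 background.
yy, xx = np.mgrid[0:64, 0:64]
img = (((xx - 32)**2 + (yy - 32)**2) < 15**2).astype(float) * 100.0

g = edge_indicator(img)
# g stays at 1.0 inside flat regions (e.g. the disc center at [32, 32])
# and drops sharply at the disc boundary (e.g. at [32, 17]).
```

The abstract's contribution is to augment g with additional gradient information so the stopping behavior adapts to different intensity variations, rather than relying on this single fixed form.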