Medical image fusion integrates the complementary advantages of functional and anatomical images. This article reviews the research progress of multimodal medical image fusion at the feature level. We first describe the principle of feature-level medical image fusion. We then analyze and summarize the applications of fuzzy sets, rough sets, D-S evidence theory, artificial neural networks, principal component analysis, and other fusion methods in medical image fusion. Finally, we point out current problems and future research directions for multimodal medical image fusion.
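As a concrete illustration of one method surveyed above, the following is a minimal sketch of PCA-weighted fusion of two co-registered grayscale images; the function name and the NumPy implementation are our own illustrative assumptions, not code from the works reviewed.

```python
import numpy as np

def pca_fuse(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Fuse two co-registered grayscale images with PCA-derived weights."""
    # Treat the two flattened images as two observed variables.
    data = np.stack([img_a.ravel(), img_b.ravel()]).astype(np.float64)
    cov = np.cov(data)                        # 2x2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    principal = np.abs(eigvecs[:, -1])        # component with largest variance
    w = principal / principal.sum()           # normalize to fusion weights
    return w[0] * img_a + w[1] * img_b
```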
Recently, deep learning has achieved impressive results in medical image tasks. However, these methods usually require large-scale annotated data, and medical images are expensive to annotate, so learning efficiently from limited annotated data remains a challenge. The two commonly used approaches, transfer learning and self-supervised learning, have been little studied on multimodal medical images, so this study proposes a contrastive learning method for multimodal medical images. The method takes images of different modalities from the same patient as positive samples, which effectively increases the number of positive samples during training and helps the model fully learn the similarities and differences of lesions across modalities, thus improving the model's understanding of medical images and its diagnostic accuracy. Because commonly used data augmentation methods are not suitable for multimodal images, this paper also proposes a domain adaptive denormalization method that transforms source domain images with the help of statistical information from the target domain. The method is validated on two multimodal medical image classification tasks: in the microvascular infiltration recognition task, it achieves an accuracy of (74.79 ± 0.74)% and an F1 score of (78.37 ± 1.94)%, an improvement over conventional learning methods; in the brain tumor pathology grading task, it also achieves significant improvements. The results show that the method performs well on multimodal medical images and can provide a reference solution for pre-training on multimodal medical images.
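To make the positive-pair construction concrete, here is a minimal sketch of an InfoNCE-style contrastive loss in which the embeddings of two modalities of the same patient form the positive pairs; the function name, the symmetric loss form, and the temperature value are our illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def cross_modal_info_nce(z_mod1: torch.Tensor,
                         z_mod2: torch.Tensor,
                         temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE loss where z_mod1[i] and z_mod2[i] are embeddings of two
    modalities of the same patient (positives); all other pairs in the
    batch serve as negatives."""
    z1 = F.normalize(z_mod1, dim=1)
    z2 = F.normalize(z_mod2, dim=1)
    logits = z1 @ z2.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    # Symmetric loss over both matching directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```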
The human skeletal muscle drives skeletal movement through contraction. Embedding its functional information into the human morphological framework and constructing a digital twin of skeletal muscle to simulate its physical and physiological functions are of great significance for the study of "virtual physiological humans". Based on relevant literature from home and abroad, this paper first summarizes the technical framework for constructing skeletal muscle digital twins, and then reviews five aspects: skeletal muscle digital twin modeling technology, skeletal muscle data collection technology, simulation analysis technology, simulation platforms, and human medical image databases. On this basis, it points out that further research is needed in areas such as skeletal muscle model generalization, accuracy improvement, and model coupling. The methods for constructing skeletal muscle digital twins summarized in this paper are expected to provide a reference for researchers in this field, and the development directions pointed out can serve as the next focus of research.
Medical image registration is very challenging due to the variety of imaging modalities, variable image quality, wide inter-patient variability, and intra-patient variability with disease progression, together with strict robustness requirements. Inspired by semantic models, especially the recent progress in computer vision tasks under the bag-of-visual-words framework, we set up a novel semantic model to match medical images. Since most medical images have poor contrast, a small dynamic range, and carry only intensity information, traditional visual word models do not perform well on them. Building on the advantages of related work, we propose a novel visual word model, named directional visual words, which performs better on medical images, and we apply it to medical image registration. In our experiments, the critical anatomical structures were first manually specified by experts. We then adopted the directional visual words, a coarse-to-fine spatial pyramid search strategy, and the k-means algorithm to accurately locate the positions of the key structures, and subsequently registered the corresponding images using the areas around these positions. Experiments performed on real cardiac images showed that our method achieves high registration accuracy in specific anatomical areas.
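For orientation, the following is a minimal sketch of the generic bag-of-visual-words pipeline underlying this approach: local descriptors are clustered with k-means to form a vocabulary, and an image region is then represented as a histogram of visual words. The descriptor source and function names are our assumptions; the directional visual word model itself is not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(descriptors: np.ndarray, n_words: int = 64) -> KMeans:
    """Cluster local patch descriptors into a visual-word vocabulary."""
    return KMeans(n_clusters=n_words, n_init=10).fit(descriptors)

def bow_histogram(vocab: KMeans, descriptors: np.ndarray) -> np.ndarray:
    """Represent an image region as a normalized histogram of visual words."""
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)
```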
Objective To develop an automatic diagnostic tool based on deep learning for lumbar spine stability and to validate its diagnostic accuracy. Methods Preoperative lumbar hyper-flexion and hyper-extension X-ray films were collected from 153 patients with lumbar disease. The following 5 key points were marked by 3 orthopedic surgeons: the L4 posteroinferior and anteroinferior angles as well as the L5 posterosuperior, anterosuperior, and posteroinferior angles. The labeling results of each surgeon were preserved independently, yielding three sets of labels. A total of 306 lumbar X-ray films were randomly divided into training (n=156), validation (n=50), and test (n=100) sets in a ratio of 3:1:2. A new neural network architecture, Swin-PGNet, was proposed and trained on the annotated radiographs to automatically locate the lumbar vertebral key points and to calculate the L4-L5 intervertebral Cobb angle and the L4 sliding distance from the predicted key points. The mean error and the intra-class correlation coefficient (ICC) were used as evaluation indices to compare the differences between the surgeons' annotations and Swin-PGNet on three tasks (key point localization, Cobb angle measurement, and lumbar sliding distance measurement). A change in Cobb angle of more than 11° was taken as the criterion for lumbar instability, and a lumbar sliding distance of more than 3 mm as the criterion for lumbar spondylolisthesis; the accuracy of the surgeons' annotations and of Swin-PGNet in judging lumbar instability was compared. Results ① Key point localization: the mean error of Swin-PGNet was (1.407±0.939) mm, versus (3.034±2.612) mm for the surgeons. ② Cobb angle: the mean error of Swin-PGNet was (2.062±1.352)° and that of the surgeons was (3.580±2.338)°. There was no significant difference between Swin-PGNet and the surgeons (P>0.05), but there was a significant difference between surgeons (P<0.05). ③ Lumbar sliding distance: the mean error of Swin-PGNet was (1.656±0.878) mm and that of the surgeons was (1.884±1.612) mm. There was no significant difference between Swin-PGNet and the surgeons, or between surgeons (P>0.05). The accuracy of lumbar instability diagnosis was 75.3% for the surgeons and 84.0% for Swin-PGNet; the accuracy of lumbar spondylolisthesis diagnosis was 70.7% for the surgeons and 71.3% for Swin-PGNet. There was no significant difference between Swin-PGNet and the surgeons, or between surgeons (P>0.05). ④ Consistency of lumbar stability diagnosis: the ICC of the Cobb angle among surgeons was 0.913 [95%CI (0.898, 0.934)] (P<0.05), and the ICC of the lumbar sliding distance was 0.741 [95%CI (0.729, 0.796)] (P<0.05), indicating that the annotations of the three surgeons were consistent. The ICC of the Cobb angle between Swin-PGNet and the surgeons was 0.922 [95%CI (0.891, 0.938)] (P<0.05), and the ICC of the lumbar sliding distance was 0.748 [95%CI (0.726, 0.783)] (P<0.05), indicating that the annotations of Swin-PGNet were consistent with those of the surgeons. Conclusion The automatic diagnostic tool for lumbar instability constructed based on deep learning can accurately and conveniently identify lumbar instability and spondylolisthesis, and can effectively assist clinical diagnosis.
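To illustrate the geometry behind the two measurements, here is a minimal sketch of deriving an intervertebral angle and a sliding distance from predicted landmark coordinates; the landmark pairing and helper names are our illustrative assumptions, not the Swin-PGNet post-processing code.

```python
import numpy as np

def intervertebral_angle(l4_endplate: np.ndarray, l5_endplate: np.ndarray) -> float:
    """Angle in degrees between two endplate lines, each given as two (x, y) points."""
    v1 = l4_endplate[1] - l4_endplate[0]
    v2 = l5_endplate[1] - l5_endplate[0]
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def sliding_distance(l4_post_inf: np.ndarray, l5_post_sup: np.ndarray,
                     l5_endplate: np.ndarray) -> float:
    """Displacement of the L4 posteroinferior corner relative to the L5
    posterosuperior corner, projected onto the L5 superior endplate direction."""
    direction = l5_endplate[1] - l5_endplate[0]
    direction = direction / np.linalg.norm(direction)
    return float(abs(np.dot(l4_post_inf - l5_post_sup, direction)))
```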
An effective medical image enhancement method can not only highlight targets and regions of interest but also suppress background and noise, improving image quality while preserving the original geometric structure and thereby facilitating image-based diagnosis. This article reviews current methods for enhancing subtle structures in medical images, including image sharpening enhancement, rough sets and fuzzy sets, multi-scale geometric analysis, and differential operators. Finally, some commonly used quantitative evaluation criteria for image detail enhancement are given, and further research directions for fine-structure enhancement of medical images are discussed.
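As a concrete instance of the image sharpening enhancement mentioned above, the following is a minimal unsharp-masking sketch; the Gaussian width and gain values are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img: np.ndarray, sigma: float = 2.0, gain: float = 1.5) -> np.ndarray:
    """Sharpen by adding back the high-frequency residual (image minus blur)."""
    img = img.astype(np.float64)
    blurred = gaussian_filter(img, sigma=sigma)
    sharpened = img + gain * (img - blurred)
    # Clamp to the original intensity range to avoid overshoot artifacts.
    return np.clip(sharpened, img.min(), img.max())
```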
To address the issues of difficulty in preserving anatomical structures, low realism of generated images, and loss of high-frequency image information in medical image cross-modal translation, this paper proposes a medical image cross-modal translation method based on diffusion generative adversarial networks. First, an unsupervised translation module is used to convert magnetic resonance imaging (MRI) into pseudo-computed tomography (CT) images. Subsequently, a nonlinear frequency decomposition module is used to extract high-frequency CT images. Finally, the pseudo-CT image is input into the forward process, while the high-frequency CT image is used as a conditional input to guide the reverse process in generating the final CT image. The proposed model is evaluated on the SynthRAD2023 dataset, which is used for CT image generation in radiotherapy planning. The generated brain CT images achieve a Fréchet Inception Distance (FID) score of 33.1597, a structural similarity index measure (SSIM) of 89.84%, a peak signal-to-noise ratio (PSNR) of 35.5965 dB, and a mean squared error (MSE) of 17.8739. The generated pelvic CT images yield an FID score of 33.9516, an SSIM of 91.30%, a PSNR of 34.8707 dB, and an MSE of 17.4658. Experimental results show that the proposed model generates highly realistic CT images while preserving anatomical accuracy as much as possible. The translated CT images can be effectively used in radiotherapy planning, further enhancing diagnostic efficiency.
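The conditioning mechanism described above can be sketched as follows: a toy reverse-process network receives the noisy pseudo-CT together with the high-frequency CT as an extra input channel. The tiny architecture and the omission of timestep embeddings are our simplifications, not the paper's model.

```python
import torch
import torch.nn as nn

class ConditionalDenoiser(nn.Module):
    """Toy reverse-process network: predicts noise from the noisy pseudo-CT
    concatenated with the high-frequency CT conditioning channel."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, noisy_ct: torch.Tensor, high_freq: torch.Tensor) -> torch.Tensor:
        # Stack the noisy image and the high-frequency guide along channels.
        return self.net(torch.cat([noisy_ct, high_freq], dim=1))
```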
In recent years, researchers have introduced methods from many domains into medical image processing to improve its effectiveness and efficiency. The applications of generative adversarial networks (GAN) in medical image processing are evolving rapidly. This paper reviews the state of the art in this area. First, the basic concepts of the GAN are introduced. Then, the applications of the GAN are summarized from the perspectives of medical image denoising, detection, segmentation, synthesis, reconstruction, and classification. Finally, prospects for further research in this area are presented.
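To make the basic GAN concept concrete, here is a minimal adversarial training step in PyTorch; the toy generator and discriminator are generic illustrations, not any of the surveyed medical imaging architectures.

```python
import torch
import torch.nn as nn

# Minimal generator and discriminator over flattened 28x28 images (generic sketch).
G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
bce = nn.BCEWithLogitsLoss()

def gan_step(real: torch.Tensor, opt_g, opt_d):
    """One adversarial update: D separates real from fake, then G fools D."""
    z = torch.randn(real.size(0), 64)
    fake = G(z)
    # Discriminator: push real toward 1 and fake toward 0.
    d_loss = bce(D(real), torch.ones(real.size(0), 1)) + \
             bce(D(fake.detach()), torch.zeros(real.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: make the discriminator output 1 on fakes.
    g_loss = bce(D(fake), torch.ones(real.size(0), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```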
Lung cancer has the highest mortality rate among all malignant tumors. The key to reducing lung cancer mortality is the accurate diagnosis of pulmonary nodules in early-stage lung cancer. Computer-aided diagnostic techniques are considered to have the potential to surpass human experts in the accurate diagnosis of early pulmonary nodules. The detection and classification of pulmonary nodules based on deep learning can continuously improve diagnostic accuracy through self-learning and is an important means of achieving computer-aided diagnosis. First, we systematically introduced the application of two-dimensional convolutional neural networks (2D-CNN), three-dimensional convolutional neural networks (3D-CNN), and faster region-based convolutional neural networks (Faster R-CNN) to the detection of pulmonary nodules. Then we introduced the application of 2D-CNN, 3D-CNN, multi-stream multi-scale convolutional neural networks (MMCNN), deep convolutional generative adversarial networks (DCGAN), and transfer learning to the classification of pulmonary nodules. Finally, we conducted a comprehensive comparative analysis of different deep learning methods for the detection and classification of pulmonary nodules.
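As a minimal illustration of the 3D-CNN models discussed above, the following toy classifier labels a CT patch as nodule versus non-nodule; the architecture and the 32×32×32 input size are our illustrative assumptions.

```python
import torch
import torch.nn as nn

class Nodule3DCNN(nn.Module):
    """Toy 3D-CNN: classify a 32x32x32 CT patch as nodule vs. non-nodule."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8 * 8, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Usage sketch: logits = Nodule3DCNN()(torch.randn(4, 1, 32, 32, 32))
```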
To overcome the difficulty of lung parenchyma segmentation caused by factors such as lung disease and bronchial interference, a segmentation algorithm for the three-dimensional lung parenchyma is presented based on the integration of the surfacelet transform and the pulse coupled neural network (PCNN). First, the three-dimensional computed tomography volume of the lungs is decomposed in the surfacelet transform domain to obtain multi-scale and multi-directional sub-band information, and the edge features are enhanced by filtering the sub-band coefficients with a local modified Laplacian operator. Second, the surfacelet inverse transform is applied and the reconstructed image is fed to the input of the PCNN. Finally, the PCNN is iterated to obtain the final segmentation result. The proposed algorithm was validated on samples from a public dataset. The experimental results demonstrate that it outperforms the three-dimensional surfacelet transform edge detection algorithm, the three-dimensional region growing algorithm, and the three-dimensional U-Net algorithm. It can effectively suppress interference from lung lesions and bronchi and obtain a complete lung parenchyma structure.
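The PCNN iteration in the final step can be illustrated with a simplified two-dimensional sketch: neurons fire when their internal activity exceeds a decaying dynamic threshold, and firing raises the threshold. The parameter values, the 2-D simplification, and the function name are our assumptions rather than the authors' 3-D implementation.

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_segment(stimulus: np.ndarray, n_iter: int = 10,
                 beta: float = 0.2, v_e: float = 20.0,
                 alpha_e: float = 0.3) -> np.ndarray:
    """Simplified PCNN: neurons fire when internal activity exceeds a
    decaying dynamic threshold; fired neurons raise their own threshold."""
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])
    s = stimulus / max(float(stimulus.max()), 1e-12)  # normalized feeding input
    y = np.zeros_like(s)                              # pulse output
    e = np.ones_like(s)                               # dynamic threshold
    fired = np.zeros_like(s, dtype=bool)
    for _ in range(n_iter):
        link = convolve(y, kernel, mode="constant")   # linking input from neighbors
        u = s * (1.0 + beta * link)                   # internal activity
        y = (u > e).astype(float)                     # pulse generation
        e = np.exp(-alpha_e) * e + v_e * y            # threshold decay and refresh
        fired |= y.astype(bool)
    return fired
```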