Recently, deep learning has achieved impressive results in medical image tasks. However, these methods usually require large-scale annotated data, and medical images are expensive to annotate, so learning efficiently from limited annotated data remains a challenge. Transfer learning and self-supervised learning are the two methods most commonly used to address this, but both have been little studied on multimodal medical images, so this study proposes a contrastive learning method for multimodal medical images. The method takes images of different modalities from the same patient as positive samples, which effectively increases the number of positive samples during training and helps the model fully learn the similarities and differences of lesions across modalities, thereby improving the model's understanding of medical images and its diagnostic accuracy. Because commonly used data augmentation methods are not suitable for multimodal images, this paper also proposes a domain adaptive denormalization method that transforms source-domain images with the help of statistical information from the target domain. The method is validated on two multimodal medical image classification tasks: in the microvascular invasion recognition task, it achieves an accuracy of (74.79 ± 0.74)% and an F1 score of (78.37 ± 1.94)%, improvements over other conventional learning methods; in the brain tumor pathology grading task, it also achieves significant improvements. The results show that the method performs well on multimodal medical images and can provide a reference solution for pre-training on multimodal medical images.
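A minimal sketch of the cross-modal contrastive idea described above, in PyTorch: embeddings of two modalities from the same patient form the positive pair of an InfoNCE-style loss, and a simple re-normalization with target-domain statistics stands in for one plausible reading of the domain adaptive denormalization step. The function names, batch layout, and constants are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def cross_modal_info_nce(z_a: torch.Tensor, z_b: torch.Tensor, tau: float = 0.1):
    """z_a, z_b: (N, D) embeddings of modality A / B for the same N patients.
    Row i of z_a and row i of z_b form a positive pair; all other rows are negatives."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / tau                 # (N, N) cosine-similarity matrix
    targets = torch.arange(z_a.size(0), device=z_a.device)
    # Symmetric loss: A->B and B->A retrieval of the matching patient.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def domain_adaptive_denormalize(x_src: torch.Tensor, mu_t: float, sigma_t: float):
    """Re-normalize a source-domain image with target-domain statistics
    (one plausible reading of the paper's 'domain adaptive denormalization')."""
    mu_s, sigma_s = x_src.mean(), x_src.std().clamp_min(1e-6)
    return (x_src - mu_s) / sigma_s * sigma_t + mu_t
```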
China has one of the highest incidences of esophageal cancer in the world. Early detection, accurate diagnosis, and treatment of esophageal cancer are critical for improving patients' prognosis and survival. Benefiting from the accumulation of medical images and advances in artificial intelligence, machine learning has become widely used in cancer research. This review therefore summarizes the learning models, image types, data types, and application efficiency of current machine learning techniques in esophageal cancer. The major challenges of medical image machine learning for esophageal cancer are identified, and solutions are proposed. Potential future directions of machine learning in esophageal cancer diagnosis and treatment are discussed, with a focus on the possibility of establishing a link between medical images and molecular mechanisms. On this foundation, the general rules of machine learning application in the medical field are summarized and forecast. By drawing on the advanced achievements of machine learning in other cancers and focusing on interdisciplinary cooperation, esophageal cancer research can be effectively promoted.
The dramatically increasing number of high-resolution medical images provides a great deal of useful information for cancer diagnosis and plays an essential role in assisting radiologists by offering more objective decisions. To utilize this information accurately and efficiently, researchers are focusing on computer-aided diagnosis (CAD) in cancer imaging. In recent years, deep learning, as a state-of-the-art machine learning technique, has contributed to great progress in this field. This review covers reports on deep learning-based CAD systems in cancer imaging. We found that deep learning has outperformed conventional machine learning techniques in both tumor segmentation and classification, and that the technique may bring about a breakthrough in CAD of cancer, with great prospects for future clinical practice.
To overcome the difficulty of lung parenchyma segmentation caused by factors such as lung disease and bronchial interference, a segmentation algorithm for three-dimensional lung parenchyma is presented based on the integration of the surfacelet transform and the pulse coupled neural network (PCNN). First, the three-dimensional computed tomography of the lungs is decomposed in the surfacelet transform domain to obtain multi-scale and multi-directional sub-band information, and the edge features are then enhanced by filtering the sub-band coefficients with a local modified Laplacian operator. Second, the surfacelet inverse transform is applied and the reconstructed image is fed back to the input of the PCNN. Finally, the iteration process of the PCNN is carried out to obtain the final segmentation result. The proposed algorithm is validated on samples from a public dataset. The experimental results demonstrate that it outperforms the three-dimensional surfacelet transform edge detection algorithm, the three-dimensional region growing algorithm, and the three-dimensional U-Net algorithm. It can effectively suppress interference from lung lesions and bronchi and obtain a complete structure of the lung parenchyma.
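Since the surfacelet transform has no widely available reference implementation, the sketch below illustrates only the PCNN stage, in a textbook-style 2-D simplification (the paper works in 3-D): each neuron's internal activity combines the pixel stimulus with linking input from firing neighbours, and a decaying dynamic threshold controls pulse emission. All constants are illustrative, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_segment(S, n_iter=10, beta=0.2, alpha_theta=0.2, V_theta=20.0):
    """S: 2-D image normalized to [0, 1]; returns the cumulative binary firing map."""
    W = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])              # linking weights to the 8 neighbours
    Y = np.zeros_like(S)                         # pulse output
    theta = np.ones_like(S)                      # dynamic threshold
    fired = np.zeros_like(S, dtype=bool)
    for _ in range(n_iter):
        F_ = S                                   # feeding input (simplified: stimulus only)
        L = convolve(Y, W, mode='constant')      # linking input from firing neighbours
        U = F_ * (1.0 + beta * L)                # internal activity
        Y = (U > theta).astype(float)            # neurons fire when activity exceeds threshold
        theta = np.exp(-alpha_theta) * theta + V_theta * Y  # raise threshold after firing
        fired |= Y.astype(bool)
    return fired
```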
With the development of image-guided surgery and radiotherapy, the demand for medical image registration is growing and the challenges are greater. In recent years, deep learning, especially deep convolutional neural networks, has achieved excellent results in medical image processing, and its application to registration has developed rapidly. This paper reviews the research progress of deep learning-based medical image registration in China and abroad by category of technical method, including similarity measurement with an iterative optimization strategy, direct estimation of transform parameters, and others. The challenges of deep learning in medical image registration are then analyzed, and possible solutions and open research directions are proposed.
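As a concrete instance of the "direct estimation of transform parameters" category mentioned above, the sketch below regresses a 2-D affine matrix from a stacked moving/fixed image pair and resamples the moving image differentiably. The architecture and loss are generic assumptions for illustration, not any specific reviewed network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AffineRegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(32, 6)               # 6 affine parameters
        self.fc.weight.data.zero_()              # initialize to the identity transform
        self.fc.bias.data = torch.tensor([1., 0., 0., 0., 1., 0.])

    def forward(self, moving, fixed):
        # Regress the affine matrix from the stacked pair, then warp the moving image.
        theta = self.fc(self.features(torch.cat([moving, fixed], 1)).flatten(1))
        grid = F.affine_grid(theta.view(-1, 2, 3), moving.size(), align_corners=False)
        warped = F.grid_sample(moving, grid, align_corners=False)
        return warped, theta

# Unsupervised training would minimize an image similarity loss, e.g.:
# warped, _ = net(moving, fixed); loss = F.mse_loss(warped, fixed)
```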
To locate nuclei in hematoxylin-eosin (HE) stained section images more simply, efficiently, and accurately, a new method based on distance estimation is proposed in this paper, offering a new perspective on locating nuclei in clumped regions. Unlike mainstream methods, the proposed method avoids the operations of searching for and combining single-nucleus regions, and can locate nuclei directly in a full image. Furthermore, by combining distance estimation built on the matrix sequence of distance rough estimating (MSDRE) with the fact that the center of a convex region must be the point farthest from its boundary, it can fix the positions of nuclei quickly and precisely. The method achieves high accuracy and efficiency in experiments, with a precision of 95.26% and a speed of 1.54 seconds per thousand nuclei, outperforming mainstream methods on nucleus clump samples. The proposed method increases the efficiency of nucleus localization while maintaining accuracy, which is helpful for automatic HE image analysis systems by improving real-time performance and promoting the application of related research.
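The geometric fact the method exploits, that the centre of a roughly convex nucleus is the interior point farthest from the boundary, can be illustrated as below. MSDRE itself is the paper's own estimator; here the exact Euclidean distance transform plus per-region local-maximum detection stands in for it as a rough stand-in.

```python
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max

def locate_nuclei(binary_mask, min_distance=5):
    """binary_mask: 2-D bool array of segmented nuclear regions (possibly clumped).
    Returns (row, col) coordinates of estimated nucleus centres."""
    dist = ndimage.distance_transform_edt(binary_mask)   # distance to background
    markers, _ = ndimage.label(binary_mask)              # connected clumps
    # Each local maximum of the distance map marks a candidate nucleus centre,
    # even when several nuclei touch inside one connected clump.
    return peak_local_max(dist, min_distance=min_distance, labels=markers)
```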
Human skeletal muscle drives skeletal movement through contraction. Embedding its functional information into the human morphological framework and constructing a digital twin of skeletal muscle to simulate its physical and physiological functions are of great significance for the study of the "virtual physiological human". Based on relevant literature from China and abroad, this paper first summarizes the technical framework for constructing skeletal muscle digital twins, and then reviews five aspects: skeletal muscle digital twin modeling technology, skeletal muscle data collection technology, simulation analysis technology, simulation platforms, and human medical image databases. On this basis, it points out that further research is needed in areas such as model generalization, accuracy improvement, and model coupling. The methods and means of constructing skeletal muscle digital twins summarized in this paper are expected to provide a reference for researchers in this field, and the development directions identified can serve as the next focus of research.
Objective To develop an automatic diagnostic tool for lumbar spine stability based on deep learning and to validate its diagnostic accuracy. Methods Preoperative lumbar hyper-flexion and hyper-extension X-ray films were collected from 153 patients with lumbar disease. Three orthopedic surgeons marked the following five key points: the posteroinferior and anteroinferior corners of L4, and the posterosuperior, anterosuperior, and posteroinferior corners of L5. The labeling results of each surgeon were preserved independently, yielding three sets of annotations. A total of 306 lumbar X-ray films were randomly divided into training (n=156), validation (n=50), and test (n=100) sets in a ratio of 3:1:2. A new neural network architecture, Swin-PGNet, was proposed and trained on the annotated radiographs to automatically locate the lumbar vertebral key points and to calculate the L4, 5 intervertebral Cobb angle and the L4 sliding distance from the predicted key points. The mean error and the intra-class correlation coefficient (ICC) were used as evaluation indices to compare the surgeons' annotations with Swin-PGNet on three tasks (key point localization, Cobb angle measurement, and lumbar sliding distance measurement). Meanwhile, a change in Cobb angle of more than 11° was taken as the criterion for lumbar instability, and a lumbar sliding distance of more than 3 mm as the criterion for lumbar spondylolisthesis, and the accuracy of the surgeons' annotations and of Swin-PGNet in judging lumbar instability was compared. Results ① Key point localization: the mean error of Swin-PGNet was (1.407±0.939) mm, versus (3.034±2.612) mm for the surgeons. ② Cobb angle: the mean error of Swin-PGNet was (2.062±1.352)° and that of the surgeons was (3.580±2.338)°. There was no significant difference between Swin-PGNet and the surgeons (P>0.05), but there was a significant difference between surgeons (P<0.05). ③ Lumbar sliding distance: the mean error of Swin-PGNet was (1.656±0.878) mm and that of the surgeons was (1.884±1.612) mm. There was no significant difference between Swin-PGNet and the surgeons or between surgeons (P>0.05). The accuracy of lumbar instability diagnosis was 75.3% for the surgeons and 84.0% for Swin-PGNet; the accuracy of lumbar spondylolisthesis diagnosis was 70.7% and 71.3%, respectively, with no significant difference between Swin-PGNet and the surgeons or between surgeons (P>0.05). ④ Consistency of lumbar stability diagnosis: among surgeons, the ICC of the Cobb angle was 0.913 [95%CI (0.898, 0.934)] (P<0.05) and the ICC of the sliding distance was 0.741 [95%CI (0.729, 0.796)] (P<0.05), indicating that the annotations of the three surgeons were consistent. Between Swin-PGNet and the surgeons, the ICC of the Cobb angle was 0.922 [95%CI (0.891, 0.938)] (P<0.05) and the ICC of the sliding distance was 0.748 [95%CI (0.726, 0.783)] (P<0.05), indicating that the annotations of Swin-PGNet were consistent with those of the surgeons. Conclusion The automatic diagnostic tool for lumbar instability constructed based on deep learning can identify lumbar instability and spondylolisthesis automatically, accurately, and conveniently, and can effectively assist clinical diagnosis.
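For illustration, the two measurements can be derived from the five key points roughly as follows; the endplate conventions assumed here (L4 inferior endplate versus L5 superior endplate for the Cobb angle, projection along the L5 superior endplate for the sliding distance) are plausible readings rather than the paper's exact definitions.

```python
import numpy as np

def cobb_angle(l4_post_inf, l4_ant_inf, l5_post_sup, l5_ant_sup):
    """Angle in degrees between the L4 inferior and L5 superior endplate lines."""
    v1 = np.asarray(l4_ant_inf) - np.asarray(l4_post_inf)
    v2 = np.asarray(l5_ant_sup) - np.asarray(l5_post_sup)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def sliding_distance(l4_post_inf, l5_post_sup, l5_ant_sup):
    """Offset of the L4 posteroinferior corner projected onto the L5 endplate axis."""
    axis = np.asarray(l5_ant_sup) - np.asarray(l5_post_sup)
    axis = axis / np.linalg.norm(axis)
    return abs(np.dot(np.asarray(l4_post_inf) - np.asarray(l5_post_sup), axis))

# Flexion/extension comparison: instability if the Cobb angle changes by more
# than 11 degrees, spondylolisthesis if the sliding distance exceeds 3 mm
# (after converting pixel coordinates to millimetres).
```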
In recent years, researchers have introduced methods from many domains into medical image processing, improving its effectiveness and efficiency to some extent. Applications of generative adversarial networks (GAN) in medical image processing are evolving very fast, and this paper reviews the state of the art in this area. First, the basic concepts of the GAN are introduced. Then, its applications in medical image denoising, detection, segmentation, synthesis, reconstruction, and classification are summarized. Finally, prospects for further research in this area are presented.
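A minimal adversarial training step, as a sketch of the basic GAN concept the review builds on: the generator maps noise to images while the discriminator learns to tell real from generated samples. Both networks are illustrative stubs for flattened 28×28 images, not any reviewed architecture.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real):                        # real: (N, 784) flattened images in [-1, 1]
    z = torch.randn(real.size(0), 64)
    fake = G(z)
    # Discriminator: push real toward 1, generated toward 0.
    loss_d = bce(D(real), torch.ones(real.size(0), 1)) + \
             bce(D(fake.detach()), torch.zeros(real.size(0), 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: fool the discriminator into scoring fakes as real.
    loss_g = bce(D(fake), torch.ones(real.size(0), 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```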
Medical image fusion integrates the advantages of functional and anatomical images. This article discusses the research progress of multimodal medical image fusion at the feature level. We first describe the principle of medical image fusion at the feature level, then analyze and summarize the applications of fuzzy sets, rough sets, D-S evidence theory, artificial neural networks, principal component analysis, and other fusion methods in medical image fusion. Finally, we point out current problems and future research directions for multimodal medical image fusion.
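As one concrete example of the fusion rules surveyed, the sketch below implements classical PCA weighting: the leading eigenvector of the two source images' covariance sets their fusion weights. This is a pixel-level simplification of the feature-level schemes discussed, with illustrative names.

```python
import numpy as np

def pca_fuse(img_a, img_b):
    """img_a, img_b: 2-D arrays of the same shape (e.g. registered CT and PET slices)."""
    data = np.stack([img_a.ravel(), img_b.ravel()])      # 2 x N observation matrix
    cov = np.cov(data)                                   # 2 x 2 covariance of the sources
    eigvals, eigvecs = np.linalg.eigh(cov)
    w = np.abs(eigvecs[:, np.argmax(eigvals)])           # leading principal component
    w = w / w.sum()                                      # normalize weights to sum to 1
    return w[0] * img_a + w[1] * img_b                   # weighted fusion of the sources
```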