Objective To identify the heart sounds of aortic stenosis using a deep learning model based on the DenseNet121 architecture, and to explore its potential for clinical screening of aortic stenosis. Methods We prospectively collected heart sounds and clinical data of patients with aortic stenosis at Tianjin Chest Hospital from June 2021 to February 2022. The collected heart sound data were used to train, validate, and test a deep learning model. We evaluated the performance of the model with the receiver operating characteristic (ROC) curve and the precision-recall curve. Results A total of 100 patients, including 11 asymptomatic patients, were included. The aortic stenosis group (stenosis group) comprised 50 patients (30 males and 20 females) with an average age of 68.18±10.63 years, and the negative group comprised 50 patients without aortic valve disease (26 males and 24 females) with an average age of 45.98±12.51 years. The model showed an excellent ability to distinguish heart sound data collected from patients with aortic stenosis in clinical settings: accuracy of 91.67%, sensitivity of 90.00%, specificity of 92.50%, and area under the ROC curve of 0.917. Conclusion The deep learning model for heart sound diagnosis of aortic stenosis has excellent prospects in clinical screening and offers a new approach to the early identification of patients with aortic stenosis.
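The evaluation metrics reported above (accuracy, sensitivity, specificity, and area under the ROC curve) can be sketched as follows. This is a minimal illustration with toy labels and scores, not the study's actual data or threshold.

```python
# Illustrative sketch of the binary-classification metrics in the abstract:
# accuracy, sensitivity, specificity at a threshold, and threshold-free AUC.

def binary_metrics(y_true, scores, threshold=0.5):
    """Return (accuracy, sensitivity, specificity) at a decision threshold."""
    tp = sum(1 for t, s in zip(y_true, scores) if t == 1 and s >= threshold)
    fn = sum(1 for t, s in zip(y_true, scores) if t == 1 and s < threshold)
    tn = sum(1 for t, s in zip(y_true, scores) if t == 0 and s < threshold)
    fp = sum(1 for t, s in zip(y_true, scores) if t == 0 and s >= threshold)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return accuracy, sensitivity, specificity

def auc(y_true, scores):
    """Area under the ROC curve via Mann-Whitney pairwise comparison."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 1 = stenosis group, 0 = negative group (illustrative only)
labels = [1, 1, 1, 0, 0, 0]
probs = [0.9, 0.8, 0.4, 0.6, 0.3, 0.2]
acc, sens, spec = binary_metrics(labels, probs)
```

The pairwise formulation of AUC is equivalent to integrating the ROC curve and avoids choosing a threshold, which is why it complements the threshold-dependent metrics.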
Objective To propose a pulmonary artery segmentation method that integrates shape and position prior knowledge, aiming to solve the inaccurate segmentation caused by the high similarity and small size differences between pulmonary arteries and surrounding tissues in CT images. Methods Based on the three-dimensional U-Net architecture and the image data of the PARSE 2022 database, shape and position prior knowledge was introduced to design feature extraction and fusion strategies that enhance pulmonary artery segmentation. The patient data were divided into three groups: a training set, a validation set, and a test set. The performance metrics for evaluating the model included the Dice similarity coefficient (DSC), sensitivity, accuracy, and the 95th-percentile Hausdorff distance (HD95). Results The study included pulmonary artery imaging data from 203 patients: 100 in the training set, 30 in the validation set, and 73 in the test set. The backbone network performed a rough segmentation of the pulmonary arteries to obtain a complete vascular structure; the branch network integrating shape and position information extracted features of small pulmonary arteries, reducing interference from the pulmonary trunk and the left and right pulmonary arteries. Experimental results showed that the segmentation model based on shape and position prior knowledge achieved a higher DSC (82.81%±3.20% vs. 80.47%±3.17% vs. 80.36%±3.43%), sensitivity (85.30%±8.04% vs. 80.95%±6.89% vs. 82.82%±7.29%), and accuracy (81.63%±7.53% vs. 81.19%±8.35% vs. 79.36%±8.98%) than the traditional three-dimensional U-Net and V-Net methods. Its HD95 reached (9.52±4.29) mm, 6.05 mm shorter than that of the traditional methods, showing excellent performance at segmentation boundaries.
Conclusion The pulmonary artery segmentation method based on shape and position prior knowledge can achieve precise segmentation of pulmonary artery vessels and has potential application value in tasks such as navigation for bronchoscopy or percutaneous puncture surgery.
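The overlap metrics used in this abstract, DSC and voxel-wise sensitivity, can be sketched on binary masks. Real evaluation runs on 3-D volumes; the flat toy masks below are illustrative only.

```python
# Illustrative sketch of segmentation overlap metrics: Dice similarity
# coefficient (DSC) and voxel-wise sensitivity on binary masks.

def dice(pred, truth):
    """DSC = 2|P ∩ T| / (|P| + |T|) for equal-length binary masks."""
    inter = sum(p & t for p, t in zip(pred, truth))
    return 2.0 * inter / (sum(pred) + sum(truth))

def voxel_sensitivity(pred, truth):
    """Fraction of true foreground voxels recovered by the prediction."""
    inter = sum(p & t for p, t in zip(pred, truth))
    return inter / sum(truth)

# Toy flattened masks (1 = vessel voxel, 0 = background)
pred = [1, 1, 1, 0, 0, 0, 1, 0]
truth = [1, 1, 0, 0, 1, 0, 1, 0]
```

DSC rewards overlap relative to the combined sizes of prediction and ground truth, while sensitivity isolates how much of the true vessel is recovered, which is why small-branch methods report both.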
Clinically, non-contrast computed tomography (NCCT) is used to quickly diagnose the type and area of stroke, and the Alberta Stroke Program Early CT Score (ASPECTS) is used to guide subsequent treatment. However, in the early stage of acute ischemic stroke (AIS), it is difficult to distinguish mild cerebral infarction on NCCT with the naked eye, and there is no obvious boundary between brain regions, which makes clinical ASPECTS scoring difficult. Methods based on machine learning and deep learning can help physicians quickly and accurately identify cerebral infarction areas, segment brain regions, and perform quantitative ASPECTS scoring, which is of great significance for reducing the inconsistency of clinical ASPECTS. This article describes current challenges in the field of AIS ASPECTS, and then summarizes the application of computer-aided technology in ASPECTS from two aspects, machine learning and deep learning. Finally, this article summarizes and looks ahead to research directions for AIS-assisted assessment, and proposes that computer-aided systems based on multi-modal images are of great value in improving the comprehensiveness and accuracy of AIS assessment, with the potential to open up a new research field for AIS-assisted assessment.
Ultrasound examination is a common method of thyroid examination, and the results mainly consist of thyroid ultrasound images and text reports. A cross-modal retrieval method linking images and text reports would provide great convenience for doctors and patients, but there is currently no retrieval method that correlates thyroid ultrasound images with text reports. This paper proposes a cross-modal retrieval method based on deep learning and an improved cross-modal generative adversarial network: ① the weight-sharing constraints between the fully connected layers used to construct the common representation space in the original network are changed to cosine similarity constraints, so that the network can better learn the common representation of data from different modalities; ② a fully connected layer is added before the cross-modal discriminator, merging the weight-shared image and text fully connected layers of the original network, which realizes semantic regularization while inheriting the advantages of the original network's weight sharing. Experimental results show that the mean average precision of the proposed cross-modal retrieval method for thyroid ultrasound images and text reports reaches 0.508, significantly higher than that of traditional cross-modal methods, providing a new method for cross-modal retrieval of thyroid ultrasound images and text reports.
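The cosine similarity constraint described in ① can be sketched as a loss that pulls paired image and text embeddings toward alignment, instead of forcing the two projection layers to share weights. The function names, embedding shapes, and toy vectors below are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of a cosine-similarity constraint between image and text embeddings
# in a shared representation space (illustrative, not the paper's code).
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def cosine_constraint_loss(image_embs, text_embs):
    """Mean (1 - cos) over paired embeddings: 0 when perfectly aligned."""
    return sum(1.0 - cosine_similarity(i, t)
               for i, t in zip(image_embs, text_embs)) / len(image_embs)

# A parallel pair contributes ~0 loss; an orthogonal pair contributes ~1.
imgs = [[1.0, 0.0], [0.0, 1.0]]
texts = [[2.0, 0.0], [1.0, 0.0]]
loss = cosine_constraint_loss(imgs, texts)
```

Because cosine similarity ignores vector magnitude, this constraint aligns the directions of the two modalities' representations while leaving each projection layer free to learn its own scale, which is looser than strict weight sharing.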
Objective To study the application of artificial intelligence based on neural networks in breast cancer screening and diagnosis, and to summarize its current status and clinical application value. Method Studies combining neural networks and artificial intelligence in breast mammography, breast ultrasound, breast magnetic resonance imaging, and breast pathology diagnosis in the CNKI and PubMed databases were reviewed. Results Public mammography databases, such as the Digital Database for Screening Mammography (DDSM), provided raw material for research on neural networks in the field of mammography. Mammography was the most widely used data for neural-network-based screening and diagnosis of breast diseases. In mammography and color Doppler ultrasound, neural networks could segment lesions, measure and analyze their characteristics, judge benignity or malignancy, and issue structured reports. The application of neural networks in breast ultrasound focused on the diagnosis and treatment of benign and malignant breast diseases, and Samsung Medison has taken the lead in incorporating research results into ultrasound instruments. Breast MRI carries a large amount of high-throughput information, which has become the breakthrough point for joint studies of artificial neural networks and radiomics. Pathological images contain even more data to be measured, and quantitative analysis of such data is a strength of neural networks; combining the two can significantly shorten pathologists' diagnosis time. Conclusions Studying the application of artificial intelligence in breast cancer screening and diagnosis means analyzing the application of neural networks in breast imaging and pathology. At present, artificial intelligence screening can serve as a physician's assistant and an objective diagnostic reference to improve the diagnosis of breast disease.
With the development of radiomics and neural networks, the application of artificial intelligence in medicine can be extended to surgical planning, efficacy evaluation, prognosis analysis, and more.
Lung diseases such as lung cancer and COVID-19 seriously endanger human health and life, so early screening and diagnosis are particularly important. Computed tomography (CT) is one of the main ways to screen for lung diseases, and lung parenchyma segmentation based on CT images is a key step in such screening; high-quality lung parenchyma segmentation can effectively improve the early diagnosis and treatment of lung diseases. Automatic, fast, and accurate segmentation of lung parenchyma from CT images can effectively compensate for the low efficiency and strong subjectivity of manual segmentation, and has become one of the research hotspots in this field. This paper reviews research progress in lung parenchyma segmentation based on the literature published in China and abroad in recent years. Traditional machine learning methods and deep learning methods are compared and analyzed, with emphasis on progress in improving the network structures of deep learning models. Unsolved problems in lung parenchyma segmentation are discussed and development prospects anticipated, providing a reference for researchers in related fields.
Dramatically increasing numbers of high-resolution medical images provide a great deal of useful information for cancer diagnosis and play an essential role in assisting radiologists by supporting more objective decisions. To utilize this information accurately and efficiently, researchers are focusing on computer-aided diagnosis (CAD) in cancer imaging. In recent years, deep learning, a state-of-the-art machine learning technique, has driven great progress in this field. This review covers reports on deep-learning-based CAD systems in cancer imaging. We found that deep learning has outperformed conventional machine learning techniques in both tumor segmentation and classification, and that the technique may bring about a breakthrough in CAD of cancer, with great prospects in future clinical practice.
Objective To review the progress of artificial intelligence (AI) and radiomics in the study of abdominal aortic aneurysm (AAA). Method The literature related to AI, radiomics, and AAA research in recent years was collected and summarized in detail. Results AI and radiomics have influenced AAA research and clinical decisions in terms of feature extraction, risk prediction, patient management, simulation of stent-graft deployment, and data mining. Conclusion The application of AI and radiomics provides new ideas for AAA research and clinical decisions, and is expected to inform personalized treatment and follow-up protocols that guide clinical practice toward precision medicine for AAA.
Objective To summarize the current status and prospects of artificial intelligence (AI) based on deep learning of images in the diagnosis and treatment of gastrointestinal tumors. Method Literature on AI in the field of gastrointestinal tumors in recent years was reviewed and summarized. Results AI has developed rapidly in the medical field. Gastrointestinal endoscopy, imaging examination, and pathological diagnosis assisted by AI can help doctors reach more accurate diagnoses and move the diagnosis and treatment of gastrointestinal tumors in a more accurate and efficient direction. However, the application of AI in medicine has only just begun, and widespread adoption will still take a long time. Conclusion AI-assisted gastrointestinal endoscopy, imaging examination, and pathological diagnosis all show high specificity and sensitivity, clearly reflecting their efficiency and accuracy.
Objective To develop a deep-learning-based neural network architecture to assist the automatic segmentation of knee CT images, and to validate its accuracy. Methods A database of knee CT scans was established, and the bony structures were manually annotated. A deep learning neural network architecture was developed independently, and the labeled database was used to train and test the network. The Dice coefficient, average surface distance (ASD), and Hausdorff distance (HD) were calculated to evaluate its accuracy, and the time of automatic segmentation was compared with that of manual segmentation. Five orthopedic experts were invited to score the automatic and manual segmentation results on a Likert scale, and the scores of the two methods were compared. Results The automatic segmentation achieved high accuracy. The Dice coefficient, ASD, and HD of the femur were 0.953±0.037, (0.076±0.048) mm, and (3.101±0.726) mm, respectively; those of the tibia were 0.950±0.092, (0.083±0.101) mm, and (2.984±0.740) mm, respectively. The time of automatic segmentation was significantly shorter than that of manual segmentation [(2.46±0.45) minutes vs. (64.73±17.07) minutes; t=36.474, P<0.001]. The clinical scores of the femur were 4.3±0.3 in the automatic segmentation group and 4.4±0.2 in the manual segmentation group, and the scores of the tibia were 4.5±0.2 and 4.5±0.3, respectively; there was no significant difference between the two groups (t=1.753, P=0.085; t=0.318, P=0.752). Conclusion Deep-learning-based automatic segmentation of knee CT images has high accuracy and enables rapid segmentation and three-dimensional reconstruction. This method will promote the development of new technology-assisted techniques in total knee arthroplasty.
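The surface metrics used in this abstract, ASD and HD, can be sketched on point sets. Real evaluation compares 3-D segmentation surfaces; the tiny 2-D point sets below are illustrative only.

```python
# Illustrative sketch of average surface distance (ASD) and Hausdorff
# distance (HD) between two segmentation surfaces, here tiny 2-D point sets.
import math

def _nearest(p, points):
    """Distance from point p to its nearest neighbour in a point set."""
    return min(math.dist(p, q) for q in points)

def asd(surf_a, surf_b):
    """Symmetric average of nearest-neighbour distances between surfaces."""
    d_ab = sum(_nearest(p, surf_b) for p in surf_a) / len(surf_a)
    d_ba = sum(_nearest(q, surf_a) for q in surf_b) / len(surf_b)
    return (d_ab + d_ba) / 2.0

def hausdorff(surf_a, surf_b):
    """Worst-case nearest-neighbour distance, taken in either direction."""
    return max(max(_nearest(p, surf_b) for p in surf_a),
               max(_nearest(q, surf_a) for q in surf_b))

# Toy surfaces as 2-D point sets (illustrative only)
a = [(0.0, 0.0), (1.0, 0.0)]
b = [(0.0, 0.0), (1.0, 1.0)]
```

ASD averages boundary errors and so reflects typical surface agreement, while HD is dominated by the single worst outlier, which is why the two are reported together.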