Objective To develop an artificial intelligence based three-dimensional (3D) preoperative planning system (AIHIP) for total hip arthroplasty (THA) and to verify its accuracy through preliminary clinical application. Methods A CT image database consisting of manually segmented CT image series was built to train the independently developed deep learning neural network. The deep learning neural network and the preoperative planning module were assembled within a visual interactive interface, AIHIP. Then, 60 patients (60 hips) who underwent unilateral primary THA between March 2017 and May 2020 were enrolled and divided into two groups. The AIHIP system was applied in the trial group (n=30) and traditional acetate templating in the control group (n=30). There was no significant difference in age, gender, operative side, or Association Research Circulation Osseous (ARCO) grading between the two groups (P>0.05). The coincidence rate, preoperative and postoperative leg length discrepancy, the difference of bilateral femoral offsets, and the difference of bilateral combined offsets were compared between the two groups to evaluate the accuracy and efficiency of the AIHIP system. Results The preoperative plan for the cup was completely realized in 27 patients (90.0%) of the trial group with the AIHIP system and in 17 patients (56.7%) of the control group with acetate templating, showing significant difference (P<0.05). The preoperative plan for the stem was completely realized in 25 patients (83.3%) of the trial group and in 16 patients (53.3%) of the control group, showing significant difference (P<0.05). There was no significant difference in the difference of bilateral femoral offsets, the difference of bilateral combined offsets, or the leg length discrepancy between the two groups before operation (P>0.05). 
The difference of bilateral combined offsets immediately after operation was significantly smaller in the trial group than in the control group (t=−2.070, P=0.044), but there was no significant difference in the difference of bilateral femoral offsets or the leg length discrepancy between the two groups (P>0.05). Conclusion Compared with the traditional 2D preoperative plan, the 3D preoperative plan produced by the AIHIP system is more accurate and detailed, especially in demonstrating the actual anatomical structures. In this study, the workflow of this artificial intelligence preoperative planning system was illustrated for the first time and preliminarily applied in THA; however, its potential clinical value needs to be explored in further research.
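The outcome metrics above are simple geometric measurements. As an illustration only (this is not the AIHIP implementation; the landmark inputs and the coordinate convention are assumptions), they might be computed from calibrated radiographic landmarks like so:

```python
# Hypothetical sketch of the evaluation metrics used in the abstract,
# computed from landmark coordinates (in mm) on a calibrated image.

def femoral_offset(femoral_head_center, femoral_axis_x):
    """Horizontal distance from the femoral head center to the femoral shaft axis."""
    return abs(femoral_head_center[0] - femoral_axis_x)

def combined_offset(femoral_off, acetabular_off):
    """Combined offset = femoral offset + acetabular offset."""
    return femoral_off + acetabular_off

def bilateral_difference(left_value, right_value):
    """Absolute left-right difference, e.g. of combined offsets or leg lengths."""
    return abs(left_value - right_value)
```

For example, `bilateral_difference(combined_offset(41.0, 32.5), combined_offset(43.0, 31.0))` would give the difference of bilateral combined offsets for one patient.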
Human action recognition (HAR) is the technological basis of intelligent medical treatment, sports training, video monitoring, and many other fields, and it has attracted wide attention. This paper summarizes the progress and significance of HAR research, which includes two processes: action capture and action classification based on deep learning. Firstly, the paper introduces in detail the three mainstream methods of action capture: video-based, depth camera-based, and inertial sensor-based; commonly used action datasets are also listed. Secondly, the realization of HAR based on deep learning is described in two aspects: automatic feature extraction and multi-modal feature fusion. The use of HAR for training monitoring and simulated training in orthopedic rehabilitation is also introduced. Finally, the paper discusses precise motion capture and multi-modal feature fusion in HAR, as well as the key points and difficulties of applying HAR in orthopedic rehabilitation training. This review is intended to quickly guide researchers to the current status of HAR research and its application in orthopedic rehabilitation training.
Histopathology is still the gold standard for the diagnosis of clinical diseases. Whole slide imaging (WSI) can make up for the shortcomings of traditional glass slides, such as easy damage, difficult retrieval, and poor diagnostic repeatability, but it also brings a huge workload. AI-assisted WSI analysis can help pathologists solve the problem of low efficiency and improve the consistency of diagnosis. Among artificial intelligence (AI) approaches, the convolutional neural network (CNN) is the most widely used. This article reviews the reported applications of CNN in WSI analysis, summarizes the development trend of CNN in the field of pathology, and offers an outlook on future applications.
Objective To investigate the safety and accuracy of artificial intelligence (AI) assisted automatic planning of C1 pedicle screws parallel to the sagittal plane. Methods Subjects who completed cervical CT scans at Zigong Fourth People’s Hospital between January 2020 and December 2023 were selected and randomly divided into two groups using a random number table: 80% were used to train the model (training group) and 20% to validate it (validation group). The original cervical CT data of the training group were imported into ITK-SNAP software to mark the feature points, and four feature points were selected. To obtain a weighted function model of the four feature points, the training group was used to train the spatial key point localization algorithm, and the pedicle trajectory was computed from the four key points obtained. Finally, the algorithm was compiled into a visual interface, and the cervical CT data of the validation group were imported to calculate the pedicle screw trajectory. Results A total of 500 patients were included: 400 cases in the training group and 100 cases in the validation group. The average positioning error of the spatial key points was (0.47±0.16) mm. The average distance between the planned pedicle screw center line and the internal edge of the pedicle was (2.86±0.12) mm. Pedicle screw placement parallel to the sagittal plane with 3D display can be safely performed for a C1 pedicle large enough to accommodate a 3.5 mm diameter screw without cortical breakthrough. Conclusions For pedicle screw planning parallel to the sagittal plane in C1, training based on the spatial localization algorithm of the anterior and posterior tubercles and bilateral tangential points can obtain a safe and accurate pedicle screw trajectory, providing a theoretical basis for automatic screw placement by orthopedic robots. 
For vertebral bodies with narrow or deformed pedicles, the training data need to be further expanded to broaden the applicable range and improve the accuracy of the algorithm.
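The final planning step, deriving a sagittal-plane trajectory from the four predicted key points, can be sketched in simplified form. This is an illustration under stated assumptions, not the authors' algorithm: the weighting scheme and the coordinate convention (x lateral, y anterior-posterior, z cranial-caudal) are hypothetical.

```python
# Illustrative sketch: a screw trajectory from four C1 key points
# (anterior tubercle, posterior tubercle, two lateral tangential points).

def weighted_point(points, weights):
    """Weighted average of 3D points; weights are assumed to sum to 1."""
    return tuple(sum(w * p[i] for p, w in zip(points, weights)) for i in range(3))

def trajectory(anterior, posterior, left_tangent, right_tangent, side="left"):
    """Return (entry_point, unit_direction) for one pedicle,
    constrained to stay parallel to the sagittal plane."""
    # Entry point: biased toward the posterior tubercle and the chosen side
    # (the 0.6/0.4 weights are placeholders, not trained values).
    lateral = left_tangent if side == "left" else right_tangent
    entry = weighted_point([posterior, lateral], [0.6, 0.4])
    # Target: midpoint between the anterior and posterior tubercles.
    target = weighted_point([anterior, posterior], [0.5, 0.5])
    # Zero the lateral (x) component of the direction so the trajectory
    # remains parallel to the sagittal plane, then normalize.
    d = (0.0, target[1] - entry[1], target[2] - entry[2])
    norm = (d[1] ** 2 + d[2] ** 2) ** 0.5
    return entry, (0.0, d[1] / norm, d[2] / norm)
```

In the actual system, the entry and target weights would come from the trained weighted function model rather than the fixed constants used here.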
Ultrasonography is a common method of thyroid examination, and its results mainly consist of thyroid ultrasound images and text reports. A cross-modal retrieval method linking images and text reports would provide great convenience for doctors and patients, but currently there is no retrieval method that correlates thyroid ultrasound images with text reports. This paper proposes a cross-modal retrieval method based on deep learning and an improved cross-modal generative adversarial network: ① the weight-sharing constraints between the fully connected layers used to construct the common representation space in the original network are changed to cosine similarity constraints, so that the network can better learn the common representation of different modal data; ② a fully connected layer is added before the cross-modal discriminator, merging the weight-shared image and text fully connected layers of the original network, so that semantic regularization is realized while inheriting the advantages of the original network's weight sharing. Experimental results show that the mean average precision of the proposed cross-modal retrieval method for thyroid ultrasound images and text reports reaches 0.508, significantly higher than traditional cross-modal methods, providing a new method for cross-modal retrieval of thyroid ultrasound images and text reports.
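The core of modification ① is replacing exact weight sharing with a directional penalty. A minimal sketch of such a cosine-similarity constraint, written as a plain loss term on two (flattened) weight vectors, might look as follows; the network itself and the training loop are not shown, and the exact form of the paper's constraint is an assumption here:

```python
# Sketch: a cosine-similarity constraint between the image branch's and
# text branch's fully connected layer weights. Instead of forcing the
# weights to be identical (weight sharing), the penalty only pulls the
# two weight vectors toward the same direction.

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sum(a * a for a in u) ** 0.5
    norm_v = sum(b * b for b in v) ** 0.5
    return dot / (norm_u * norm_v)

def cosine_constraint_loss(image_weights, text_weights):
    """0 when the vectors point the same way, up to 2 when opposite."""
    return 1.0 - cosine_similarity(image_weights, text_weights)
```

Because the loss depends only on direction, the two branches keep independent scales while still being driven toward a common representation space.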
Otitis media is a common ear disease, and its accurate diagnosis can prevent the deterioration of conductive hearing loss and avoid the overuse of antibiotics. At present, the diagnosis of otitis media mainly relies on the doctor's visual inspection of images fed back by otoscope equipment. Owing to variations in otoscope image quality and in doctors' diagnostic experience, this subjective examination has a relatively high rate of misdiagnosis. To address this problem, this paper proposes using a faster region-based convolutional neural network (Faster R-CNN) to analyze clinically collected digital otoscope images. First, the number of samples in the clinical otoscope dataset was expanded through image data augmentation and preprocessing. Then, according to the characteristics of otoscope images, a convolutional neural network was selected for feature extraction, and a feature pyramid network was added for multi-scale feature extraction to enhance detection ability. Finally, a Faster R-CNN with anchor size optimization and hyperparameter adjustment was used for identification, and the effectiveness of the method was tested on a randomly selected test set. The results showed that the overall recognition accuracy on the otoscope test samples reached 91.43%. These results show that the proposed method effectively improves the accuracy of otoscope image classification and is expected to assist clinical diagnosis.
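The anchor-size optimization step mentioned above amounts to choosing the scales and aspect ratios of the candidate boxes the region proposal network starts from. A small sketch of standard anchor generation at one feature-map location is shown below; the scale and ratio values are hypothetical, not the ones tuned in the study:

```python
# Sketch: generate Faster R-CNN style anchor boxes at one location.
# Each anchor keeps area ~ scale^2 while varying its aspect ratio.

def generate_anchors(center_x, center_y, scales, ratios):
    """Return (x1, y1, x2, y2) boxes centered at (center_x, center_y).
    `ratios` is height/width; `scales` is the side length of the square anchor."""
    anchors = []
    for s in scales:
        for r in ratios:
            w = s / r ** 0.5   # width shrinks as the box gets taller
            h = s * r ** 0.5   # height grows, keeping area s*s constant
            anchors.append((center_x - w / 2, center_y - h / 2,
                            center_x + w / 2, center_y + h / 2))
    return anchors
```

For example, `generate_anchors(64, 64, scales=[32, 64], ratios=[0.5, 1.0, 2.0])` yields six boxes per location; tuning these lists to match the typical size of eardrum findings is the essence of anchor optimization.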
In this paper, a deep learning method is proposed to build an automatic classification algorithm for the severity of chronic obstructive pulmonary disease. Large-sample clinical data were used as input features and analyzed for their weights in classification. Through feature selection, model training, parameter optimization, and model testing, a classification prediction model based on a deep belief network was built to predict the severity classification criteria proposed by the Global Initiative for Chronic Obstructive Lung Disease (GOLD). We achieved prediction accuracy of over 90% for the two standardized versions of the severity criteria released in 2007 and 2011. Moreover, we obtained a contribution ranking of the input features by analyzing the model coefficient matrix and confirmed that the more contributive input features agreed to a certain degree with clinical diagnostic knowledge, which supports the validity of the deep belief network model. This study provides an effective solution for applying deep learning to automatic diagnostic decision making.
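One simple way to turn a coefficient matrix into a feature contribution ranking, shown here as an illustration only (the paper's exact analysis and the example feature names are assumptions), is to sum the absolute weights connecting each input feature to the first hidden layer:

```python
# Sketch: rank input features by their total absolute connection weight
# to the first hidden layer of a trained network.

def rank_features(weight_matrix, feature_names):
    """weight_matrix[i][j] is the weight from input feature i to hidden unit j.
    Returns feature names sorted by descending total absolute weight."""
    scores = [sum(abs(w) for w in row) for row in weight_matrix]
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return [feature_names[i] for i in order]
```

A feature whose outgoing weights are uniformly small contributes little to any hidden unit, so it lands at the bottom of the ranking; features with large-magnitude weights dominate.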
With the increasing volume of electrocardiogram (ECG) data, the demand for computer-aided ECG analysis has grown substantially. In this paper, we propose several strategies to improve the performance of a clinical ECG classification algorithm based on the Lead Convolutional Neural Network (LCNN). Firstly, we obtained two classifiers by using different preprocessing and training methods. Then, we applied the multiple-output prediction method to each of them independently. Finally, a Bayesian approach was employed to fuse them. Tests conducted on more than 150 000 ECG records showed that the proposed method achieved an accuracy of 85.04% and an area under the receiver operating characteristic curve (AUC) of 0.9185, significantly outperforming traditional methods based on feature extraction techniques.
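One common Bayesian fusion rule for two classifiers, sketched here as an illustration (the paper's exact fusion formula is not specified in the abstract, so this particular rule is an assumption), multiplies the two class-posterior vectors element-wise and renormalizes:

```python
# Sketch: naive Bayesian fusion of two classifiers' posterior estimates
# over the same set of classes.

def bayesian_fuse(posteriors_a, posteriors_b):
    """Fuse two probability vectors; assumes conditionally independent classifiers."""
    product = [pa * pb for pa, pb in zip(posteriors_a, posteriors_b)]
    total = sum(product)
    return [p / total for p in product]
```

Under this rule, a class is favored only when both classifiers assign it substantial probability, which is why fusing two differently trained LCNN classifiers can outperform either one alone.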
Objective To summarize the current application of artificial intelligence (AI) with image deep learning in the diagnosis and treatment of gastrointestinal tumors, as well as its application prospects. Methods Literature on AI in the field of gastrointestinal tumors in recent years was reviewed and summarized. Results AI has developed rapidly in the medical field. Gastrointestinal endoscopy, imaging examination, and pathological diagnosis assisted by AI technology can help doctors make more accurate diagnoses and move the diagnosis and treatment of gastrointestinal tumors in a more accurate and efficient direction. However, the application of AI in the medical field has only just begun, and its widespread adoption will still take a long time. Conclusion The AI-assisted gastrointestinal endoscopy system, imaging examination system, and pathological diagnosis all show high specificity and sensitivity, clearly reflecting their high efficiency and accuracy.
The classification of lung tumors with the help of computer-aided diagnosis systems is very important for the early diagnosis and treatment of malignant lung tumors. At present, the main research direction in lung tumor classification is model fusion technology based on deep learning, which classifies multiple fused lung tumor data with the help of radiomics. This paper summarizes the commonly used algorithms for lung tumor classification; introduces the concepts and technologies of machine learning, radiomics, deep learning, and multiple data fusion; points out the existing problems and difficulties in the field of lung tumor classification; and looks ahead to its development prospects and future research directions.