        West China Medical Publishers
        Search results for keyword "Deep learning": 70 results
        • Deep learning method for magnetic resonance imaging fluid-attenuated inversion recovery image synthesis

          Magnetic resonance imaging (MRI) can acquire multi-modal images with different contrasts, which provides rich information for clinical diagnosis. However, some contrast images are not scanned, or the acquired images cannot meet diagnostic requirements, because of poor patient cooperation or limitations of the scanning conditions. Image synthesis techniques have become a way to compensate for such missing images. In recent years, deep learning has been widely used in the field of MRI synthesis. In this paper, a synthesis network based on multi-modal fusion is proposed: a feature encoder first encodes each unimodal image separately, a feature fusion module then fuses the features of the different modalities, and the target modal image is finally generated. The similarity measure between the target image and the predicted image is improved by introducing a dynamically weighted combined loss function based on the spatial domain and the k-space domain. Experimental validation and quantitative comparison show that the proposed multi-modal fusion deep learning network can effectively synthesize high-quality MRI fluid-attenuated inversion recovery (FLAIR) images. In summary, the proposed method can reduce the patient's MRI scanning time and address the clinical problem of FLAIR images that are missing or of insufficient quality for diagnosis.

          Release date: 2023-10-20 04:48
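          For the entry above, the combined loss mixes a spatial-domain term with a k-space term under dynamic weights. As a minimal sketch only, the PyTorch snippet below shows one common way to combine an image-domain L1 loss with a Fourier-domain L1 loss; the function name, the fixed weights `w_spatial` and `w_kspace`, and the magnitude-only k-space comparison are illustrative assumptions, not details from the paper (which weights the terms dynamically).

```python
import torch
import torch.nn.functional as F

def combined_spatial_kspace_loss(pred, target, w_spatial=1.0, w_kspace=1.0):
    """Weighted sum of an image-domain L1 loss and a k-space L1 loss.

    pred, target: real-valued image batches of shape (N, C, H, W).
    The fixed weights are placeholders; the paper describes dynamically
    adjusted weights, which are not reproduced here.
    """
    # Spatial-domain term: pixel-wise L1 distance.
    spatial_loss = F.l1_loss(pred, target)

    # K-space term: compare 2-D Fourier magnitudes of the two images.
    pred_k = torch.fft.fft2(pred)
    target_k = torch.fft.fft2(target)
    kspace_loss = F.l1_loss(torch.abs(pred_k), torch.abs(target_k))

    return w_spatial * spatial_loss + w_kspace * kspace_loss
```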
        • Fetal electrocardiogram signal extraction based on multi-scale residual shrinkage U-Net

          In the extraction of the fetal electrocardiogram (ECG) signal, the single scale of the same-level convolutional encoders in U-Net ignores the differences in size and shape between maternal and fetal ECG characteristic waves, and the threshold learning process of the encoder’s residual shrinkage module makes no use of the temporal information in the ECG signal. This paper proposes a fetal ECG extraction method based on a multi-scale residual shrinkage U-Net model. First, Inception and time-domain attention were introduced into the residual shrinkage module to enhance the multi-scale feature extraction ability of the same-level convolutional encoder and the use of the time-domain information of the fetal ECG signal. To retain more local details of the ECG waveform, the max pooling in U-Net was replaced with SoftPool. Finally, a decoder composed of residual modules and up-sampling layers gradually generated the fetal ECG signal. Clinical ECG signals were used in the experiments. The results showed that, compared with other fetal ECG extraction algorithms, the proposed method extracted clearer fetal ECG signals. The sensitivity, positive predictive value, and F1 score on the 2013 competition data set reached 93.33%, 99.36%, and 96.09%, respectively, indicating that the method can effectively extract fetal ECG signals and has application value for perinatal fetal health monitoring.

          Release date: 2024-06-21 05:13
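          The residual shrinkage module in the entry above denoises features by soft thresholding with learned, channel-wise thresholds. The sketch below illustrates the generic soft-thresholding step from deep residual shrinkage networks for 1-D feature maps in PyTorch; the class name and layer sizes are assumptions, and the paper's Inception and time-domain-attention branches are not reproduced.

```python
import torch
import torch.nn as nn

class SoftThreshold1d(nn.Module):
    """Channel-wise soft thresholding as used in residual shrinkage blocks.

    The threshold for each channel is learned from the global average of the
    absolute feature values, following the deep residual shrinkage network
    idea. This is a generic sketch, not the paper's exact block design.
    """

    def __init__(self, channels):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels),
            nn.BatchNorm1d(channels),
            nn.ReLU(inplace=True),
            nn.Linear(channels, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):            # x: (N, C, L) 1-D feature maps
        abs_x = x.abs()
        avg = abs_x.mean(dim=-1)     # (N, C) global average of |x|
        # Threshold = scaling coefficient in (0, 1) times the channel average.
        tau = (self.fc(avg) * avg).unsqueeze(-1)   # (N, C, 1)
        return torch.sign(x) * torch.clamp(abs_x - tau, min=0.0)
```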
        • Epilepsy detection and analysis method for specific patients based on data augmentation and deep learning

          In recent years, epileptic seizure detection based on the electroencephalogram (EEG) has attracted widespread attention in the academic community. However, seizure data are difficult to collect, and overfitting easily occurs when only a small amount of training data is available. To solve this problem, this paper took the CHB-MIT epilepsy EEG dataset from Boston Children's Hospital as the research object and applied the wavelet transform for data augmentation by setting different wavelet scale factors. In addition, by combining deep learning, ensemble learning, transfer learning, and other methods, an epilepsy detection method with high accuracy for specific epilepsy patients was proposed for the condition of insufficient training samples. In the tests, wavelet scale factors of 2, 4, and 8 were compared experimentally. When the wavelet scale factor was 8, the average accuracy, average sensitivity, and average specificity were 95.47%, 93.89%, and 96.48%, respectively. Comparative experiments with the recent literature verified the advantages of the proposed method. These results may provide a reference for the clinical application of epilepsy detection.

          Release date: 2022-06-28 04:35
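          The augmentation in the entry above applies the wavelet transform at scale factors 2, 4, and 8. The snippet below is a hedged sketch using the PyWavelets library to compute continuous wavelet coefficients of an EEG segment at those scales; the choice of PyWavelets, the Morlet wavelet, and treating each scale's coefficient sequence as an extra training view are assumptions, since the abstract does not spell out these details.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_views(eeg_segment, scales=(2, 4, 8), wavelet="morl"):
    """Return one transformed view of an EEG segment per wavelet scale factor.

    eeg_segment: 1-D numpy array (a single-channel EEG window).
    Each row of the returned array is the CWT coefficient sequence at one
    scale; using these as additional training samples is one plausible
    reading of the augmentation, not necessarily the paper's procedure.
    """
    coeffs, _ = pywt.cwt(eeg_segment, scales=np.asarray(scales), wavelet=wavelet)
    return coeffs            # shape: (len(scales), len(eeg_segment))

# Example: three augmented views of a 1-second, 256 Hz segment.
segment = np.random.randn(256)
views = wavelet_views(segment)
print(views.shape)           # (3, 256)
```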
        • A review of deep learning methods for non-contact heart rate measurement based on facial videos

          Heart rate is a crucial indicator of human health with significant physiological importance. Traditional contact methods for measuring heart rate, such as electrocardiographs or wristbands, may not always meet the need for convenient health monitoring. Remote photoplethysmography (rPPG) provides a non-contact way to measure heart rate and other physiological indicators by analyzing blood volume pulse signals; it is non-invasive, requires no direct contact, and supports long-term healthcare monitoring. Deep learning has emerged as a powerful tool for processing complex image and video data and is increasingly employed to extract heart rate signals remotely. This article reviewed the latest research advances in rPPG-based heart rate measurement using deep learning, summarized the available public datasets, and explored future research directions for non-contact heart rate measurement.

        • The current application status of neural network-based electroencephalogram diagnosis of Alzheimer’s disease

          The electroencephalogram (EEG) signal is a general reflection of the neurophysiological activity of the brain and has the advantages of being safe, efficient, real-time and dynamic. With the development and advancement of machine learning research, automatic diagnosis of Alzheimer’s disease based on deep learning is becoming a research hotspot. Starting from feedforward neural networks, this paper compared and analysed the structural properties of neural network models such as recurrent neural networks, convolutional neural networks and deep belief networks, together with their performance in the diagnosis of Alzheimer’s disease. It also discussed the possible challenges and research trends in this area, expecting to provide a valuable reference for the clinical application of neural networks in the EEG-based diagnosis of Alzheimer’s disease.

          Release date: 2023-02-24 06:14
        • Establishment and testing of an intelligent classification method for thoracolumbar fractures based on machine vision

          Objective To develop a deep learning system for CT images to assist in the diagnosis of thoracolumbar fractures and to analyze the feasibility of its clinical application. Methods A total of 1256 CT images of thoracolumbar fractures, collected from West China Hospital of Sichuan University between January 2019 and March 2020, were annotated to a unified standard using the LabelImg image annotation system. All CT images were classified according to the AO Spine thoracolumbar spine injury classification. For diagnosing type A, B, and C fractures, the deep learning system was optimized with 1039 CT images for training and validation (1004 for training, 35 for validation); the remaining 217 CT images served as the test set for comparing the deep learning system with clinicians’ diagnoses. For subtyping type A fractures, the system was optimized with 581 CT images for training and validation (556 for training, 25 for validation); the remaining 104 CT images served as the test set for the same comparison. Results The accuracy and Kappa coefficient of the deep learning system in diagnosing type A, B, and C fractures were 89.4% and 0.849 (P<0.001), respectively; for subtyping type A fractures they were 87.5% and 0.817 (P<0.001), respectively. Conclusions The deep learning system classifies thoracolumbar fractures with high accuracy. It can be used to assist the intelligent diagnosis of thoracolumbar fracture CT images and to simplify the current manual and complex diagnostic process.

          Release date: 2021-11-25 03:04
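          The agreement statistic reported in the entry above is the Kappa coefficient. For orientation only, the snippet below computes Cohen's kappa with scikit-learn on invented labels; the example values are hypothetical and unrelated to the study's data.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical reference labels (e.g., consensus AO Spine types) and
# model predictions for a handful of test CT images; values are made up.
reference   = ["A", "A", "B", "C", "A", "B", "C", "A"]
predictions = ["A", "A", "B", "C", "B", "B", "C", "A"]

kappa = cohen_kappa_score(reference, predictions)
print(f"Cohen's kappa: {kappa:.3f}")
```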
        • A survey on the application of convolutional neural networks in the diagnosis of occupational pneumoconiosis

          Pneumoconiosis ranks first among the occupational diseases newly reported in China each year, and imaging is still one of the main clinical diagnostic methods. However, manual film reading demands a high level of expertise from physicians, staging pneumoconiosis on images is difficult, and factors such as the uneven distribution of medical resources easily lead to misdiagnosis and missed diagnosis in primary healthcare institutions. Computer-aided diagnosis systems can rapidly screen for pneumoconiosis, assist clinicians in identification and diagnosis, and improve diagnostic efficiency. As an important branch of deep learning, the convolutional neural network (CNN) excels at visual tasks such as image segmentation, image classification and object detection because of its local connectivity and weight sharing, and it has been widely used in computer-aided diagnosis of pneumoconiosis in recent years. This literature review was organized into three parts according to the main applications of CNNs (VGG, U-Net, ResNet, DenseNet, CheXNet, Inception-V3, and ShuffleNet) in the imaging diagnosis of pneumoconiosis: CNNs for pneumoconiosis screening, CNNs for staging pneumoconiosis, and CNNs for segmenting pneumoconiosis lesions. It aims to summarize the methods, their advantages and disadvantages, and ideas for optimizing CNNs applied to pneumoconiosis images, and to provide a reference for further development of computer-aided diagnosis of pneumoconiosis.

        • Efficacy and safety of computer-aided detection (CADe) in colonoscopy for colorectal neoplasia detection: a meta-analysis

          Objective To systematically evaluate the efficacy and safety of computer-aided detection (CADe) versus conventional colonoscopy in identifying colorectal adenomas and polyps. Methods The PubMed, Embase, Cochrane Library, Web of Science, WanFang Data, VIP, and CNKI databases were electronically searched for randomized controlled trials (RCTs) comparing the effectiveness and safety of CADe-assisted colonoscopy and conventional colonoscopy in detecting colorectal tumors, published from 2014 to April 2023. Two reviewers independently screened the literature, extracted data, and evaluated the risk of bias of the included studies. Meta-analysis was performed with RevMan 5.3 software. Results A total of 9 RCTs were included, covering 6 393 patients. Compared with conventional colonoscopy, the CADe system significantly improved the adenoma detection rate (ADR) (RR=1.22, 95%CI 1.10 to 1.35, P<0.01) and the polyp detection rate (PDR) (RR=1.19, 95%CI 1.04 to 1.36, P=0.01). It also reduced the adenoma miss rate (AMR) (RR=0.48, 95%CI 0.34 to 0.67, P<0.01) and the polyp miss rate (PMR) (RR=0.39, 95%CI 0.25 to 0.59, P<0.01). The PDR of proximal polyps increased significantly, the PDR of ≤5 mm polyps increased slightly, and the PDR of >10 mm and pedunculated polyps decreased significantly. The AMR in the cecum, transverse colon, descending colon, and sigmoid colon was significantly reduced. There was no statistically significant difference in withdrawal time between the two groups. Conclusion The CADe system can increase the detection rate of adenomas and polyps and reduce the miss rate. The detection rate of polyps is related to their location, size, and shape, while the adenoma miss rate is related to their location.

          Release date: 2024-11-12 03:38
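          The effect sizes in the entry above are risk ratios (RR) with 95% confidence intervals. As a side illustration, the snippet below computes a single-study RR and its normal-approximation confidence interval from 2×2 counts; the counts are invented, and pooling across trials (as RevMan does, e.g. with Mantel-Haenszel weighting) is not shown.

```python
import math

def risk_ratio_ci(events_exp, total_exp, events_ctl, total_ctl, z=1.96):
    """Risk ratio with a normal-approximation 95% confidence interval.

    Standard log-RR formula; the example counts below are hypothetical and
    are not taken from the included trials.
    """
    rr = (events_exp / total_exp) / (events_ctl / total_ctl)
    se_log_rr = math.sqrt(
        1 / events_exp - 1 / total_exp + 1 / events_ctl - 1 / total_ctl
    )
    lo = math.exp(math.log(rr) - z * se_log_rr)
    hi = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lo, hi

# Hypothetical single-trial counts: adenomas detected per study arm.
print(risk_ratio_ci(events_exp=300, total_exp=800, events_ctl=250, total_ctl=810))
```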
        • Research on classification of benign and malignant lung nodules based on three-dimensional multi-view squeeze-and-excitation convolutional neural network

          Lung cancer is the tumor that poses the greatest threat to human health, and early detection is crucial for improving the survival and recovery rates of lung cancer patients. Existing methods use two-dimensional multi-view frameworks to learn lung nodule features and simply integrate the multi-view features to classify nodules as benign or malignant. However, these methods neither capture spatial features effectively nor account for the variability among views. Therefore, this paper proposes a three-dimensional (3D) multi-view convolutional neural network (MVCNN) framework. To further address the differences among views in the multi-view model, a 3D multi-view squeeze-and-excitation convolutional neural network (MVSECNN) model is constructed by introducing a squeeze-and-excitation (SE) module in the feature fusion stage. Finally, statistical methods are used to analyze the model predictions and the doctors' annotations. On the independent test set, the classification accuracy and sensitivity of the model were 96.04% and 98.59%, respectively, higher than those of other state-of-the-art methods. The consistency score between the model predictions and the pathological diagnoses was 0.948, significantly higher than that between the doctors' annotations and the pathological diagnoses. The proposed method can effectively learn the spatial heterogeneity of lung nodules, address the problem of multi-view differences, and classify benign and malignant lung nodules, which is of great significance for assisting doctors in clinical diagnosis.

          Release date: 2022-08-22 03:12
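          The MVSECNN in the entry above introduces a squeeze-and-excitation (SE) module in the feature fusion stage. The sketch below shows a standard SE block for 3-D feature maps in PyTorch; the class name, reduction ratio, and where the block sits in the multi-view fusion pipeline are illustrative assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class SEBlock3D(nn.Module):
    """Squeeze-and-excitation block for 3-D feature maps.

    Standard SE design: global pooling, bottleneck MLP, sigmoid gating.
    Only the SE mechanism itself is illustrated here.
    """

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)           # squeeze: (N, C, 1, 1, 1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                             # x: (N, C, D, H, W)
        n, c = x.shape[:2]
        w = self.fc(self.pool(x).view(n, c))          # excitation weights
        return x * w.view(n, c, 1, 1, 1)              # channel-wise re-weighting

# Example: re-weight a batch of fused 3-D multi-view features.
feats = torch.randn(2, 32, 8, 32, 32)
print(SEBlock3D(32)(feats).shape)                     # torch.Size([2, 32, 8, 32, 32])
```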
        • Recurrence prediction of gastric cancer based on multi-resolution feature fusion and context information

          Pathological images of gastric cancer serve as the gold standard for diagnosing this malignancy. However, the recurrence prediction task often faces challenges such as inconspicuous morphological features of the lesions, insufficient fusion of multi-resolution features, and an inability to leverage contextual information effectively. To address these issues, a three-stage recurrence prediction method based on pathological images of gastric cancer is proposed. In the first stage, the self-supervised learning framework SimCLR was adopted to train on low-resolution patch images, aiming to reduce the interdependence among diverse tissue images and yield decoupled, enhanced features. In the second stage, the low-resolution enhanced features were fused with the corresponding high-resolution unenhanced features to achieve feature complementation across resolutions. In the third stage, to address the position-encoding difficulty caused by the large variation in the number of patch images, position encoding based on multi-scale local neighborhoods was performed and a self-attention mechanism was employed to obtain features with contextual information; these contextual features were then combined with the local features extracted by a convolutional neural network. Evaluation on clinically collected data showed that, compared with the best-performing traditional methods, the proposed network achieved the best accuracy and area under the curve (AUC), improving them by 7.63% and 4.51%, respectively. These results validate the usefulness of this method for predicting gastric cancer recurrence.

          Release date: 2024-10-22 02:39
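          The first stage of the entry above trains on low-resolution patches with the SimCLR self-supervised framework. For orientation, the sketch below implements the generic NT-Xent contrastive loss that SimCLR optimizes, in PyTorch; the temperature, embedding size, and batch layout are illustrative assumptions, and the paper's patch sampling, augmentations, and encoder are not shown.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss as used in SimCLR.

    z1, z2: (N, D) embeddings of two augmented views of the same N patches.
    This is the generic SimCLR objective only.
    """
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)        # (2N, D)
    sim = z @ z.t() / temperature                             # cosine similarities
    n = z1.size(0)

    # Mask self-similarities so each row's softmax runs over the 2N-1 others.
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))

    # The positive for row i is the other view of the same patch.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Example with random embeddings for 8 patches.
loss = nt_xent_loss(torch.randn(8, 128), torch.randn(8, 128))
print(loss.item())
```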