Lung cancer is one of the tumor diseases most threatening to human health, and early detection is crucial for improving patients' survival and recovery rates. Existing methods use two-dimensional multi-view frameworks to learn lung nodule features and simply integrate the multi-view features to classify benign and malignant lung nodules. However, these methods neither capture spatial features effectively nor account for the variability among views. Therefore, this paper proposes a three-dimensional (3D) multi-view convolutional neural network (MVCNN) framework. To further address the differences among views in the multi-view model, a 3D multi-view squeeze-and-excitation convolutional neural network (MVSECNN) model is constructed by introducing a squeeze-and-excitation (SE) module in the feature fusion stage. Finally, statistical methods are used to analyze the model predictions and doctor annotations. On an independent test set, the classification accuracy and sensitivity of the model were 96.04% and 98.59% respectively, higher than those of other state-of-the-art methods. The consistency score between the model predictions and the pathological diagnosis results was 0.948, significantly higher than that between the doctor annotations and the pathological diagnosis results. The proposed methods can effectively learn the spatial heterogeneity of lung nodules, address the problem of multi-view differences, and achieve the classification of benign and malignant lung nodules, which is of great value in assisting doctors with clinical diagnosis.
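The SE recalibration used in the fusion stage can be sketched in NumPy as follows; this is a minimal illustration only, in which the bottleneck weights `w1`/`w2` stand in for learned parameters and the per-view wiring is an assumption inferred from the abstract, not the trained MVSECNN:

```python
import numpy as np

def se_recalibrate(views, w1, w2):
    """Squeeze-and-excitation over a stack of view feature maps.

    views: array of shape (V, H, W) -- one feature map per view.
    w1: (V, V//r) reduction weights; w2: (V//r, V) expansion weights.
    Returns the view stack rescaled by a learned per-view gate.
    """
    # Squeeze: global average pooling per view -> (V,)
    z = views.mean(axis=(1, 2))
    # Excitation: bottleneck FC layers, ReLU then sigmoid
    s = np.maximum(z @ w1, 0.0)           # ReLU
    s = 1.0 / (1.0 + np.exp(-(s @ w2)))   # sigmoid gate in (0, 1)
    # Reweight each view's feature map by its gate value
    return views * s[:, None, None]
```

With trained weights, the gate would emphasize informative views and suppress redundant ones before fusion.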
Glioma is a primary brain tumor with a high incidence rate. High-grade gliomas (HGG) have the highest degree of malignancy and the lowest survival rate. Surgical resection followed by adjuvant chemoradiotherapy is the usual clinical treatment, so accurate segmentation of tumor-related regions is of great significance for patient care. To improve segmentation accuracy for HGG, this paper proposes a multi-modal glioma semantic segmentation network with multi-scale feature extraction and a multi-attention fusion mechanism. The main contributions are as follows: (1) multi-scale residual structures were used to extract features from multi-modal glioma magnetic resonance imaging (MRI); (2) two types of attention modules were used to aggregate features in the channel and spatial dimensions; (3) to improve the segmentation performance of the whole network, a branch classifier was constructed using an ensemble learning strategy to adjust and correct the classification results of the backbone classifier. Experimental results showed that the Dice coefficients of the proposed segmentation method were 0.909 7, 0.877 3 and 0.839 6 for the whole tumor, tumor core and enhancing tumor respectively, and the segmentation results had good boundary continuity in the three-dimensional direction. Therefore, the proposed semantic segmentation network has good segmentation performance for high-grade glioma lesions.
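The abstract does not specify how the branch classifier "adjusts and corrects" the backbone; one common ensemble strategy is to blend the two classifiers' per-voxel class probabilities and re-decide by argmax. The sketch below assumes such a weighted average, purely as an illustration of the idea:

```python
import numpy as np

def ensemble_correct(p_backbone, p_branch, alpha=0.7):
    """Blend backbone and branch classifier outputs per voxel.

    p_backbone, p_branch: (C, ...) class-probability maps over C classes.
    alpha weights the backbone; the blended map is re-decided by argmax.
    NOTE: weighted averaging is a placeholder for the paper's (unspecified)
    ensemble-learning correction rule.
    """
    p = alpha * p_backbone + (1.0 - alpha) * p_branch
    return p.argmax(axis=0)
```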
Speech imagery is an emerging brain-computer interface (BCI) paradigm with the potential to provide effective communication for individuals with speech impairments. This study designed a Chinese speech imagery paradigm using three clinically relevant words—“Help me”, “Sit up” and “Turn over”—and collected electroencephalography (EEG) data from 15 healthy subjects. Based on these data, a channel attention multi-scale convolutional neural network (CAM-Net) decoding algorithm was proposed, which combined multi-scale temporal convolutions with asymmetric spatial convolutions to extract multidimensional EEG features, and incorporated a channel attention mechanism along with a bidirectional long short-term memory network to perform channel weighting and capture temporal dependencies. Experimental results showed that CAM-Net achieved a classification accuracy of 48.54% in the three-class task, outperforming baseline models such as EEGNet and Deep ConvNet, and reached a best accuracy of 64.17% in the binary classification between “Sit up” and “Turn over”. This work provides a promising approach for future Chinese speech imagery BCI research and applications.
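The multi-scale temporal branch can be pictured as several parallel 1-D convolutions of different kernel lengths applied per EEG channel, with the branch outputs stacked. The sketch below uses moving-average kernels as placeholders for learned filters; the kernel sizes are assumptions, not CAM-Net's actual hyperparameters:

```python
import numpy as np

def multiscale_temporal_features(eeg, kernel_sizes=(3, 5, 7)):
    """Multi-scale temporal filtering of an EEG trial.

    eeg: array (channels, samples). Each scale filters every channel with
    a same-length 1-D kernel, mimicking parallel temporal convolution
    branches; the branches are stacked into (scales, channels, samples).
    """
    branches = []
    for k in kernel_sizes:
        kernel = np.ones(k) / k  # placeholder for learned filter weights
        filtered = np.stack(
            [np.convolve(ch, kernel, mode="same") for ch in eeg]
        )
        branches.append(filtered)
    return np.stack(branches)
```

In a real network these branch outputs would feed the asymmetric spatial convolutions and the channel attention stage.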
Accurate segmentation of ground glass nodules (GGNs) is clinically important, but it is challenging because GGNs in computed tomography (CT) images show blurred boundaries, irregular shapes, and uneven intensity. This paper addresses GGN segmentation by proposing a fully convolutional residual network, i.e., a residual network based on an atrous spatial pyramid pooling structure and an attention mechanism (ResAANet). The network uses the atrous spatial pyramid pooling (ASPP) structure to expand the receptive field of the feature maps and extract richer features, and utilizes the attention mechanism, residual connections and long skip connections to fully retain the sensitive features extracted by the convolutional layers. First, we employed 565 GGNs provided by Shanghai Chest Hospital to train and validate ResAANet and obtain a stable model. Then, two groups of data selected from clinical examinations (84 GGNs) and the lung image database consortium (LIDC) dataset (145 GGNs) were used to evaluate the performance of the proposed method. Finally, we applied a best-threshold method to remove false-positive regions and obtain optimized results. The average dice similarity coefficient (DSC) of the proposed algorithm on the clinical and LIDC datasets reached 83.46% and 83.26% respectively, the average Jaccard index (IoU) reached 72.39% and 71.56% respectively, and the segmentation speed reached 0.1 seconds per image. Compared with other reported methods, our method segments GGNs accurately, quickly and robustly, and can provide doctors with important information such as nodule size or density to assist subsequent diagnosis and treatment.
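The two reported metrics have standard definitions for binary masks; a minimal NumPy sketch (not the paper's evaluation code) is:

```python
import numpy as np

def dsc_and_iou(pred, target):
    """Dice similarity coefficient and Jaccard index for binary masks.

    pred, target: 0/1 arrays of the same shape.
    DSC = 2|A ∩ B| / (|A| + |B|); IoU = |A ∩ B| / |A ∪ B|.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dsc = 2.0 * inter / (pred.sum() + target.sum())
    iou = inter / union
    return float(dsc), float(iou)
```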
Conventional fault diagnosis of patient monitors relies heavily on manual experience, resulting in low diagnostic efficiency and ineffective use of fault maintenance text data. To address these issues, this paper proposes an intelligent fault diagnosis method for patient monitors based on multi-feature text representation, an improved bidirectional gated recurrent unit (BiGRU) and an attention mechanism. First, the fault text data were preprocessed, and word vectors containing multiple linguistic features were generated by a linguistically-motivated bidirectional encoder representation from Transformer. Then, bidirectional fault features were extracted and weighted by the improved BiGRU and the attention mechanism, respectively. Finally, a weighted loss function was used to reduce the impact of class imbalance on the model. The proposed method was verified on a patient monitor fault dataset, achieving a macro F1 value of 91.11%. The results show that the model can automatically classify fault texts and may provide decision support for intelligent fault diagnosis of patient monitors in the future.
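A common form of the weighted loss for class imbalance is class-weighted cross-entropy, where rare fault classes receive larger weights. The sketch below shows that idea in NumPy; the weight-normalization choice is an assumption, since the abstract does not give the exact formula:

```python
import numpy as np

def weighted_cross_entropy(probs, labels, class_weights):
    """Class-weighted cross-entropy loss.

    probs: (N, C) predicted class probabilities (rows sum to 1).
    labels: (N,) integer class indices.
    class_weights: (C,) larger weights amplify the loss of rare classes.
    Returns the weighted mean negative log-likelihood.
    """
    n = len(labels)
    picked = probs[np.arange(n), labels]   # probability of the true class
    w = class_weights[labels]              # per-sample weight
    return float(-(w * np.log(picked)).sum() / w.sum())
```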
To accurately capture and effectively integrate the spatiotemporal features of electroencephalogram (EEG) signals and thereby improve the accuracy of EEG-based emotion recognition, this paper proposes a new method combining independent component analysis and recurrence plots with an improved EfficientNet version 2 (EfficientNetV2). First, independent component analysis is used to extract independent components containing spatial information from key channels of the EEG signals. These components are then converted into two-dimensional images using recurrence plots to better extract emotional features from the temporal information. Finally, the two-dimensional images are input into an improved EfficientNetV2, which incorporates a global attention mechanism and a triplet attention mechanism, and the emotion class is output by the fully connected layer. To validate the effectiveness of the proposed method, this study conducted comparative experiments, channel selection experiments and ablation experiments on the Shanghai Jiao Tong University Emotion Electroencephalogram Dataset (SEED). The results demonstrate that the average recognition accuracy of our method is 96.77%, significantly superior to existing methods, offering a novel perspective for research on EEG-based emotion recognition.
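The recurrence-plot conversion has a simple core: mark every pair of time points whose states are closer than a threshold. A minimal NumPy sketch for a 1-D component is shown below; real pipelines often apply time-delay embedding first, which is omitted here:

```python
import numpy as np

def recurrence_plot(signal, eps):
    """Binary recurrence plot of a 1-D signal.

    R[i, j] = 1 when |x_i - x_j| < eps, i.e. the trajectory revisits a
    previous state. The resulting 2-D image exposes temporal structure
    that a 2-D CNN such as EfficientNetV2 can then classify.
    """
    x = np.asarray(signal, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])  # pairwise distances
    return (dist < eps).astype(np.uint8)
```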
Existing classification methods for myositis ultrasound images suffer from poor classification performance or high computational cost. Motivated by this difficulty, a lightweight neural network based on a soft threshold attention mechanism is proposed for better classification of idiopathic inflammatory myopathies (IIMs). The proposed network was constructed by alternately using depthwise separable convolution (DSC) and conventional convolution (CConv). Moreover, a soft threshold attention mechanism was leveraged to enhance the extraction of key features. Compared with the dual-branch feature fusion myositis classification network that currently has the highest classification accuracy, the accuracy of the proposed network increased by 5.9%, reaching 96.1%, while its computational complexity was only 0.25% of the existing method's. These results support that the proposed method can provide physicians with more accurate classification results at a lower computational cost, thereby greatly assisting clinical diagnosis.
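The soft-thresholding operation at the heart of such attention mechanisms shrinks small, noise-like activations to zero while preserving salient ones. In learned variants (e.g. deep residual shrinkage networks) the threshold is produced per channel by a small attention branch; the sketch below uses a fixed threshold for clarity:

```python
import numpy as np

def soft_threshold(x, tau):
    """Soft-thresholding: shrink values toward zero by tau.

    y = sign(x) * max(|x| - tau, 0). Activations with magnitude below
    tau are zeroed; larger ones are kept, reduced in magnitude by tau.
    """
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)
```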
Accurate segmentation of breast ultrasound images is an important precondition for lesion determination, yet existing segmentation approaches involve massive parameters, slow inference, and large memory consumption. To tackle this problem, we propose T2KD Attention U-Net (dual-teacher knowledge distillation attention U-Net), a lightweight semantic segmentation method for breast ultrasound images combining dual-path joint distillation. First, we designed two teacher models to learn fine-grained features from each class of images, according to the different feature representations and semantic information of benign and malignant breast lesions. Then we leveraged joint distillation to train a lightweight student model. Finally, we constructed a novel weight-balance loss focusing on the semantic features of small objects, alleviating the imbalance between tumor and background. Extensive experiments on Dataset BUSI and Dataset B demonstrated that T2KD Attention U-Net outperformed various knowledge distillation counterparts. Concretely, the accuracy, recall, precision, Dice, and mIoU of the proposed method were 95.26%, 86.23%, 85.09%, 83.59% and 77.78% on Dataset BUSI, respectively, and 97.95%, 92.80%, 88.33%, 88.40% and 82.42% on Dataset B, respectively. Compared with other models, the performance of this model was significantly improved. Meanwhile, compared with the teacher model, the parameter count, size, and complexity of the student model were significantly reduced (2.2×10⁶ vs. 106.1×10⁶, 8.4 MB vs. 414 MB, 16.59 GFLOPs vs. 205.98 GFLOPs, respectively). Indeed, the proposed model maintains performance while greatly decreasing the amount of computation, providing a new method for deployment in clinical medical scenarios.
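The soft-target term of a dual-teacher distillation objective can be sketched as a temperature-softened cross-entropy between the student and the averaged teacher distributions. Averaging the two teachers and the temperature value are assumptions here; the full T2KD objective also includes supervised and weight-balance terms not shown:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over a logit vector."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def dual_teacher_kd_loss(student_logits, t1_logits, t2_logits, T=2.0):
    """Soft-target distillation loss against two averaged teachers.

    Cross-entropy (KL divergence up to a constant) between the softened
    teacher mixture and the softened student distribution, scaled by T^2
    to keep gradient magnitudes comparable across temperatures.
    """
    p_t = 0.5 * (softmax(t1_logits, T) + softmax(t2_logits, T))
    p_s = softmax(student_logits, T)
    return float(-(p_t * np.log(p_s)).sum() * T * T)
```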
Magnetic resonance (MR) imaging is an important tool for prostate cancer diagnosis, and accurate computer-aided segmentation of prostate regions in MR images is important for the diagnosis of prostate cancer. In this paper, we propose an improved end-to-end three-dimensional image segmentation network based on the traditional V-Net, in order to provide more accurate segmentation results. First, we fused a soft attention mechanism into the skip connections of the traditional V-Net, and combined short skip connections and small convolution kernels to further improve segmentation accuracy. The prostate region was then segmented on the Prostate MR Image Segmentation 2012 (PROMISE 12) challenge dataset, and the model was evaluated with the dice similarity coefficient (DSC) and Hausdorff distance (HD). The DSC and HD of the segmentation model reached 0.903 and 3.912 mm, respectively. The experimental results show that the proposed algorithm provides more accurate three-dimensional segmentation results, accurately and efficiently segmenting prostate MR images and providing a reliable basis for clinical diagnosis and treatment.
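The Hausdorff distance used for evaluation is the larger of the two directed maximum surface-to-surface distances. A brute-force NumPy sketch over boundary point sets (coordinates assumed already scaled to mm) is:

```python
import numpy as np

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between two point sets.

    a: (N, d) and b: (M, d) arrays of boundary points, e.g. voxel
    coordinates of two segmentation surfaces. For each point the
    nearest neighbor in the other set is found; HD is the worst case
    over both directions.
    """
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M)
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))
```

Quadratic in the number of points, so practical toolkits use spatial indexing, but the definition is the same.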
Colorectal polyps are important early markers of colorectal cancer, and their early detection is crucial for cancer prevention. Although existing polyp segmentation models have achieved promising results, they still face challenges such as diverse polyp morphology, blurred boundaries, and insufficient feature extraction. To address these issues, this study proposes a parallel coordinate fusion network (PCFNet), aiming to improve the accuracy and robustness of polyp segmentation. PCFNet integrates parallel convolutional modules and a coordinate attention mechanism, preserving global feature information while precisely capturing detailed features, and thereby effectively segmenting polyps with complex boundaries. Experimental results on Kvasir-SEG and CVC-ClinicDB demonstrate the outstanding performance of PCFNet across multiple metrics. Specifically, on the Kvasir-SEG dataset, PCFNet achieved an F1-score of 0.897 4 and a mean intersection over union (mIoU) of 0.835 8; on the CVC-ClinicDB dataset, it attained an F1-score of 0.939 8 and an mIoU of 0.892 3. Compared with other methods, PCFNet shows significant improvements across all performance metrics, particularly in multi-scale feature fusion and spatial information capture, demonstrating its novelty. The proposed method provides a more reliable AI-assisted diagnostic tool for early colorectal cancer screening.
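The coordinate attention idea can be reduced to its pooling-and-gating skeleton: pool along each spatial axis separately so position information survives, turn the descriptors into gates, and apply them back to the map. The sketch below omits the learned 1×1 convolutions of the original module (treating them as identity), so it illustrates the data flow rather than PCFNet itself:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def coordinate_attention(x):
    """Skeleton of coordinate attention on a feature map x of shape (C, H, W).

    Direction-aware descriptors are built by pooling along width and
    height separately, converted into gates, and multiplied back onto
    the map, preserving positional information along each axis.
    """
    h_desc = x.mean(axis=2)            # pool over width  -> (C, H)
    w_desc = x.mean(axis=1)            # pool over height -> (C, W)
    a_h = sigmoid(h_desc)[:, :, None]  # height gate, broadcast over W
    a_w = sigmoid(w_desc)[:, None, :]  # width gate, broadcast over H
    return x * a_h * a_w
```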