Accurate segmentation of breast ultrasound images is an important precondition for lesion assessment. Existing segmentation approaches suffer from massive parameters, sluggish inference speed, and huge memory consumption. To tackle these problems, we propose T2KD Attention U-Net (dual-Teacher Knowledge Distillation Attention U-Net), a lightweight semantic segmentation method combining dual-path joint distillation for breast ultrasound images. First, we designed two teacher models to learn fine-grained features from each class of images, according to the different feature representations and semantic information of benign and malignant breast lesions. Then we leveraged joint distillation to train a lightweight student model. Finally, we constructed a novel weight-balance loss to focus on the semantic features of small objects, addressing the imbalance between tumor and background. Extensive experiments on Dataset BUSI and Dataset B demonstrated that T2KD Attention U-Net outperformed various knowledge distillation counterparts. Concretely, the accuracy, recall, precision, Dice, and mIoU of the proposed method were 95.26%, 86.23%, 85.09%, 83.59% and 77.78% on Dataset BUSI, respectively, and 97.95%, 92.80%, 88.33%, 88.40% and 82.42% on Dataset B. Compared with other models, the performance of this model was significantly improved. Meanwhile, compared with the teacher model, the parameter count, size, and complexity of the student model were significantly reduced (2.2×10⁶ vs. 106.1×10⁶ parameters, 8.4 MB vs. 414 MB, 16.59 GFLOPs vs. 205.98 GFLOPs, respectively). Indeed, the proposed model maintains performance while greatly decreasing the amount of computation, which provides a new method for deployment in clinical medical scenarios.
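The dual-teacher distillation idea above can be sketched as a loss that softens each teacher's logits with a temperature and pulls the student toward both. This is a minimal pure-Python illustration; the temperature, the equal-weight mixing parameter `alpha`, and the function names are assumptions for exposition, not the paper's exact formulation.

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; a higher T yields a softer distribution.
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, T=2.0):
    # KL divergence between the softened teacher and student distributions,
    # scaled by T^2 as in standard knowledge distillation.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return (T * T) * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def dual_teacher_kd_loss(student_logits, benign_teacher_logits,
                         malignant_teacher_logits, alpha=0.5, T=2.0):
    # Joint distillation: a weighted sum of the two per-class teacher losses.
    # The weighting scheme here is an illustrative assumption.
    return (alpha * kd_loss(student_logits, benign_teacher_logits, T)
            + (1 - alpha) * kd_loss(student_logits, malignant_teacher_logits, T))
```

When the student matches a teacher exactly, the corresponding KL term vanishes, so the loss rewards agreement with both teachers simultaneously.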
Objective: To systematically review the effect of media multitasking on working memory and attention among adolescents. Methods: CNKI, CBM, WanFang Data, VIP, PubMed, Web of Science, and EMbase databases were searched electronically to collect cross-sectional studies on the effect of media multitasking on working memory and attention among adolescents from inception to January 1st, 2021. Two reviewers independently screened the literature, extracted data, and assessed the risk of bias of the included studies; meta-analysis was then performed using Stata 15.1 software. Results: A total of 16 cross-sectional studies were included. The meta-analysis showed negative correlations between media multitasking and working memory (Cohen's d=0.40, 95%CI 0.14 to 0.66, P=0.003), as well as attention (Cohen's d=1.02, 95%CI 0.58 to 1.47, P<0.001). Conclusion: Current evidence shows that media multitasking has a negative impact on working memory and attention. Due to the limited quality and quantity of the included studies, more high-quality studies are required to verify the above conclusion.
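The pooled effect sizes above are Cohen's d values. As a minimal illustration of how this standardized mean difference is computed for a single two-group study (a pure-Python sketch, not the Stata meta-analytic procedure the review used):

```python
import math

def cohens_d(group1, group2):
    # Cohen's d: difference of group means divided by the pooled
    # standard deviation (sample variances, n-1 denominator).
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd
```

By convention, d around 0.2 is a small effect, 0.5 medium, and 0.8 large, which is why the attention effect (d=1.02) reported above is notable.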
Conventional fault diagnosis of patient monitors relies heavily on manual experience, resulting in low diagnostic efficiency and ineffective utilization of fault maintenance text data. To address these issues, this paper proposes an intelligent fault diagnosis method for patient monitors based on multi-feature text representation, an improved bidirectional gated recurrent unit (BiGRU), and an attention mechanism. Firstly, the fault text data were preprocessed, and word vectors containing multiple linguistic features were generated by a linguistically-motivated bidirectional encoder representation from Transformers. Then, bidirectional fault features were extracted and weighted by the improved BiGRU and the attention mechanism, respectively. Finally, a weighted loss function was used to reduce the impact of class imbalance on the model. To validate the effectiveness of the proposed method, this paper used a patient monitor fault dataset for verification, on which the macro F1 value reached 91.11%. The results show that the model built in this study can automatically classify fault text, and may provide decision support for intelligent fault diagnosis of patient monitors in the future.
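The weighted loss idea for class imbalance can be sketched as a per-class scaling of the cross-entropy term, so that rare fault categories contribute more to the gradient. This is a minimal pure-Python sketch; the inverse-frequency weighting heuristic shown is an assumption for illustration, not necessarily the paper's exact scheme.

```python
import math

def inverse_frequency_weights(counts):
    # A common weighting heuristic: weight each class by
    # total / (num_classes * count), so rarer classes get larger weights.
    total = sum(counts)
    k = len(counts)
    return [total / (k * c) for c in counts]

def weighted_cross_entropy(probs, label, class_weights):
    # Cross-entropy for one sample, scaled by the weight of its true class.
    return -class_weights[label] * math.log(probs[label])
```

With balanced classes the weights reduce to 1.0 and the loss falls back to plain cross-entropy; a class ten times rarer than average is penalized roughly ten times harder when misclassified.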
Attention level evaluation refers to assessing people's attention level through observation or experimental testing, and its research results have great application value in education and teaching, intelligent driving, medical health, and other fields. Owing to their objective reliability and security, electroencephalogram signals have become one of the most important technical means of analyzing and expressing attention level. At present, little review literature comprehensively summarizes the application of electroencephalogram signals in the field of attention evaluation. To this end, this paper first summarizes research progress on attention evaluation; then the important methods for electroencephalogram-based attention evaluation are analyzed, including data preprocessing, feature extraction and selection, and attention evaluation methods; finally, the shortcomings of current development in the field of electroencephalogram attention evaluation are discussed and future development trends are outlined, to provide references for researchers in related fields.
In response to the issues of single-scale information loss and large model parameter size during the sampling process in U-Net and its variants for medical image segmentation, this paper proposes a multi-scale medical image segmentation method based on pixel encoding and spatial attention. Firstly, by redesigning the input strategy of the Transformer structure, a pixel encoding module is introduced to enable the model to extract global semantic information from multi-scale image features, obtaining richer feature information. In addition, deformable convolutions are incorporated into the Transformer module to accelerate convergence and improve module performance. Secondly, a spatial attention module with residual connections is introduced to allow the model to focus on foreground information in the fused feature maps. Finally, guided by ablation experiments, the network is made lightweight to enhance segmentation accuracy and accelerate model convergence. The proposed algorithm achieves satisfactory results on the Synapse dataset, an official public multi-organ segmentation dataset provided by the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), with a Dice similarity coefficient (DSC) of 77.65 and a 95% Hausdorff distance (HD95) of 18.34. The experimental results demonstrate that the proposed algorithm can enhance multi-organ segmentation performance, potentially filling a gap among multi-scale medical image segmentation algorithms and assisting professional physicians in diagnosis.
Existing classification methods for myositis ultrasound images suffer from poor classification performance or high computational cost. Motivated by this difficulty, a lightweight neural network based on a soft threshold attention mechanism is proposed for better classification of idiopathic inflammatory myopathies (IIMs). The proposed network was constructed by alternately using depthwise separable convolution (DSC) and conventional convolution (CConv). Moreover, a soft threshold attention mechanism was leveraged to enhance the extraction of key features. Compared with the current dual-branch feature fusion myositis classification network with the highest classification accuracy, the classification accuracy of the proposed network increased by 5.9%, reaching 96.1%, while its computational complexity was only 0.25% of the existing method. These results support that the proposed method can provide physicians with more accurate classification results at a lower computational cost, thereby greatly assisting their clinical diagnosis.
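The soft thresholding operation at the core of such attention mechanisms shrinks feature values toward zero and zeroes out those whose magnitude falls below a threshold, suppressing noise-like activations. A minimal sketch of the elementwise operation (in practice the threshold tau is learned per channel by the attention branch; here it is a fixed argument for illustration):

```python
def soft_threshold(x, tau):
    # Soft thresholding: shrink |x| by tau, zeroing values whose
    # magnitude is below the threshold.
    if x > tau:
        return x - tau
    if x < -tau:
        return x + tau
    return 0.0
```

Applied across a feature map, this keeps only activations that exceed the learned threshold in magnitude, which is the mechanism the abstract credits with enhancing key-feature extraction.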
The processing mechanism of the human brain for speech information is a significant source of inspiration for the study of speech enhancement technology. Attention and lateral inhibition are key mechanisms in auditory information processing that can selectively enhance specific information. Building on this, the study introduces a dual-branch U-Net that integrates lateral inhibition and feedback-driven attention mechanisms. Noisy speech signals were input into the first branch of the U-Net, which selectively fed back time-frequency units with high confidence. The resulting activation-layer gradients, in conjunction with the lateral inhibition mechanism, were used to calculate attention maps. These maps were then concatenated to the second branch of the U-Net, directing the network's focus and achieving selective enhancement of the speech signal. The speech enhancement effect was evaluated using five metrics, including the perceptual evaluation of speech quality. The method was compared with five other methods: Wiener, SEGAN, PHASEN, Demucs and GRN. The experimental results demonstrated that the proposed method improved speech enhancement in various noise scenarios by 18% to 21% over the baseline network across multiple performance metrics. This improvement was particularly notable in low signal-to-noise ratio conditions, where the proposed method exhibited a significant performance advantage over the other methods. The speech enhancement technique based on lateral inhibition and feedback-driven attention mechanisms holds significant potential for auditory speech enhancement, making it suitable for clinical practices related to cochlear implants and hearing aids.
The brain-computer interface (BCI) based on motor imagery electroencephalography (MI-EEG) enables direct information interaction between the human brain and external devices. In this paper, a multi-scale EEG feature extraction convolutional neural network model based on time series data enhancement is proposed for decoding MI-EEG signals. First, an EEG signal augmentation method was proposed that could increase the information content of training samples without changing the length of the time series, while completely retaining their original features. Then, multiple holistic and detailed features of the EEG data were adaptively extracted by a multi-scale convolution module, and the features were fused and filtered by a parallel residual module and channel attention. Finally, classification results were output by a fully connected network. Experimental results on the BCI Competition IV 2a and 2b datasets showed that the proposed model achieved average classification accuracies of 91.87% and 87.85% on the motor imagery task, respectively, demonstrating higher accuracy and stronger robustness than existing baseline models. The proposed model does not require complex signal pre-processing operations and has the advantage of multi-scale feature extraction, giving it high practical application value.
Glioma is a primary brain tumor with a high incidence rate. High-grade gliomas (HGG) have the highest degree of malignancy and the poorest survival. Surgical resection and postoperative adjuvant chemoradiotherapy are often used in clinical treatment, so accurate segmentation of tumor-related areas is of great significance for patient treatment. In order to improve the segmentation accuracy for HGG, this paper proposes a multi-modal glioma semantic segmentation network with multi-scale feature extraction and a multi-attention fusion mechanism. The main contributions are: (1) multi-scale residual structures were used to extract features from multi-modal glioma magnetic resonance imaging (MRI); (2) two types of attention modules were used for feature aggregation in the channel and spatial dimensions; (3) to improve the segmentation performance of the whole network, a branch classifier was constructed using an ensemble learning strategy to adjust and correct the classification results of the backbone classifier. The experimental results showed that the Dice coefficient values of the proposed segmentation method were 0.9097, 0.8773 and 0.8396 for whole tumor, tumor core and enhancing tumor, respectively, and the segmentation results had good boundary continuity in the three-dimensional direction. Therefore, the proposed semantic segmentation network has good segmentation performance for high-grade glioma lesions.
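The Dice coefficient values reported above measure the overlap between a predicted and a reference segmentation mask. A minimal sketch of the computation for binary masks flattened to 0/1 lists (the smoothing term `eps`, which avoids division by zero on empty masks, is an illustrative convention):

```python
def dice_coefficient(pred, target, eps=1e-7):
    # Dice similarity between two binary masks: 2|A ∩ B| / (|A| + |B|).
    intersection = sum(p * t for p, t in zip(pred, target))
    return (2.0 * intersection + eps) / (sum(pred) + sum(target) + eps)
```

A Dice value of 1.0 indicates perfect overlap and 0.0 indicates none, so scores of about 0.91, 0.88 and 0.84 for the three tumor sub-regions indicate substantial agreement with the reference annotations.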
Objective: To observe the effect of sensory integration training combined with methylphenidate hydrochloride on attention deficit hyperactivity disorder (ADHD).
Methods: The clinical data of 96 patients with ADHD diagnosed between January 2009 and March 2013 were retrospectively analyzed. The patients were divided into two groups using a random number table. The trial group (n=48) received sensory integration training combined with methylphenidate hydrochloride, while the control group (n=48) received methylphenidate hydrochloride alone. The scores of the sensory integration ability rating scale, the integrated visual and auditory continuous performance test (IVA-CPT), Conner's behavior rating scale, and the Chinese Wechsler Intelligence Scale for Children (C-WISC), as well as adverse reactions, were observed and compared between the two groups.
Results: The scores of the sensory integration ability rating scale, FRCQ and FAQ (IVA-CPT), and PIQ, VIQ, FIQ and C factor (C-WISC) in both groups were significantly higher after therapy, while the scores for study, behavior, somatopsychic disturbance, impulsion, hyperactivity index and anxiety factors significantly decreased after treatment (P<0.05). Compared with the control group, the trial group's scores on the sensory integration ability rating scale, IVA-CPT, Conner's behavior rating scale and C-WISC improved markedly, and adverse reactions were significantly fewer (P<0.05).
Conclusion: Sensory integration training combined with methylphenidate hydrochloride is safe and effective for children with attention deficit hyperactivity disorder.