In the diagnosis of cardiovascular diseases, the analysis of electrocardiogram (ECG) signals has always played a crucial role. At present, effectively identifying abnormal heartbeats with algorithms remains a difficult task in the field of ECG signal analysis. To address this, a classification model that automatically identifies abnormal heartbeats based on a deep residual network (ResNet) and a self-attention mechanism was proposed. First, this paper designed an 18-layer convolutional neural network (CNN) based on the residual structure, which helped the model fully extract local features. Then, a bi-directional gated recurrent unit (BiGRU) was used to explore temporal correlations and further obtain temporal features. Finally, a self-attention mechanism was built to weight important information and enhance the model's ability to extract important features, helping the model achieve higher classification accuracy. In addition, to mitigate the interference of data imbalance with classification performance, the study used multiple approaches for data augmentation. The experimental data in this study came from the arrhythmia database constructed by MIT and Beth Israel Hospital (MIT-BIH), and the final results showed that the proposed model achieved an overall accuracy of 98.33% on the original dataset and 99.12% on the optimized dataset, demonstrating that the proposed model can achieve good performance in ECG signal classification and has potential value for application in portable ECG detection devices.
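The self-attention step described above re-weights the BiGRU's temporal features by their pairwise relevance. The following is a minimal numpy sketch of scaled dot-product self-attention; the random projection matrices stand in for the learned weights of the actual model, and the sequence length and feature dimension are illustrative, not the paper's settings.

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention over a feature sequence.

    x: (seq_len, d) array of temporal features (e.g. BiGRU outputs).
    The query/key/value projections here are fixed random matrices;
    in the actual model they would be learned parameters.
    """
    seq_len, d = x.shape
    rng = np.random.default_rng(0)
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    scores = Q @ K.T / np.sqrt(d)                   # (seq_len, seq_len)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # re-weighted features

feats = np.random.default_rng(1).standard_normal((8, 16))  # 8 steps, 16-dim
out = self_attention(feats)
print(out.shape)  # (8, 16)
```

Each output step is a convex combination of all value vectors, which is how the mechanism lets diagnostically important segments of the heartbeat dominate the final representation.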
Sleep stage classification is essential for clinical disease diagnosis and sleep quality assessment. Most existing methods for sleep stage classification are based on a single-channel or single-modal signal and extract features using a single-branch deep convolutional network, which not only hinders the capture of diverse sleep-related features and increases the computational cost, but also affects the accuracy of sleep stage classification. To solve this problem, this paper proposes an end-to-end multi-modal physiological time-frequency feature extraction network (MTFF-Net) for accurate sleep stage classification. First, multi-modal physiological signals comprising electroencephalogram (EEG), electrocardiogram (ECG), electrooculogram (EOG) and electromyogram (EMG) are converted into two-dimensional time-frequency images containing time-frequency features by using the short-time Fourier transform (STFT). Then, a time-frequency feature extraction network combining a multi-scale EEG compact convolution network (Ms-EEGNet) and a bidirectional gated recurrent unit (Bi-GRU) network is used to obtain multi-scale spectral features related to sleep feature waveforms and time-series features related to sleep stage transitions. According to the American Academy of Sleep Medicine (AASM) EEG sleep stage classification criterion, the model achieved 84.3% accuracy in the five-class task on the third subgroup of the Institute of Systems and Robotics of the University of Coimbra Sleep Dataset (ISRUC-S3), with an 83.1% macro F1 score and a 79.8% Cohen's Kappa coefficient. The experimental results show that the proposed model achieves higher classification accuracy and promotes the application of deep learning algorithms in assisting clinical decision-making.
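The first stage of the pipeline above turns each 1-D physiological channel into a 2-D time-frequency image via the STFT. Below is a minimal numpy sketch of that conversion; the window length, hop size, and sampling rate are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def stft_image(signal, win_len=256, hop=128):
    """Convert a 1-D physiological signal into a time-frequency magnitude
    image: slide a Hann window along the signal, FFT each frame, and stack
    the magnitude spectra into a (freq_bins, time_frames) image."""
    window = np.hanning(win_len)
    n_frames = 1 + (len(signal) - win_len) // hop
    frames = np.stack([signal[i * hop : i * hop + win_len] * window
                       for i in range(n_frames)])
    spec = np.abs(np.fft.rfft(frames, axis=1))  # (n_frames, win_len//2 + 1)
    return spec.T                               # (freq_bins, time_frames)

fs = 200                                        # assumed sampling rate (Hz)
t = np.arange(30 * fs) / fs                     # one 30-second epoch
eeg_like = np.sin(2 * np.pi * 10 * t)           # 10 Hz alpha-band test tone
img = stft_image(eeg_like)
print(img.shape)  # (129, 45)
```

For the 10 Hz test tone, the energy concentrates in a single horizontal band of the image (around bin 10 * 256 / 200 ≈ 12.8), which is exactly the kind of band structure the downstream convolutional network learns to recognize for sleep feature waveforms.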
Objective To propose a lightweight end-to-end neural network model for automated Korotkoff sound phase recognition and subsequent blood pressure (BP) measurement, aiming to improve measurement accuracy and population adaptability. Methods We developed a streamlined architecture integrating depthwise separable convolution (DSConv), multi-head attention (MHA), and a bidirectional gated recurrent unit (BiGRU). The model directly processes Korotkoff sound time-series signals to identify auscultatory phases. Systolic BP (SBP) and diastolic BP (DBP) were determined using phase Ⅰ and phase Ⅴ detections, respectively. Given the clinical relevance of phase Ⅳ for specific populations (e.g., children and pregnant women, denoted as DBPⅣ), BP values from this phase were also recorded. Results The study enrolled 106 volunteers, 70 males and 36 females, with a mean age of (40.0±12.0) years. The model achieved 94.25% phase recognition accuracy. Measurement errors were (0.1±2.5) mm Hg (SBP), (0.9±3.4) mm Hg (DBPⅣ), and (0.8±2.6) mm Hg (DBP). Conclusion Our method enables precise phase recognition and BP measurement, demonstrating potential for developing population-adaptive blood pressure monitoring systems.
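The depthwise separable convolution that keeps this architecture lightweight factors a standard convolution into a per-channel (depthwise) filter followed by a 1x1 channel-mixing (pointwise) step. The numpy sketch below illustrates the idea on a 1-D signal; all shapes, kernel sizes, and channel counts are illustrative assumptions, not the model's actual configuration.

```python
import numpy as np

def depthwise_separable_conv1d(x, dw_kernels, pw_weights):
    """Depthwise separable 1-D convolution (minimal DSConv sketch).

    x:          (channels, length) input features
    dw_kernels: (channels, k) one filter per input channel (depthwise step)
    pw_weights: (out_channels, channels) 1x1 channel mixing (pointwise step)
    """
    c = x.shape[0]
    # depthwise: convolve each channel with its own kernel ('valid' mode)
    dw = np.stack([np.convolve(x[i], dw_kernels[i], mode="valid")
                   for i in range(c)])  # (c, length - k + 1)
    # pointwise: mix channels with a 1x1 convolution
    return pw_weights @ dw              # (out_channels, length - k + 1)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 100))   # 4 channels, 100 samples
dw = rng.standard_normal((4, 5))    # kernel size 5
pw = rng.standard_normal((8, 4))    # expand to 8 output channels
y = depthwise_separable_conv1d(x, dw, pw)
print(y.shape)  # (8, 96)
```

Compared with a standard convolution's c_in * k * c_out weights, this factorization needs only c_in * k + c_in * c_out, which is what makes the model small enough for embedded BP measurement devices.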