To reduce the label dependency of traditional electroencephalogram (EEG) emotion recognition methods and to address the limitations of existing contrastive learning approaches in modeling cross-stimulus emotional similarity, this paper proposes a group-level stimulus-aware self-supervised soft contrastive learning framework (GSCL) for EEG emotion recognition. GSCL constructs contrastive learning tasks based on the consistency of subjects' brain activities under identical stimuli and incorporates a soft assignment mechanism that adaptively adjusts the weights of negative sample pairs according to inter-sample distances, thereby enhancing representation quality. In addition, a learnable shuffling-splitting data augmentation method is designed to dynamically optimize the data distribution via learnable shuffling parameters. On the public emotion dataset DEAP, the proposed method achieves accuracies of 94.91%, 95.29%, and 92.78% on the valence, arousal, and four-class classification tasks, respectively, while on the Shanghai Jiao Tong University Emotion EEG Dataset (SEED) its three-class classification accuracy reaches 95.25%. These results demonstrate that the proposed method yields higher classification accuracy and offers new insight into self-supervised EEG emotion recognition.
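To make the group-level soft contrastive objective concrete, the sketch below treats EEG segment embeddings recorded under the same stimulus as positives and weights each negative pair by its embedding distance, so that near-duplicate "negatives" are repelled less strongly. This is a minimal illustration, not the paper's implementation: the function name, temperature, and the particular distance-based weighting scheme are all assumptions, since the abstract does not specify the exact soft assignment function.

```python
import torch
import torch.nn.functional as F

def group_soft_contrastive_loss(z, stimulus_ids, temperature=0.1):
    """Illustrative group-level soft contrastive loss (hypothetical sketch).

    z            : (N, D) embeddings of EEG segments from several subjects
    stimulus_ids : (N,) index of the stimulus each segment was recorded under;
                   segments sharing a stimulus are treated as positives
    """
    z = F.normalize(z, dim=1)
    sim = (z @ z.t()) / temperature                     # pairwise similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)

    pos = (stimulus_ids.unsqueeze(0) == stimulus_ids.unsqueeze(1)) & ~self_mask
    neg = ~pos & ~self_mask

    # Soft assignment (assumed form): scale each negative's weight by its
    # embedding distance to the anchor, so distant negatives count more
    # and semantically similar negatives are down-weighted.
    with torch.no_grad():
        dist = torch.cdist(z, z)
        row_max = dist.masked_fill(~neg, 0).max(dim=1, keepdim=True).values
        w = dist / row_max.clamp_min(1e-8)              # weights in [0, 1]

    exp_sim = sim.exp()
    pos_term = (exp_sim * pos).sum(dim=1)
    neg_term = (exp_sim * w * neg).sum(dim=1)
    loss = -torch.log(pos_term / (pos_term + neg_term + 1e-8) + 1e-8)
    return loss[pos.any(dim=1)].mean()                  # skip anchors with no positives

# Example: 8 segments from subjects watching 2 stimuli (4 segments each).
z = torch.randn(8, 128)
stimulus_ids = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
print(group_soft_contrastive_loss(z, stimulus_ids))
```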
Sleep stage classification is essential for clinical disease diagnosis and sleep quality assessment. Most existing sleep stage classification methods rely on single-channel or single-modality signals and extract features with a single-branch deep convolutional network, which not only hinders the capture of diverse sleep-related features and increases the computational cost, but also limits the accuracy of sleep stage classification. To solve this problem, this paper proposes an end-to-end multi-modal physiological time-frequency feature extraction network (MTFF-Net) for accurate sleep stage classification. First, multi-modal physiological signals comprising electroencephalogram (EEG), electrocardiogram (ECG), electrooculogram (EOG), and electromyogram (EMG) are converted into two-dimensional time-frequency images by the short-time Fourier transform (STFT). Then, a time-frequency feature extraction network combining a multi-scale EEG compact convolutional network (Ms-EEGNet) and a bidirectional gated recurrent unit (Bi-GRU) network extracts multi-scale spectral features associated with characteristic sleep waveforms and temporal features associated with sleep stage transitions. Following the American Academy of Sleep Medicine (AASM) sleep staging criteria, the model achieved 84.3% accuracy on the five-class task on the third subgroup of the Institute of Systems and Robotics of the University of Coimbra Sleep Dataset (ISRUC-S3), with a macro F1 score of 83.1% and a Cohen's kappa coefficient of 79.8%. The experimental results show that the proposed model achieves higher classification accuracy and promotes the application of deep learning algorithms in assisting clinical decision-making.
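As a minimal sketch of the first stage described above, the function below converts one multi-modal 30 s epoch into a stack of per-channel log-magnitude STFT images. The sampling rate, window length, and overlap are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
from scipy.signal import stft

def to_time_frequency_images(epoch, fs=200, nperseg=256, noverlap=192):
    """Convert one multi-modal epoch into 2-D time-frequency images.

    epoch : (n_channels, n_samples) array whose rows are EEG/ECG/EOG/EMG
            traces; fs, nperseg, noverlap are assumed parameters.
    Returns an (n_channels, n_freqs, n_frames) array of spectrograms.
    """
    images = []
    for channel in epoch:
        f, t, Zxx = stft(channel, fs=fs, nperseg=nperseg, noverlap=noverlap)
        images.append(np.log1p(np.abs(Zxx)))  # log scale compresses dynamic range
    return np.stack(images)

# Example: a fake 4-channel, 30 s epoch sampled at 200 Hz.
epoch = np.random.randn(4, 30 * 200)
print(to_time_frequency_images(epoch).shape)
```

The resulting image stack can then be fed channel-wise to a convolutional front end such as the Ms-EEGNet/Bi-GRU pipeline the abstract describes.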