As the most common active brain-computer interaction paradigm, the motor imagery brain-computer interface (MI-BCI) suffers from the bottlenecks of a small instruction set and low accuracy, which severely limit its information transfer rate (ITR) and practical application. In this study, we designed six classes of imagined actions, collected electroencephalogram (EEG) signals from 19 subjects, and investigated the effect of a collaborative brain-computer interface (cBCI) strategy on MI-BCI classification performance, comparing the effects of different group sizes and fusion strategies on group multi-class classification. The results showed that the most suitable group size was four people and the best fusion strategy was decision fusion. Under this condition, the group classification accuracy reached 77.31%, which was higher than that of the feature fusion strategy with the same group size (77.31% vs. 56.34%) and significantly higher than the average single-user accuracy (77.31% vs. 44.90%). This study demonstrates that the cBCI collaboration strategy can effectively improve MI-BCI classification performance, laying a foundation for MI-cBCI research and its future application.
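The abstract does not specify the exact decision fusion rule, so the following is only a minimal sketch of decision-level fusion for a four-member group on a six-class task, assuming each member's classifier outputs per-class probabilities that are averaged before taking the argmax.

```python
import numpy as np

def decision_fusion(member_probs):
    """member_probs: array of shape (n_members, n_trials, n_classes)."""
    group_probs = member_probs.mean(axis=0)   # average decisions across group members
    return group_probs.argmax(axis=1)         # fused class label per trial

# Toy example: 4 members, 10 trials, 6 imagined actions (synthetic probabilities).
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(6), size=(4, 10))
print(decision_fusion(probs))
```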
Current studies on electroencephalogram (EEG) emotion recognition primarily concentrate on discrete stimulus paradigms under controlled laboratory settings, which cannot adequately capture the dynamic transitions of emotional states during multi-context interactions. To address this issue, this paper proposes a novel emotion transition recognition method based on a cross-modal feature fusion and global perception network (CFGPN). Firstly, an experimental paradigm encompassing six types of emotion transition scenarios was designed, and EEG and eye movement data were simultaneously collected from 20 participants and annotated with dynamic continuous emotion labels. Subsequently, deep canonical correlation analysis integrated with a cross-modal attention mechanism was employed to fuse EEG and eye movement features, yielding multimodal feature vectors enriched with highly discriminative emotional information. These vectors were then fed into a parallel hybrid architecture combining convolutional neural networks (CNNs) and Transformers: the CNN captures local time-series features, whereas the Transformer exploits its global perception capability to model long-range temporal dependencies, enabling accurate dynamic emotion transition recognition. The results demonstrate that the proposed method achieves the lowest mean square error in both valence and arousal recognition on the dynamic emotion transition dataset and on a classic multimodal emotion dataset, and exhibits superior recognition accuracy and stability compared with five existing unimodal and six multimodal deep learning models. The approach enhances the adaptability and robustness of recognizing emotional state transitions in real-world scenarios, showing promising potential for applications in biomedical engineering.
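As an illustration only (not the authors' exact CFGPN), the sketch below shows a parallel CNN + Transformer hybrid over an already-fused EEG/eye-movement feature sequence, with a two-dimensional regression head for continuous valence and arousal. The feature dimension, sequence length, and layer sizes are assumptions, and the cross-modal fusion stage is omitted.

```python
import torch
import torch.nn as nn

class ParallelCNNTransformer(nn.Module):
    def __init__(self, feat_dim=64, seq_len=128):
        super().__init__()
        # CNN branch: local temporal patterns
        self.cnn = nn.Sequential(
            nn.Conv1d(feat_dim, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        # Transformer branch: long-range temporal dependencies
        enc_layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(enc_layer, num_layers=2)
        # Regression head for continuous valence/arousal
        self.head = nn.Linear(64 + feat_dim, 2)

    def forward(self, x):                              # x: (batch, seq_len, feat_dim)
        c = self.cnn(x.transpose(1, 2)).mean(dim=2)    # (batch, 64)
        t = self.transformer(x).mean(dim=1)            # (batch, feat_dim)
        return self.head(torch.cat([c, t], dim=1))     # (batch, 2): valence, arousal

model = ParallelCNNTransformer()
out = model(torch.randn(8, 128, 64))                   # -> torch.Size([8, 2])
```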
Existing emotion recognition research is typically limited to static laboratory settings and does not adequately address the changes of emotional states in dynamic scenarios. To address this problem, this paper proposes a dynamic continuous emotion recognition method based on electroencephalography (EEG) and eye movement signals. Firstly, an experimental paradigm was designed to cover six dynamic emotion transition scenarios: happy to calm, calm to happy, sad to calm, calm to sad, nervous to calm, and calm to nervous. EEG and eye movement data were collected simultaneously from 20 subjects, filling the gap in current multimodal dynamic continuous emotion datasets. In the valence-arousal two-dimensional space, emotion ratings of the stimulus videos were given every five seconds on a scale of 1 to 9, and the dynamic continuous emotion labels were normalized. Subsequently, frequency band features were extracted from the preprocessed EEG and eye movement data, and a cascade feature fusion approach was used to combine them into an information-rich multimodal feature vector. This feature vector was fed into four regression models, namely support vector regression with a radial basis function kernel, decision tree, random forest, and K-nearest neighbors, to develop the dynamic continuous emotion recognition model. The results showed that the proposed method achieved the lowest mean square error for valence and arousal across the six dynamic continuous emotions. The approach accurately recognizes various emotion transitions in dynamic situations, offering higher accuracy and robustness than using either EEG or eye movement signals alone, making it well-suited for practical applications.
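A minimal sketch of the cascade (concatenation) fusion and regression step is given below, using scikit-learn. The synthetic data and feature dimensions are assumptions standing in for the EEG band features, eye movement features, and the continuous valence labels sampled every five seconds.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
eeg_feats = rng.normal(size=(600, 160))   # e.g., EEG band-power features
eye_feats = rng.normal(size=(600, 30))    # e.g., eye-movement features
valence = rng.uniform(1, 9, size=600)     # continuous ratings on a 1-9 scale

fused = np.concatenate([eeg_feats, eye_feats], axis=1)   # cascade feature fusion
X_tr, X_te, y_tr, y_te = train_test_split(fused, valence, test_size=0.2, random_state=0)

for name, model in [("SVR-RBF", SVR(kernel="rbf")),
                    ("RandomForest", RandomForestRegressor(random_state=0))]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name, "MSE:", mean_squared_error(y_te, pred))
```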
Lung nodules are the main manifestation of early lung cancer, so accurate detection of lung nodules is of great significance for the early diagnosis and treatment of lung cancer. However, rapid and accurate detection of pulmonary nodules remains challenging because pulmonary computed tomography (CT) images have complex backgrounds and a large detection range, and the nodules vary in size and shape. Therefore, this paper proposes a multi-scale feature fusion algorithm for the automatic detection of pulmonary nodules. Firstly, a three-tier modular detection model was designed based on the VGG16 deep convolutional network for large-scale image recognition. The first-tier module extracts the features of pulmonary nodules in CT images and roughly estimates their locations. The second-tier module fuses multi-scale image features to further enhance nodule details. The third-tier module fuses and analyzes the features from the first and second tiers to obtain multi-scale candidate boxes of pulmonary nodules. Finally, the multi-scale candidate boxes were analyzed by non-maximum suppression to obtain the final nodule locations. The algorithm was validated on the public LIDC-IDRI dataset and achieved an average detection accuracy of 90.9%.
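The sketch below illustrates only the final non-maximum suppression (NMS) step applied to multi-scale candidate boxes; the detection network itself is omitted. The box format (x1, y1, x2, y2), the IoU threshold, and the toy boxes are assumptions for illustration.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring boxes, suppressing overlaps above iou_thresh."""
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # IoU between the current top box and the remaining candidates
        x1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        y1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        x2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        y2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, x2 - x1) * np.maximum(0, y2 - y1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_r - inter)
        order = order[1:][iou < iou_thresh]
    return keep

boxes = np.array([[10, 10, 50, 50], [12, 12, 52, 52], [100, 100, 140, 140]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))   # the second box overlaps the first and is suppressed
```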
To solve current problems in medical equipment maintenance, this study proposed an intelligent fault diagnosis method for medical equipment based on a long short-term memory (LSTM) network. Firstly, with no circuit drawings available and the signal flow of the circuit boards unknown, the symptom phenomena and port electrical signals of 7 fault categories were collected and preprocessed by feature coding, normalization, fusion, and screening. Then, an intelligent fault diagnosis model was built based on LSTM, and the fused and screened multi-modal features were used for fault diagnosis classification and identification experiments. The results were compared with those using the port electrical signals alone, the symptom phenomena alone, and the fusion of the two. In addition, the fault diagnosis algorithm was compared with the back propagation neural network (BPNN), recurrent neural network (RNN), and convolutional neural network (CNN). The results show that, based on the fused and screened multi-modal features, the average classification accuracy of the LSTM model reaches 0.9709, higher than that using the port electrical signals alone, the symptom phenomena alone, or the fusion of the two. It also achieves higher accuracy than BPNN, RNN, and CNN, providing a feasible new approach for intelligent fault diagnosis of similar equipment.
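As a rough illustration (not the authors' exact model), the sketch below shows an LSTM classifier that maps a fused multi-modal feature sequence to one of 7 fault categories. The sequence length, feature dimension, and hidden size are assumptions.

```python
import torch
import torch.nn as nn

class FaultLSTM(nn.Module):
    def __init__(self, feat_dim=16, hidden=64, n_classes=7):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):              # x: (batch, seq_len, feat_dim) fused features
        _, (h, _) = self.lstm(x)       # h: final hidden state, (1, batch, hidden)
        return self.fc(h[-1])          # logits over the 7 fault categories

model = FaultLSTM()
logits = model(torch.randn(4, 20, 16))   # -> torch.Size([4, 7])
```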
Signal classification is a key issue in brain-computer interfaces (BCI). In this paper, we present a new method, wrapped elastic net feature selection and classification, for classifying electroencephalogram (EEG) signals with heterogeneous features. Firstly, we jointly applied time-domain statistics, power spectral density (PSD), common spatial pattern (CSP), and autoregressive (AR) model features to extract high-dimensional fused features from the preprocessed EEG signals. Then we used a wrapped method for feature selection: we fitted a logistic regression model penalized with the elastic net on the training data, obtained the parameter estimates by coordinate descent, and selected the best feature subset by 10-fold cross-validation. Finally, we classified the test samples with the trained model. The data used in the experiment were EEG data from the international BCI Competition IV. The results showed that the proposed method is suitable for selecting from high-dimensional fused features; for identifying EEG signals, it is more effective and faster, and can single out a more relevant feature subset to obtain a relatively simple model. The average test accuracy reached 81.78%.
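A minimal sketch of the core step, elastic-net-penalized logistic regression evaluated with 10-fold cross-validation on high-dimensional fused features, is shown below. scikit-learn's "saga" solver is used here rather than the coordinate descent fit described in the abstract, and the synthetic data and l1_ratio are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 300))      # high-dimensional fused time/PSD/CSP/AR features
y = rng.integers(0, 2, size=200)     # two motor imagery classes

clf = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="elasticnet", solver="saga",
                       l1_ratio=0.5, C=1.0, max_iter=5000),
)
scores = cross_val_score(clf, X, y, cv=10)
print("10-fold CV accuracy:", scores.mean())
```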
In lower limb rehabilitation training, fatigue estimation is of great significance for improving the accuracy of intention recognition and avoiding secondary injury. However, most existing methods consider only surface electromyography (sEMG) features and ignore electrocardiogram (ECG) features when estimating fatigue, which leads to low and unstable recognition performance. To address this problem, a method that uses fused features of ECG and sEMG signals to estimate fatigue during lower limb rehabilitation was proposed, and an improved particle swarm optimization-support vector machine (improved PSO-SVM) classifier was designed to identify the fused feature vectors. The three states of relaxation, transition, and fatigue were accurately recognized, with recognition rates of 98.5%, 93.5%, and 95.5%, respectively. Comparative experiments showed that the average recognition rate of this method was 4.50% higher than that using sEMG features alone, and 13.66% higher than that using the combined ECG and sEMG features without feature fusion. These results demonstrate that feature fusion of ECG and sEMG signals during lower limb rehabilitation training enables more accurate fatigue recognition.
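The sketch below shows a plain PSO search (not the paper's "improved" variant) over the SVM hyperparameters C and gamma, used to classify fused ECG/sEMG fatigue features into relaxation, transition, and fatigue. Swarm size, iteration count, PSO coefficients, and the synthetic data are all assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))           # fused ECG + sEMG features
y = rng.integers(0, 3, size=300)         # relaxation / transition / fatigue

def fitness(params):
    C, gamma = np.exp(params)            # search C and gamma in log space
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

n_particles, n_iter = 10, 15
pos = rng.uniform(-3, 3, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    fit = np.array([fitness(p) for p in pos])
    better = fit > pbest_fit
    pbest[better], pbest_fit[better] = pos[better], fit[better]
    gbest = pbest[pbest_fit.argmax()].copy()

print("best C, gamma:", np.exp(gbest), "CV accuracy:", pbest_fit.max())
```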
Assessing the emotional state induced by music may provide theoretical support for music-assisted therapy. The key to assessing the emotional state is feature extraction from the emotional electroencephalogram (EEG). In this paper, we study the performance optimization of the feature extraction algorithm. The public multimodal database for emotion analysis using physiological signals (DEAP) proposed by Koelstra et al. was used. Eight kinds of positive and negative emotions were extracted from the dataset, covering fourteen channels from different brain regions. Based on the wavelet transform, the δ, θ, α and β rhythms were extracted. This paper analyzed and compared the performance of three kinds of EEG features for emotion classification, namely wavelet features (wavelet coefficient energy and wavelet entropy), approximate entropy, and the Hurst exponent. On this basis, an EEG feature fusion algorithm based on principal component analysis (PCA) was proposed: the principal components with a cumulative contribution rate of more than 85% were retained, and the parameters with large variation in eigenvalues were selected. A support vector machine was used to assess the emotional state. The results showed that the average classification accuracies with wavelet features, approximate entropy, and the Hurst exponent were 73.15%, 50.00%, and 45.54%, respectively. By combining the three kinds of features, the PCA-fused features achieved an accuracy of about 85%, at least 12% higher than that of any single feature, providing support for emotional EEG feature extraction and music therapy.
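A minimal sketch of the PCA-based fusion of the three feature types followed by SVM classification is given below, using scikit-learn. The synthetic data and per-channel feature sizes are assumptions; PCA is set to retain components up to 85% cumulative explained variance, as stated in the abstract.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
wavelet = rng.normal(size=(320, 14 * 8))   # wavelet energy/entropy per channel and rhythm
apen = rng.normal(size=(320, 14))          # approximate entropy per channel
hurst = rng.normal(size=(320, 14))         # Hurst exponent per channel
y = rng.integers(0, 2, size=320)           # positive vs. negative emotion labels

fused = np.concatenate([wavelet, apen, hurst], axis=1)
clf = make_pipeline(StandardScaler(),
                    PCA(n_components=0.85),   # keep 85% cumulative explained variance
                    SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(clf, fused, y, cv=5).mean())
```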
Objective To propose a heart sound segmentation method based on a multi-feature fusion network. Methods Data were obtained from the CinC/PhysioNet 2016 Challenge dataset (a total of 3 153 recordings from 764 patients, about 91.93% of whom were male, with an average age of 30.36 years). Firstly, features were extracted in the time domain and the time-frequency domain, and redundant features were reduced by dimensionality reduction. Then, the best-performing features were selected separately from the two feature spaces through feature selection. Next, multi-feature fusion was completed through multi-scale dilated convolution, cooperative fusion, and a channel attention mechanism. Finally, the fused features were fed into a bidirectional gated recurrent unit (BiGRU) network to obtain the heart sound segmentation results. Results The proposed method achieved a precision, recall, and F1 score of 96.70%, 96.99%, and 96.84%, respectively. Conclusion The multi-feature fusion network proposed in this study achieves better heart sound segmentation performance and can provide high-accuracy segmentation support for the automatic analysis of heart diseases based on heart sounds.
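The sketch below illustrates only the final BiGRU stage: a fused per-frame feature sequence is mapped to a per-frame segmentation state (commonly S1, systole, S2, diastole). The feature dimension, hidden size, and state count are assumptions, and the dilated-convolution fusion and channel attention stages are omitted.

```python
import torch
import torch.nn as nn

class BiGRUSegmenter(nn.Module):
    def __init__(self, feat_dim=40, hidden=64, n_states=4):
        super().__init__()
        self.bigru = nn.GRU(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_states)

    def forward(self, x):               # x: (batch, n_frames, feat_dim) fused features
        h, _ = self.bigru(x)            # (batch, n_frames, 2*hidden)
        return self.fc(h)               # per-frame state logits

model = BiGRUSegmenter()
logits = model(torch.randn(2, 500, 40))   # -> torch.Size([2, 500, 4])
```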
Diabetic retinopathy (DR) and its complication, diabetic macular edema (DME), are major causes of visual impairment and even blindness. The occurrence of DR and DME is pathologically interconnected, and their clinical diagnoses are closely related. Joint learning can help improve the accuracy of diagnosis. This paper proposed a novel adaptive lesion-aware fusion network (ALFNet) to facilitate the joint grading of DR and DME. ALFNet employed DenseNet-121 as the backbone and incorporated an adaptive lesion attention module (ALAM) to capture the distinct lesion characteristics of DR and DME. A deep feature fusion module (DFFM) with a shared-parameter local attention mechanism was designed to learn the correlation between the two diseases. Furthermore, a four-branch composite loss function was introduced to enhance the network’s multi-task learning capability. Experimental results demonstrated that ALFNet achieved superior joint grading performance on the Messidor dataset, with joint accuracy rates of 0.868 (DR 2 & DME 3), outperforming state-of-the-art methods. These results highlight the unique advantages of the proposed approach in the joint grading of DR and DME, thereby improving the efficiency and accuracy of clinical decision-making.
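As a simplified illustration only, the sketch below shows joint DR/DME grading on a shared DenseNet-121 backbone with two classification heads and a summed cross-entropy loss. The ALAM and DFFM modules and the four-branch composite loss of ALFNet are not reproduced, and the grade counts (5 DR grades, 3 DME grades) are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import densenet121

class JointGradingNet(nn.Module):
    def __init__(self, n_dr=5, n_dme=3):
        super().__init__()
        backbone = densenet121(weights=None)
        feat_dim = backbone.classifier.in_features
        backbone.classifier = nn.Identity()       # reuse pooled features for both tasks
        self.backbone = backbone
        self.dr_head = nn.Linear(feat_dim, n_dr)
        self.dme_head = nn.Linear(feat_dim, n_dme)

    def forward(self, x):
        f = self.backbone(x)                      # shared fundus image features
        return self.dr_head(f), self.dme_head(f)  # DR and DME grade logits

model = JointGradingNet()
dr_logits, dme_logits = model(torch.randn(2, 3, 224, 224))
loss = nn.CrossEntropyLoss()(dr_logits, torch.tensor([0, 3])) + \
       nn.CrossEntropyLoss()(dme_logits, torch.tensor([1, 2]))
```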