Electroencephalography (EEG) signals are strongly correlated with human emotions. The importance of nodes in the emotional brain network provides an effective means of analyzing the brain mechanisms of emotion. In this paper, a new node-importance ranking method, the weighted K-order propagation number method, was used to design and implement a classification algorithm for emotional brain networks. First, based on the DEAP emotional EEG dataset, a cross-sample entropy brain network was constructed, and the nodes of the positive and negative emotional brain networks were ranked by importance to obtain feature matrices at multiple threshold scales. Second, features were extracted and a support vector machine (SVM) was used to classify emotions, achieving a classification accuracy of 83.6%. The results show that extracting node-importance features of brain networks with the weighted K-order propagation number method is effective for emotion classification, which provides a new means of feature extraction and analysis for complex networks.
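The cross-sample entropy used to weight the network edges can be sketched in a few lines of NumPy. This is a generic sketch: the embedding dimension `m`, tolerance `r`, and the sine-wave stand-ins for two EEG channels are illustrative choices, not the paper's settings.

```python
import numpy as np

def cross_sample_entropy(x, y, m=2, r=0.2):
    """Cross-sample entropy between two standardized signals.

    Counts template matches of length m and m + 1 between x and y
    (Chebyshev distance < r) and returns -ln(A / B).
    """
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = min(len(x), len(y))

    def match_count(dim):
        # All templates of length `dim` from each signal
        tx = np.array([x[i:i + dim] for i in range(n - dim)])
        ty = np.array([y[i:i + dim] for i in range(n - dim)])
        # Chebyshev distance between every cross-signal template pair
        d = np.max(np.abs(tx[:, None, :] - ty[None, :, :]), axis=2)
        return np.sum(d < r)

    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b)

t = np.linspace(0, 20, 300)
x, y = np.sin(t), np.sin(t + 0.1)   # stand-ins for two EEG channels
xse = cross_sample_entropy(x, y)
```

Computing this value for every channel pair and thresholding the resulting matrix yields the weighted adjacency matrix of the brain network.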
Existing arrhythmia classification methods usually select electrocardiogram (ECG) signal features manually, which makes the feature selection subjective and the feature extraction complex, so the classification accuracy is often compromised. To address this, a new automatic arrhythmia classification method based on discriminative deep belief networks (DDBNs) is proposed. Morphological features of heartbeat signals are automatically extracted by the constructed generative restricted Boltzmann machine (GRBM); a discriminative restricted Boltzmann machine (DRBM) with both feature-learning and classification ability is then introduced, and arrhythmias are classified according to the extracted morphological features and RR-interval features. To further improve classification performance, the DDBN is converted into a deep neural network (DNN) with a Softmax regression layer for supervised classification, and the network is fine-tuned by backpropagation. Finally, the Massachusetts Institute of Technology-Beth Israel Hospital Arrhythmia Database (MIT-BIH AR) is used for experimental verification. For training and test sets with consistent data sources, the overall classification accuracy reaches 99.84% ± 0.04%; for training and test sets with inconsistent data sources, a small training set is extended by the active learning (AL) method, and the overall accuracy reaches 99.31% ± 0.23%. The experimental results demonstrate the effectiveness of the method for automatic feature extraction and classification of arrhythmias, and it provides a new deep-learning solution for automatic ECG feature extraction and classification.
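The generative pre-training step can be illustrated with a minimal Bernoulli restricted Boltzmann machine trained by one-step contrastive divergence (CD-1) in NumPy. This is a generic sketch, not the paper's GRBM/DRBM architecture, and the random binary patterns stand in for real heartbeat morphology vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

class BernoulliRBM:
    """Minimal restricted Boltzmann machine trained with CD-1."""

    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0, 0.01, (n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(n_hidden)
        self.lr = lr

    @staticmethod
    def _sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def hidden_probs(self, v):
        return self._sigmoid(v @ self.W + self.b_h)

    def fit_batch(self, v0):
        # Positive phase: hidden activations driven by the data
        h0 = self.hidden_probs(v0)
        # One Gibbs step (contrastive divergence, CD-1)
        h_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = self._sigmoid(h_sample @ self.W.T + self.b_v)
        h1 = self.hidden_probs(v1)
        # Approximate gradient: data statistics minus model statistics
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (h0 - h1).mean(axis=0)

rbm = BernoulliRBM(n_visible=6, n_hidden=3)
V = (rng.random((20, 6)) < 0.5).astype(float)   # toy binary beat patterns
for _ in range(50):
    rbm.fit_batch(V)
feats = rbm.hidden_probs(V)   # learned features for a downstream classifier
```

In the full method, stacked RBMs of this kind are unrolled into a DNN with a Softmax output layer and fine-tuned by backpropagation.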
Takayasu arteritis (TA) is a chronic nonspecific inflammation that commonly involves the aorta and its main branches. Most patients with TA lack clinical manifestations, which leads to misdiagnosis. By the time TA is correctly diagnosed, patients may already have stenosis or occlusion of the involved arteries, resulting in symptoms of arterial ischemia and hypoxia; in severe cases, the disease is life-threatening. Contrast-enhanced ultrasonography (CEUS) is an emerging method for assessing TA, but the assessment relies heavily on the experience of the radiologists performing manual, qualitative analyses, so the diagnostic results are often inaccurate. To overcome this limitation, this paper presents a computer-assisted quantitative analysis of TA carotid artery lesions based on CEUS. First, the TA lesion was outlined on the carotid wall, and one homogeneous rectangle and one polygon were selected as two reference regions in the carotid lumen. The temporal and spatial features of the lesion region and the reference regions were then calculated. Furthermore, the differences and ratios of the features between the lesion and the reference regions were computed as new features to eliminate interference factors. Finally, the correlation was analyzed between the CEUS features and the inflammation biomarkers erythrocyte sedimentation rate (ESR) and C-reactive protein (CRP). The data in this paper were collected from 34 TA patients undergoing CEUS examination at Zhongshan Hospital, with a total of 37 carotid lesions; two patients each contributed two lesions (before and after treatment), and one patient had bilateral (left and right) lesions. Among these patients, 13 were untreated primary patients with a total of 14 lesions, one of whom had bilateral lesions.
The results showed that, for all patients, the neovascularization area ratio in the inner 1/3 region of a lesion (ARi1/3) achieved a correlation coefficient (r) of 0.56 (P = 0.001) with CRP, and for the primary patients, the neovascularization area ratio in the inner 1/2 region of a lesion (ARi1/2) had an r value of 0.76 (P = 0.001) with CRP. This study indicates that the proposed computer-assisted method can objectively and semi-automatically extract quantitative features from CEUS images, reducing the influence of radiologists' subjective experience on diagnosis, and it is therefore expected to be useful for the clinical diagnosis and severity evaluation of TA carotid lesions.
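The difference/ratio contrast features and the correlation analysis described above reduce to a few lines of NumPy. The time-intensity-curve features below (peak intensity and time to peak) are illustrative stand-ins for the paper's full set of temporal and spatial features, and the Gaussian toy curves are not real CEUS data.

```python
import numpy as np

def tic_features(curve, t):
    """Peak intensity and time-to-peak from a time-intensity curve."""
    i = int(np.argmax(curve))
    return np.array([curve[i], t[i]])

def contrast_features(lesion, reference):
    """Difference and ratio of lesion vs. reference-region features."""
    lesion, reference = np.asarray(lesion, float), np.asarray(reference, float)
    return np.concatenate([lesion - reference, lesion / reference])

def pearson_r(x, y):
    """Pearson correlation coefficient between a feature and a biomarker."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float(xm @ ym / np.sqrt((xm @ xm) * (ym @ ym)))

# Toy wash-in/wash-out curves for a lesion and a reference region
t = np.linspace(0, 30, 100)
lesion_curve = np.exp(-((t - 12) / 5) ** 2)
ref_curve = 0.6 * np.exp(-((t - 10) / 5) ** 2)
feats = contrast_features(tic_features(lesion_curve, t),
                          tic_features(ref_curve, t))
```

The final step of the pipeline is simply `pearson_r` between each contrast feature (e.g. ARi1/3) and the biomarker values (ESR, CRP) across patients.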
Biometrics plays an important role in the information society. As a new type of biometric, electroencephalogram (EEG) signals have special advantages in terms of versatility, durability, and safety. At present, research on individual identification approaches based on EEG signals is drawing considerable attention. Identity feature extraction is a key step in achieving good identification performance, and how to exploit the characteristics of EEG data to better extract the discriminative information in EEG signals has been a research hotspot in EEG-based identity recognition in recent years. This article reviews the commonly used identity feature extraction methods based on EEG signals, including single-channel features, inter-channel features, deep learning methods, and spatial filter-based feature extraction methods, and explains the basic principles, application methods, and related achievements of each. Finally, we summarize the current problems and forecast development trends.
Brain-computer interface (BCI) systems identify brain signals by extracting features from them. In view of the limitations of the autoregressive-model feature extraction method and of traditional principal component analysis in handling multichannel signals, this paper presents a multichannel feature extraction method that combines the multivariate autoregressive (MVAR) model with multilinear principal component analysis (MPCA), applied to the recognition of magnetoencephalography (MEG) and electroencephalography (EEG) signals. First, we calculated the MVAR model coefficient matrix of the MEG/EEG signals, and then reduced its dimensionality using MPCA. Finally, we recognized the brain signals with a Bayes classifier. The key innovation of our investigation is that we extended the traditional single-channel feature extraction method to the multichannel case. We carried out experiments on the BCI Competition data sets Ⅳ_Ⅲ and Ⅳ_Ⅰ, and the experimental results demonstrated that the proposed method is feasible.
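The MVAR coefficient estimation can be sketched as an ordinary least-squares fit in NumPy. The model order `p` and the simulated two-channel signal below are illustrative, and the MPCA dimensionality-reduction and Bayes-classification stages are omitted.

```python
import numpy as np

def mvar_coefficients(X, p=2):
    """Least-squares fit of a multivariate AR model of order p.

    X: (n_samples, n_channels). Returns A with shape (p, c, c) such that
    X[t] is approximated by sum over k of A[k] @ X[t - k - 1].
    """
    n, c = X.shape
    # Lagged design matrix: row t holds [X[t-1], X[t-2], ..., X[t-p]]
    Z = np.hstack([X[p - k - 1:n - k - 1] for k in range(p)])  # (n-p, p*c)
    Y = X[p:]                                                  # (n-p, c)
    coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)               # (p*c, c)
    return coef.T.reshape(c, p, c).transpose(1, 0, 2)

# Simulate a two-channel VAR(1) process: X[t] = 0.5 * X[t-1] + noise
rng = np.random.default_rng(0)
X = np.zeros((500, 2))
for ti in range(1, 500):
    X[ti] = 0.5 * X[ti - 1] + 0.1 * rng.normal(size=2)
A = mvar_coefficients(X, p=1)
```

In the full method, the stacked coefficient matrices `A` over channels and lags form the tensor that MPCA then projects to a lower-dimensional feature space.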
The differential diagnosis of primary central nervous system lymphoma (PCNSL) and glioblastoma (GBM) is of great clinical significance because the two tumors differ enormously in their therapeutic regimens. In this paper, we propose a system based on sparse representation for the automatic classification of PCNSL and GBM. The proposed system distinguishes the two tumors by exploiting the different texture details they exhibit on contrast-enhanced T1-weighted magnetic resonance imaging (MRI) images. First, inspired by the radiomics workflow, we designed a dictionary-learning and sparse-representation-based method to extract texture information, with which tumors of different volumes and shapes were transformed into 968 quantitative texture features. Next, to address the redundancy of the extracted features, feature selection based on iterative sparse representation was set up to select key texture features with high stability and discriminability. Finally, the selected key features were used for differentiation with a sparse representation classification (SRC) method. Under ten-fold cross-validation, the proposed approach achieved an accuracy of 96.36%, a sensitivity of 96.30%, and a specificity of 96.43%. Experimental results show that our approach not only distinguishes the two tumors effectively but also is robust in practical application, since it avoids parameter extraction on advanced MRI images.
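The final SRC step can be sketched generically in NumPy: code the test sample over a dictionary whose atoms are grouped by class, then assign the class whose atoms reconstruct it with the smallest residual. This is a textbook SRC sketch with orthogonal matching pursuit as the sparse coder, not the paper's dictionary-learning pipeline, and the 2-D toy atoms stand in for the 968 texture features.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: k-sparse code of y over dictionary D."""
    residual, support = y.copy(), []
    for _ in range(k):
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

def src_classify(D, labels, y, k=3):
    """Assign y to the class whose atoms best reconstruct it."""
    x = omp(D, y, k)
    residuals = {}
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)   # keep only class-c coefficients
        residuals[c] = np.linalg.norm(y - D @ xc)
    return min(residuals, key=residuals.get)

# Toy dictionary: two atoms per class in a 2-D feature space
D = np.array([[1.0, 0.9, 0.0, 0.1],
              [0.0, 0.1, 1.0, 0.9]])
D /= np.linalg.norm(D, axis=0)   # unit-norm atoms
labels = np.array([0, 0, 1, 1])
pred = src_classify(D, labels, np.array([1.0, 0.05]), k=2)
```

The class-wise residual rule is what gives SRC its robustness: coefficients on the wrong class's atoms contribute nothing to that class's reconstruction.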
Feature extraction is a crucial step in P300-based brain-computer interfaces (BCIs), and independent component analysis (ICA) is a suitable method for P300 feature extraction. At present, however, the convergence performance of general ICA iteration methods is not very satisfactory. In this paper, a method combining the quantum-behaved particle swarm optimization (QPSO) algorithm with ICA is put forward for P300 extraction, in which quantum computing is used to drive the ICA iteration toward faster global convergence, so that the P300 is extracted rapidly and efficiently. The method was tested on two public datasets from BCI Competitions Ⅱ and Ⅲ, and a simple linear classifier was employed to classify the extracted P300 features. The recognition accuracy reached 94.4% when averaging over 15 trials. The results showed that the proposed method could extract the P300 more rapidly without degrading the extraction quality, providing an experimental basis for further study of real-time BCI systems.
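The iteration being accelerated can be illustrated with a generic one-unit fixed-point (FastICA-style) update in NumPy; this is not the QPSO-accelerated algorithm proposed here, and the sine/square toy sources stand in for real EEG mixtures.

```python
import numpy as np

rng = np.random.default_rng(1)

def whiten(X):
    """Center and whiten mixed signals X of shape (n_signals, n_samples)."""
    X = X - X.mean(axis=1, keepdims=True)
    cov = X @ X.T / X.shape[1]
    d, E = np.linalg.eigh(cov)
    return E @ np.diag(1.0 / np.sqrt(d)) @ E.T @ X

def fastica_one_unit(X, n_iter=200):
    """Fixed-point ICA iteration recovering one independent component."""
    Z = whiten(X)
    w = rng.normal(size=Z.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        wz = w @ Z
        # Nonlinearity g(u) = tanh(u), with derivative 1 - tanh(u)^2
        w_new = (Z * np.tanh(wz)).mean(axis=1) - (1 - np.tanh(wz) ** 2).mean() * w
        w_new /= np.linalg.norm(w_new)
        converged = abs(abs(w_new @ w) - 1) < 1e-9
        w = w_new
        if converged:
            break
    return w @ Z

t = np.linspace(0, 8 * np.pi, 1000)
S = np.vstack([np.sin(t), np.sign(np.sin(2.3 * t))])   # toy sources
X = np.array([[1.0, 0.6], [0.4, 1.0]]) @ S             # mixed observations
comp = fastica_one_unit(X)
```

The proposed method replaces the random initialization and local fixed-point search of this loop with a QPSO global search over the demixing vector.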
Human action recognition (HAR) is the technological foundation of intelligent medical treatment, sports training, video monitoring, and many other fields, and it has attracted attention from all walks of life. This paper summarizes the progress and significance of HAR research, which comprises two processes: action capture and deep-learning-based action classification. First, the paper introduces in detail the three mainstream approaches to action capture: video-based, depth-camera-based, and inertial-sensor-based; commonly used action data sets are also listed. Second, the realization of HAR based on deep learning is described in two aspects, automatic feature extraction and multi-modal feature fusion, and the use of HAR for training monitoring and simulated training in orthopedic rehabilitation is introduced. Finally, precise motion capture and multi-modal feature fusion for HAR are discussed, together with the key points and difficulties of applying HAR in orthopedic rehabilitation training. This article summarizes the above contents to help researchers quickly understand the current status of HAR research and its application in orthopedic rehabilitation training.
Skin aging is the most intuitive and obvious sign of the human aging process. Qualitative and quantitative determination of skin aging is particularly important for evaluating human aging and the effects of anti-aging treatments. To overcome the subjectivity of conventional skin aging grading methods, the self-organizing map (SOM) network was used to develop an automatic method for skin aging grading. First, ventral forearm skin images were obtained with a portable digital microscope, and two texture parameters, the mean width of skin furrows and the number of intersections, were extracted by an image processing algorithm. Then, the values of the texture parameters were used as inputs to train the SOM network. The experimental results showed that the network achieved an overall accuracy of 80.8% compared with the aging grades assigned by human graders. The designed method is rapid and objective, and it can be used for quantitative analysis of skin images and automatic assessment of skin aging grade.
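A minimal SOM on a one-dimensional grid of nodes can be written in NumPy to show how the two texture parameters drive the grading. This is a generic sketch: the node count, learning-rate schedule, and the synthetic "young"/"aged" texture samples are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

class SOM:
    """Minimal self-organizing map on a 1-D grid of nodes."""

    def __init__(self, n_nodes, n_features, lr=0.5, sigma=1.0):
        self.w = rng.random((n_nodes, n_features))
        self.lr, self.sigma = lr, sigma

    def bmu(self, x):
        """Index of the best-matching unit for sample x."""
        return int(np.argmin(np.linalg.norm(self.w - x, axis=1)))

    def train(self, X, epochs=100):
        for t in range(epochs):
            decay = np.exp(-t / epochs)   # shrink lr and neighborhood over time
            for x in X:
                b = self.bmu(x)
                dist = np.abs(np.arange(len(self.w)) - b)
                h = np.exp(-dist ** 2 / (2 * (self.sigma * decay) ** 2))
                self.w += (self.lr * decay * h)[:, None] * (x - self.w)

grades = SOM(n_nodes=3, n_features=2)
# Toy samples: [mean furrow width, intersection count], both normalized
young = rng.normal([0.2, 0.8], 0.03, (10, 2))
aged = rng.normal([0.8, 0.2], 0.03, (10, 2))
grades.train(np.vstack([young, aged]), epochs=50)
```

After training, each node of the map can be labeled with an aging grade, and a new image is graded by the label of its best-matching unit.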
Early screening based on computed tomography (CT) pulmonary nodule detection is an important means of reducing lung cancer mortality, and in recent years three-dimensional convolutional neural networks (3D CNNs) have achieved success and continuous development in the field of lung nodule detection. We propose a pulmonary nodule detection algorithm using a 3D CNN with a multi-scale attention mechanism. To cope with the varying sizes and shapes of lung nodules, we designed a multi-scale feature extraction module to extract features at the corresponding scales. Through the attention module, the correlation information between the features is mined from both the spatial and the channel perspective to strengthen the features. The extracted features then enter a feature-pyramid-like fusion mechanism, so that they contain both deep semantic information and shallow location information, which is more conducive to target localization and bounding-box regression. On the representative LUNA16 dataset, the method significantly improved detection sensitivity compared with other advanced methods, and it can provide a theoretical reference for clinical medicine.
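The channel-attention branch can be illustrated with a NumPy sketch of a squeeze-and-excitation-style gate on a 3D feature volume: global average pooling per channel, a small bottleneck, and sigmoid gates that rescale each channel. This is one common form of channel-wise attention, assumed here for illustration; the random weights and feature sizes are toy values, and the spatial-attention and multi-scale branches are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def channel_attention(feat, W1, W2):
    """Squeeze-and-excitation-style channel attention on a 3-D feature map.

    feat: (C, D, H, W) feature volume.
    W1: (C // r, C) bottleneck weights; W2: (C, C // r) expansion weights.
    """
    c = feat.shape[0]
    squeeze = feat.reshape(c, -1).mean(axis=1)        # global average pool per channel
    hidden = np.maximum(W1 @ squeeze, 0.0)            # ReLU bottleneck
    scale = 1.0 / (1.0 + np.exp(-(W2 @ hidden)))      # sigmoid gates in (0, 1)
    return feat * scale[:, None, None, None]          # rescale each channel

c, r = 8, 2
W1 = rng.normal(0, 0.1, (c // r, c))
W2 = rng.normal(0, 0.1, (c, c // r))
feat = rng.normal(size=(c, 4, 4, 4))   # toy 3D feature volume
out = channel_attention(feat, W1, W2)
```

Because the gates lie strictly in (0, 1), the module can only suppress channels relative to the input, letting the network emphasize nodule-relevant channels learned during training.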