Brain-computer interfaces (BCIs) have great potential to replace lost upper limb function, so there has been great interest in developing BCI-controlled robotic arms. However, few studies have attempted to achieve high-level control of a robotic arm with a noninvasive electroencephalography (EEG)-based BCI. In this paper, a high-level control architecture combining an augmented reality (AR) BCI and computer vision was designed to control a robotic arm in a pick-and-place task. A steady-state visual evoked potential (SSVEP)-based paradigm was adopted to realize the BCI system. Microsoft's HoloLens was used to build the AR environment and served as the visual stimulator for eliciting SSVEPs. The proposed AR-BCI was used to select the objects to be operated by the robotic arm, while computer vision provided the location, color and shape of the objects. Based on the outputs of the AR-BCI and the computer vision module, the robotic arm could autonomously pick an object and place it at a specific location. Online results from 11 healthy subjects showed an average classification accuracy of 91.41%. These results verify the feasibility of combining AR, BCI and computer vision to control a robotic arm, and are expected to provide new ideas for innovative robotic arm control approaches.
Using electroencephalogram (EEG) signals to control external devices has long been a research focus in the field of brain-computer interfaces (BCIs). This is especially significant for people with disabilities who have lost the capacity for movement. In this paper, a P300-based BCI and microcontroller-based wireless radio frequency (RF) technology are used to design a smart home control system, which can directly control household appliances, lighting and security devices. Experimental results showed that the system was simple, reliable and easy to popularize.
The brain-computer interface (BCI) can be used to control external devices directly through electroencephalogram (EEG) information. A multilinear principal component analysis (MPCA) framework was used to address the limitations of traditional principal component analysis (PCA) and two-dimensional principal component analysis (2DPCA) in processing the tensor form of multichannel EEG signals. Based on MPCA, the tensor-matrix projection was used to achieve dimensionality reduction and feature extraction, and the extracted features were classified with a Fisher linear classifier. The method was evaluated on BCI Competition II dataset 4 and BCI Competition IV dataset 3, using a second-order tensor representation of time-space EEG data and a third-order tensor representation of time-space-frequency EEG data. By tuning the parameters P and Q, results superior to those of other dimensionality reduction methods were obtained: the highest accuracies were 81.0% and 40.1% for the second-order tensor, and 76.0% and 43.5% for the third-order tensor, respectively.
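Full MPCA is an iterative algorithm; the single-pass sketch below captures only its core idea, namely per-mode projection matrices obtained from the scatter of the mode-n unfoldings, without the alternating refinement. Function names and the synthetic tensor shapes are illustrative, not from the paper:

```python
import numpy as np

def mpca_fit(tensors, ranks):
    """Single-pass multilinear PCA sketch: for each mode, keep the top
    eigenvectors of the summed scatter of the centered mode-n unfoldings.
    tensors: (n_samples, d1, d2, ...) array; ranks: target dimensions."""
    mean = tensors.mean(axis=0)
    centered = tensors - mean
    factors = []
    for mode, r in enumerate(ranks):
        # Mode-n unfolding of each centered sample, scatter accumulated
        unfold = np.moveaxis(centered, mode + 1, 1).reshape(
            centered.shape[0], centered.shape[mode + 1], -1)
        scatter = sum(u @ u.T for u in unfold)
        eigval, eigvec = np.linalg.eigh(scatter)   # ascending order
        factors.append(eigvec[:, -r:])             # top-r eigenvectors
    return mean, factors

def mpca_transform(tensors, mean, factors):
    """Project each sample tensor onto the learned mode subspaces."""
    out = tensors - mean
    for mode, U in enumerate(factors):
        out = np.moveaxis(
            np.tensordot(out, U, axes=([mode + 1], [0])), -1, mode + 1)
    return out
```

A second-order time-space trial of shape (channels, samples) reduces to a small (P, Q) core tensor, which is then flattened for the Fisher linear classifier.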
Individual differences in P300 potentials mean that a large amount of training data must be collected to build pattern recognition models in P300-based brain-computer interface systems, which may fatigue subjects and degrade system performance. TrAdaBoost is a method that transfers knowledge from a source domain to a target domain, improving learning in the target domain. We proposed a TrAdaBoost-based linear discriminant analysis and a TrAdaBoost-based support vector machine to recognize P300 potentials across multiple subjects. The method first trains the two kinds of classifiers separately, using a small amount of data from the same subject together with a large amount of data from other subjects, and then combines all the classifiers with different weights. Compared with traditional training that uses only a small amount of same-subject data or directly mixes data from different subjects, our algorithm improved the accuracies by 19.56% and 22.25%, and the information transfer rates by 14.69 bits/min and 15.76 bits/min, respectively. The results indicate that the TrAdaBoost-based method has the potential to enhance the generalization ability of brain-computer interfaces in the presence of individual differences.
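The core of TrAdaBoost is its asymmetric reweighting: source-domain samples the current model gets wrong are trusted less, while misclassified target samples are emphasized. The sketch below is a simplified version of that loop (in the style of Dai et al.'s TrAdaBoost), with logistic regression standing in for the paper's LDA/SVM base learners; returning only the final-round classifier is a simplification, not the authors' exact procedure:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def tradaboost(Xs, ys, Xt, yt, n_rounds=10):
    """Simplified TrAdaBoost sketch. Xs/ys: other-subject (source) data;
    Xt/yt: small same-subject (target) set. Returns the last classifier."""
    X = np.vstack([Xs, Xt])
    y = np.concatenate([ys, yt])
    n_s = len(ys)
    w = np.ones(len(y))
    beta_src = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n_s) / n_rounds))
    clf = None
    for _ in range(n_rounds):
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X, y, sample_weight=w / w.sum())
        miss = clf.predict(X) != y
        # Weighted error measured on the target portion only
        eps = w[n_s:][miss[n_s:]].sum() / w[n_s:].sum()
        eps = min(max(eps, 1e-10), 0.49)
        beta_t = eps / (1.0 - eps)
        w[:n_s][miss[:n_s]] *= beta_src        # trust source samples less
        w[n_s:][miss[n_s:]] *= 1.0 / beta_t    # focus on hard target samples
    return clf
```

In the P300 setting, Xs would hold feature vectors from many other subjects and Xt the few calibration epochs from the current user.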
Error self-detection based on error-related potentials (ErrPs) is promising for improving the practicability of brain-computer interface systems, but single-trial recognition of ErrPs remains a challenge that hinders the development of this technology. To assess the performance of different algorithms in decoding ErrPs, this paper tested four kinds of linear discriminant analysis, two kinds of support vector machines, logistic regression, and discriminative canonical pattern matching (DCPM) on two openly accessible datasets. All algorithms were evaluated by classification accuracy and by their generalization ability across different training set sizes. The results show that DCPM performed best. This study provides a comprehensive comparison of algorithms for ErrP classification, which can guide the selection of ErrP decoding algorithms.
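DCPM has no standard library implementation, so the sketch below benchmarks only the commonly available classifiers from the comparison (LDA variants, SVMs, logistic regression) with cross-validation. The synthetic data is a stand-in for single-trial ErrP feature vectors; it illustrates the evaluation protocol, not the paper's datasets or results:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for single-trial ErrP feature vectors
X, y = make_classification(n_samples=300, n_features=64,
                           n_informative=10, random_state=0)

models = {
    "LDA (plain)": LinearDiscriminantAnalysis(),
    "LDA (shrinkage)": LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto"),
    "SVM (linear)": SVC(kernel="linear"),
    "SVM (RBF)": SVC(kernel="rbf"),
    "LogReg": LogisticRegression(max_iter=1000),
}
# 5-fold cross-validated accuracy per model
scores = {name: cross_val_score(m, X, y, cv=5).mean()
          for name, m in models.items()}
for name, s in scores.items():
    print(f"{name}: {s:.3f}")
```

Sweeping the training-set size (e.g. with `learning_curve`) would reproduce the paper's second axis of comparison, generalization under limited data.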
The development and potential applications of brain-computer interface (BCI) technology are closely related to the human brain, so the ethical regulation of BCI has become an important issue attracting society's attention. The existing literature has discussed the ethical norms of BCI technology from the perspectives of non-BCI developers and scientific ethics, while few discussions have been launched from the perspective of BCI developers. There is therefore a great need to study and discuss the ethical norms of BCI technology from the developers' perspective. In this paper, we present user-centered and non-harmful BCI technology ethics, then discuss them and offer an outlook. We argue that human beings can cope with the ethical issues arising from BCI technology, and that as the technology develops, its ethical norms will be improved continuously. We expect this paper to provide thoughts and references for the formulation of ethical norms related to BCI technology.
This paper aims to decode single-trial motor imagery electroencephalogram (EEG) signals by extracting and classifying optimized EEG features. In the classification and recognition of multi-channel EEG signals, effective feature selection strategies are often lacking for choosing the data of each channel and the dimension of the spatial filters. To address this problem, a method combining a sparsity idea with greedy search (GS) was proposed to improve the feature extraction of the common spatial pattern (CSP). The improved CSP effectively overcomes the repeated selection of feature patterns in the feature vector space extracted by the traditional method, yielding features with more distinct differences. The extracted features were then classified by Fisher linear discriminant analysis (FLDA). Experimental results showed that the classification accuracy of the proposed method was on average 19% higher than that of the traditional CSP, and that high accuracy could be obtained with a small feature set. These results on EEG feature extraction lay a foundation for motor imagery EEG decoding.
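The baseline the paper improves on, classic CSP followed by Fisher LDA, can be sketched as follows; the sparsity/greedy-search refinement itself is not reproduced here, and the trial shapes are illustrative:

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def csp_fit(trials_a, trials_b, n_pairs=2):
    """Classic CSP: spatial filters from the generalized eigenvalue
    problem on the two class-average covariance matrices.
    trials_*: (n_trials, n_channels, n_samples) arrays."""
    cov = lambda T: np.mean([t @ t.T / np.trace(t @ t.T) for t in T], axis=0)
    Ca, Cb = cov(trials_a), cov(trials_b)
    eigval, W = eigh(Ca, Ca + Cb)      # eigenvalues in ascending order
    # Keep the filters at both extremes of the spectrum
    idx = np.concatenate([np.arange(n_pairs), np.arange(-n_pairs, 0)])
    return W[:, idx].T

def csp_features(trials, W):
    """Log-variance of spatially filtered trials, the usual CSP feature."""
    return np.array([np.log(np.var(W @ t, axis=1)) for t in trials])
```

Feeding `csp_features` into `LinearDiscriminantAnalysis` gives the FLDA classification stage described in the abstract.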
To address the channel selection problem in the classification of electroencephalogram (EEG) signals, we propose a novel method, Relief-SBS. The method first performs EEG channel selection by combining the principles of the Relief and sequential backward selection (SBS) algorithms; the correlation coefficient is then used to classify the EEG signals. The channels achieving the optimal classification accuracy are taken as the optimal channels. Analysis of data recorded in motor imagery experiments showed that the channels selected by our method achieved excellent classification accuracy and outperformed other feature selection methods. In addition, the distribution of the optimal channels was consistent with neurophysiological knowledge, which demonstrates the effectiveness of the method. Relief-SBS thus provides a new way to perform channel selection.
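A minimal sketch of the Relief-plus-SBS idea, assuming binary classes, a Relief ranking to pre-select candidate channels, and cross-validated accuracy as the SBS criterion. The paper's final classifier is a correlation coefficient; logistic regression is a stand-in here, and all names and sizes are illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def relief_weights(X, y):
    """Binary Relief: reward features that differ from the nearest miss
    (other class) and penalise those that differ from the nearest hit."""
    w = np.zeros(X.shape[1])
    for i, x in enumerate(X):
        d = np.abs(X - x).sum(axis=1)
        d[i] = np.inf                          # exclude the sample itself
        hit = np.argmin(np.where(y == y[i], d, np.inf))
        miss = np.argmin(np.where(y != y[i], d, np.inf))
        w += np.abs(x - X[miss]) - np.abs(x - X[hit])
    return w / len(X)

def relief_sbs(X, y, n_keep):
    """Rank channels by Relief, then sequentially drop the channel whose
    removal hurts cross-validated accuracy the least."""
    selected = list(np.argsort(relief_weights(X, y))[::-1][:2 * n_keep])
    score = lambda cols: cross_val_score(
        LogisticRegression(max_iter=1000), X[:, cols], y, cv=3).mean()
    while len(selected) > n_keep:
        drops = [score([c for c in selected if c != f]) for f in selected]
        selected.pop(int(np.argmax(drops)))
    return selected
```

Here each column of X stands for one channel's feature; in practice each channel would contribute a block of features rather than a single value.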
As the most common active brain-computer interaction paradigm, the motor imagery brain-computer interface (MI-BCI) suffers from the bottleneck problems of a small instruction set and low accuracy, which severely limit its information transfer rate (ITR) and practical application. In this study, we designed six imagined actions, collected electroencephalogram (EEG) signals from 19 subjects, and studied the effect of a collaborative brain-computer interface (cBCI) strategy on MI-BCI classification performance, comparing the effects of different group sizes and fusion strategies on group multi-classification performance. The results showed that the most suitable group size was four people and the best fusion strategy was decision fusion. Under these conditions, the group classification accuracy reached 77.31%, higher than that of the feature fusion strategy with the same group size (77.31% vs. 56.34%) and significantly higher than that of the average single user (77.31% vs. 44.90%). This work shows that the cBCI collaboration strategy can effectively improve MI-BCI classification performance, laying a foundation for MI-cBCI research and its future application.
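The winning decision-fusion strategy amounts to majority voting over the group members' individual classifiers. A sketch, assuming each member contributes their own features for the same labeled trials; LDA and all shapes are illustrative, not the paper's pipeline:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def decision_fusion(per_subject_train, y_train, per_subject_test):
    """Decision fusion sketch: train one classifier per group member on
    that member's own EEG features, then take a majority vote over the
    individual predictions for each test trial. Labels must be
    non-negative integers (for np.bincount)."""
    preds = []
    for X_tr, X_te in zip(per_subject_train, per_subject_test):
        clf = LinearDiscriminantAnalysis().fit(X_tr, y_train)
        preds.append(clf.predict(X_te))
    preds = np.array(preds)                    # (n_subjects, n_trials)
    # Majority vote per trial across subjects
    return np.array([np.bincount(col).argmax() for col in preds.T])
```

Feature fusion, the weaker strategy in the study, would instead concatenate all members' features into one vector and train a single classifier.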
Multi-modal brain-computer interfaces and multi-modal brain function imaging are present and future development trends. Aiming at a multi-modal brain-computer interface based on electroencephalography-near-infrared spectroscopy (EEG-NIRS), and in order to simultaneously acquire the brain activity of the motor area, an acquisition helmet combining NIRS with EEG was designed and verified experimentally. Following the 10-20 system (or its extension) and accounting for the diameter and spacing of the NIRS probes and EEG electrodes, the NIRS probes were aligned with the C3 and C4 reference electrodes and placed midway between EEG electrodes, so that NIRS variations and the corresponding EEG variations could be measured simultaneously in the same functional brain area. The clamp holder and near-infrared probe were coupled by tightening a screw. To verify the feasibility and effectiveness of the multi-modal EEG-NIRS helmet, NIRS and EEG signals were collected from six healthy subjects during six mental tasks involving motor imagery of right-hand clenching force and clenching speed. These signals may reflect brain activity related to hand clenching force and speed motor imagery to a certain extent. The experiment showed that the EEG-NIRS helmet designed in this paper is feasible and effective; it can support multi-modal motor imagery brain-computer interfaces based on EEG-NIRS and is also expected to support multi-modal brain functional imaging based on EEG-NIRS.