This paper proposes a motor imagery recognition algorithm based on feature fusion and transfer adaptive boosting (TrAdaboost) to address the low accuracy of cross-subject motor imagery (MI) recognition, thereby increasing the reliability of MI-based brain-computer interfaces (BCIs) for cross-individual use. Time-frequency domain features of MI were extracted with an autoregressive model, power spectral density, and the discrete wavelet transform; spatial domain features were extracted with the filter bank common spatial pattern; and nonlinear features were extracted with multi-scale dispersion entropy. Dataset 2a from BCI Competition IV was used for the binary classification task, and the pattern recognition model was constructed by combining the improved TrAdaboost ensemble learning algorithm with a support vector machine (SVM), k-nearest neighbor (KNN), and a mind evolutionary algorithm-based back propagation (MEA-BP) neural network. The results show that the SVM-based TrAdaboost ensemble learning algorithm performs best when 30% of the target-domain instance data are transferred, with an average classification accuracy of 86.17%, a Kappa value of 0.7233, and an AUC value of 0.8498. These results suggest that the algorithm can recognize MI signals across individuals, providing a new way to improve the generalization capability of BCI recognition models.
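For readers unfamiliar with TrAdaboost, the sketch below shows the classic instance-reweighting loop (Dai et al.) with an SVM base learner, assuming binary 0/1 labels. The paper's specific improvements to TrAdaboost and its fused feature set are not reproduced here; all names and hyperparameters are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

def tradaboost_svm(Xs, ys, Xt, yt, n_rounds=20):
    """Classic TrAdaboost with an SVM base learner.

    Xs, ys: source-domain instances (other subjects).
    Xt, yt: transferred target-domain instances. Labels are 0/1.
    """
    n_s = len(Xs)
    X = np.vstack([Xs, Xt])
    y = np.concatenate([ys, yt])
    w = np.ones(len(y))                               # instance weights
    beta = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n_s) / n_rounds))
    learners, betas_t = [], []
    for _ in range(n_rounds):
        clf = SVC(kernel="rbf", gamma="scale")
        clf.fit(X, y, sample_weight=w / w.sum())
        err = np.abs(clf.predict(X) - y)              # 0 if right, 1 if wrong
        e_t = np.sum(w[n_s:] * err[n_s:]) / np.sum(w[n_s:])
        e_t = np.clip(e_t, 1e-10, 0.49)               # keep beta_t in (0, 1)
        beta_t = e_t / (1.0 - e_t)
        w[:n_s] *= beta ** err[:n_s]                  # fade bad source points
        w[n_s:] *= beta_t ** (-err[n_s:])             # boost hard target points
        learners.append(clf)
        betas_t.append(beta_t)

    def predict(Xq):
        half = n_rounds // 2                          # vote with later rounds only
        votes = sum(np.log(1.0 / b) * l.predict(Xq)
                    for l, b in zip(learners[half:], betas_t[half:]))
        thresh = 0.5 * sum(np.log(1.0 / b) for b in betas_t[half:])
        return (votes >= thresh).astype(int)

    return predict
```

Misclassified source instances are progressively down-weighted while hard target instances gain weight, which is what lets a 30% slice of target-domain data steer the ensemble toward the new subject.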
Because motor imagery electroencephalogram (EEG) signals are highly complex and vary across subjects, their decoding is limited by the inadequate accuracy of traditional recognition models. To resolve this problem, a recognition model for motor imagery EEG based on the flicker noise spectrum (FNS) and the weighted filter bank common spatial pattern (wFBCSP) was proposed. First, the FNS method was used to analyze the motor imagery EEG. Using the second-order difference moment as the structure function, precursor time series were generated with a sliding-window strategy, so that hidden dynamic information of the transition phase could be captured. Then, based on the frequency-band characteristics of the signal, features of the transition-phase precursor time series and the reaction-phase series were extracted by wFBCSP, yielding features representing the relevant transition and reaction phases. To make the selected features adapt to subject variability and generalize better, the minimum redundancy maximum relevance algorithm was further used for feature selection. Finally, a support vector machine was used as the classifier. In motor imagery EEG recognition, the proposed method yielded an average accuracy of 86.34%, higher than that of the comparison methods. Thus, the proposed method provides a new idea for decoding motor imagery EEG.
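As a rough illustration of the precursor-series construction, the sketch below computes a second-order difference moment (the structure function) over sliding windows of a single-channel transition-phase signal. The full FNS parameterization and the wFBCSP step are beyond this snippet, and the window parameters are assumptions.

```python
import numpy as np

def difference_moment(x, max_lag):
    """Structure function Phi(tau) = <[x(t + tau) - x(t)]**2>, tau = 1..max_lag."""
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2)
                     for lag in range(1, max_lag + 1)])

def precursor_series(transition_eeg, win_len=256, step=32, max_lag=20):
    """Summarize each sliding window of the transition-phase signal by its
    difference-moment curve, yielding a precursor series (one row per window)."""
    starts = range(0, len(transition_eeg) - win_len + 1, step)
    return np.vstack([difference_moment(transition_eeg[s:s + win_len], max_lag)
                      for s in starts])
```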
In the field of brain-computer interfaces (BCIs) based on functional near-infrared spectroscopy (fNIRS), traditional subject-specific decoding methods suffer from long calibration times and low cross-subject generalizability, which restricts the adoption of BCI systems in daily life and the clinic. To address this dilemma, this study proposes a novel deep transfer learning approach that combines a revised inception-residual network (rIRN) model with a model-based transfer learning (TL) strategy, referred to as TL-rIRN. Cross-subject recognition experiments on mental arithmetic (MA) and mental singing (MS) tasks were performed to validate the effectiveness and superiority of the TL-rIRN approach. The results show that TL-rIRN significantly shortens the calibration time, reduces the training time of the target model and the consumption of computational resources, and dramatically enhances cross-subject decoding performance compared with subject-specific decoding methods and other deep transfer learning methods. In summary, this study provides a basis for selecting cross-subject, cross-task, and real-time decoding algorithms for fNIRS-BCI systems, with potential applications in constructing a convenient and universal BCI system.
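The model-based TL strategy can be pictured as pretraining on source subjects, freezing the transferred layers, and fine-tuning a fresh output head on a few target-subject trials, which is what shortens calibration. The PyTorch sketch below assumes a pretrained network that exposes its head as `model.classifier`; the actual rIRN architecture is not reproduced.

```python
import torch
import torch.nn as nn

def make_target_model(source_model: nn.Module, n_classes: int) -> nn.Module:
    """Freeze the layers transferred from the source-subject model and attach
    a fresh output head for the target subject (assumes the network exposes
    its head as `classifier`, a stand-in for the real rIRN head)."""
    for p in source_model.parameters():
        p.requires_grad = False
    in_features = source_model.classifier.in_features
    source_model.classifier = nn.Linear(in_features, n_classes)  # trainable head
    return source_model

def fine_tune(model, loader, epochs=10, lr=1e-3):
    """Fine-tune only the unfrozen parameters on a few calibration trials."""
    opt = torch.optim.Adam(
        [p for p in model.parameters() if p.requires_grad], lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:            # x: fNIRS trials, y: MA/MS labels
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model
```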
Brain-computer interface (BCI) systems based on steady-state visual evoked potential (SSVEP) have become one of the major paradigms in BCI research due to their high signal-to-noise ratio and the short training time they require from users. Fast and accurate decoding of SSVEP features is a crucial step in SSVEP-BCI research. However, current research lacks a systematic overview of SSVEP decoding algorithms and an analysis of the connections and differences between them, making it difficult for researchers to choose the optimal algorithm for a given situation. To address this problem, this paper reviews the progress of SSVEP decoding algorithms in recent years and divides them into two categories, trained and non-trained, based on whether training data are needed. It also explains the fundamental theories and application scopes of decoding algorithms such as canonical correlation analysis (CCA), task-related component analysis (TRCA), and their extensions, summarizes commonly used processing strategies for decoding algorithms, and finally discusses the challenges and opportunities in this field.
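As a concrete reference point for the non-trained category, standard CCA decoding correlates the EEG segment with sine and cosine templates at each candidate frequency and picks the best match. The sketch below uses scikit-learn's CCA; the harmonic count and function names are illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_classify(eeg, freqs, fs, n_harmonics=3):
    """Return the candidate stimulation frequency whose sine/cosine reference
    set correlates best with the EEG segment (shape: samples x channels)."""
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in freqs:
        ref = np.column_stack([fn(2.0 * np.pi * (h + 1) * f * t)
                               for h in range(n_harmonics)
                               for fn in (np.sin, np.cos)])
        u, v = CCA(n_components=1).fit(eeg, ref).transform(eeg, ref)
        scores.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
    return freqs[int(np.argmax(scores))]
```

Trained methods such as TRCA replace the artificial sine/cosine references with templates learned from calibration data, which is the main dividing line between the two categories.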
Motor imagery (MI) is a mental process that can be recognized by electroencephalography (EEG) without actual movement. It has significant research value and application potential in the field of brain-computer interface (BCI) technology. To address the challenges posed by the non-stationary nature and low signal-to-noise ratio of MI-EEG signals, this study proposed a Riemannian spatial filtering and domain adaptation (RSFDA) method to improve the accuracy and efficiency of cross-session MI-BCI classification. The approach addresses the inconsistent data distributions of the source and target domains through a multi-module collaborative framework, enhancing the generalization capability of cross-session MI-EEG classification models. Comparative experiments on three public datasets evaluated RSFDA against eight existing methods in terms of classification accuracy and computational efficiency. The results demonstrated that RSFDA achieved an average classification accuracy of 79.37%, outperforming the state-of-the-art deep learning method Tensor-CSPNet (76.46%) by 2.91% (P < 0.01). Furthermore, the proposed method showed significantly lower computational cost, requiring only about 3 minutes of average training time versus 25 minutes for Tensor-CSPNet, a reduction of 22 minutes. These findings indicate that RSFDA performs well in cross-session MI-EEG classification by effectively balancing accuracy and efficiency, although its applicability in more complex transfer learning scenarios remains to be investigated.
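The cross-session alignment idea behind Riemannian methods of this kind can be illustrated by recentering each session's trial covariance matrices so that their mean becomes the identity, making sessions comparable before classification. The sketch below is not the paper's actual pipeline; it uses a Euclidean mean for brevity where RSFDA-style methods would typically use the Riemannian geometric mean.

```python
import numpy as np

def inv_sqrtm(M):
    """Inverse square root of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

def recenter_session(covs):
    """Map each trial covariance C_i to M^{-1/2} C_i M^{-1/2}, so the session
    mean becomes the identity and sessions share a common reference point.
    covs: array of shape (n_trials, n_channels, n_channels)."""
    W = inv_sqrtm(covs.mean(axis=0))   # Euclidean mean for brevity
    return np.einsum('ij,njk,kl->nil', W, covs, W)
```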
Control beyond visual range is of great significance for animal robots with wide-range motion capability. For pigeon robots, such control can be achieved with onboard preprogrammed stimulation, but this does not yet constitute a closed loop. This study designed a new control system for pigeon robots that integrates trajectory monitoring with brain stimulation. It achieved closed-loop control of turning and circling by estimating the pigeon's flight state in real time and applying the corresponding logical regulation. The stimulation targets were located in the formatio reticularis medialis mesencephali (FRM) of the left and right brain, for left- and right-turn control, respectively. The stimulus waveform mimicked the nerve cell membrane potential and was activated intermittently. The wearable control unit weighed 11.8 g in total. The results showed a 90% success rate for closed-loop control of pigeon robots. Equipping a pigeon robot with an onboard camera made it convenient to record wing shape during flight maneuvers, and it was also feasible to regulate the evolution of pigeon flocks using pigeon robots at different hierarchical levels. All of these lay the groundwork for the application of pigeon robots in scientific research.
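The closed-loop logic can be pictured as a simple heading regulator: estimate the flight state, and when the heading error exceeds a tolerance, intermittently stimulate the FRM on the side of the needed turn. The sketch below is purely illustrative; the hardware I/O, the state estimator, and the membrane-potential-like stimulus waveform are stand-ins for the system described above.

```python
import time

LEFT_FRM, RIGHT_FRM = "left_frm", "right_frm"   # hypothetical electrode IDs

def closed_loop_turn(get_heading, desired_heading, stimulate,
                     tolerance_deg=15.0, period_s=0.5):
    """Steer toward desired_heading: read the estimated compass heading and,
    when the error exceeds the tolerance, stimulate the FRM on the side of
    the needed turn. Stimulation is intermittent, once per control period."""
    while True:
        # wrap error into [-180, 180); negative means a left turn is needed
        error = (desired_heading - get_heading() + 180.0) % 360.0 - 180.0
        if abs(error) > tolerance_deg:
            stimulate(LEFT_FRM if error < 0 else RIGHT_FRM)
        time.sleep(period_s)
```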
Artificial intelligence (AI)-enhanced brain-computer interfaces (BCIs) are expected to significantly improve the performance of traditional BCIs in multiple respects, including usability, user experience, and user satisfaction, particularly in terms of intelligence. However, such AI-integrated or AI-based BCI systems may introduce new ethical issues. This paper first evaluated the potential of AI technology, especially deep learning, to enhance the performance of BCI systems, including improvements in decoding accuracy, information transfer rate, real-time performance, and adaptability. Building on this, it argued that AI-enhanced BCI systems may introduce new or more severe ethical issues than traditional BCI systems, including the possibility of making users' intentions and behaviors more predictable and manipulable, as well as an increased likelihood of technological abuse. Measures to mitigate these ethical risks were also discussed. It is hoped that this paper will promote a deeper understanding of, and reflection on, the ethical risks of AI-enhanced BCIs and the corresponding regulations.
Steady-state visual evoked potential (SSVEP) has been widely used in brain-computer interface (BCI) research in recent years. The advantages of SSVEP-BCI systems include high classification accuracy, a fast information transfer rate, and strong anti-interference ability. Most traditional studies induce SSVEP responses in the low- and middle-frequency bands as control signals. However, SSVEP in these bands may cause visual fatigue and can even trigger epileptic seizures in subjects. In contrast, high-frequency SSVEP-BCI provides more comfortable and natural interaction despite its lower amplitude and weaker response, and it has therefore attracted wide attention from researchers in recent years. This paper summarizes and analyzes research on high-frequency SSVEP-BCI over the past ten years in terms of paradigms and algorithms. Finally, the application prospects and development directions of high-frequency SSVEP are discussed.
This study investigates a brain-computer interface (BCI) system based on an augmented reality (AR) environment and steady-state visual evoked potentials (SSVEP), designed to let users select real-world objects by visual gaze in everyday scenarios. By integrating object detection and AR technology, the system overlays visual enhancements on real objects, providing users with stimuli that induce the corresponding brain signals. SSVEP decoding is then used to interpret these signals and identify the object the user is focusing on. In addition, an adaptive dynamic-time-window filter bank canonical correlation analysis was employed to parse the subjects' brain signals rapidly. Experimental results indicated that the system could effectively recognize SSVEP signals, achieving an average accuracy of 90.6% in visual target identification. This system extends the application of SSVEP signals to real-life scenarios, demonstrating its feasibility and efficacy in assisting individuals with mobility impairments and physical disabilities in object selection tasks.
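Filter bank CCA, which the adaptive dynamic-time-window method builds on, scores each candidate frequency by combining CCA correlations from several sub-bands with fixed weights. The sketch below shows that core computation only; the adaptive window logic is not reproduced, and the sub-band edges, the weight constants, and a sampling rate above 180 Hz are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.cross_decomposition import CCA

def fbcca_score(eeg, f, fs, n_bands=5, n_harmonics=3, a=1.25, b=0.25):
    """FBCCA score of one candidate frequency f for an EEG segment
    (samples x channels); classify by the argmax of this score over all
    candidate frequencies. Requires fs > 180 Hz for the 90 Hz band edge."""
    t = np.arange(eeg.shape[0]) / fs
    ref = np.column_stack([fn(2.0 * np.pi * (h + 1) * f * t)
                           for h in range(n_harmonics)
                           for fn in (np.sin, np.cos)])
    score = 0.0
    for k in range(1, n_bands + 1):
        bb, ab = butter(4, [8.0 * k, 90.0], btype="band", fs=fs)
        xk = filtfilt(bb, ab, eeg, axis=0)        # k-th sub-band of the EEG
        u, v = CCA(n_components=1).fit(xk, ref).transform(xk, ref)
        rho = np.corrcoef(u[:, 0], v[:, 0])[0, 1]
        score += (k ** -a + b) * rho ** 2         # standard FBCCA weighting
    return score
```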
Patients with amyotrophic lateral sclerosis (ALS) often have difficulty expressing their intentions through language and behavior, which prevents them from communicating properly with the outside world and seriously affects their quality of life. The brain-computer interface (BCI) has received much attention as an aid for ALS patients to communicate with the outside world, but bulky equipment is inconvenient for patients in practical use. To improve the portability of the BCI system, this paper proposed a wearable P300-speller BCI system based on augmented reality (MR-BCI). The system used a HoloLens 2 augmented reality headset to present the paradigm, an OpenBCI device to acquire EEG signals, and a Jetson Nano embedded computer to process the data. Meanwhile, to optimize character recognition performance, this paper proposed a convolutional neural network (CNN) classification method with low computational complexity, deployed on the embedded system for real-time classification. The results showed that, compared with a P300-speller BCI system based on a computer screen (CS-BCI), MR-BCI increased the amplitude of the P300 component, improved accuracy by 1.7% and 1.4% in offline and online experiments, respectively, and increased the information transfer rate by 0.7 bit/min. The MR-BCI proposed in this paper realizes a wearable BCI system without sacrificing performance, which has a positive effect on the clinical application of BCI.
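A low-complexity CNN of the kind described would typically pair a spatial convolution across electrodes with a strided temporal convolution and a small dense head, keeping the parameter count low enough for embedded inference. The PyTorch sketch below is an illustrative stand-in, not the paper's architecture; the channel count and class layout are assumptions.

```python
import torch.nn as nn

class P300CNN(nn.Module):
    """Small CNN for single-trial P300 epochs shaped (batch, 1, channels,
    samples); assumed 8 electrodes, binary target vs. non-target output."""
    def __init__(self, n_ch=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(n_ch, 1)),              # spatial filters
            nn.BatchNorm2d(8), nn.ELU(),
            nn.Conv2d(8, 8, kernel_size=(1, 16), stride=(1, 4)), # temporal filters
            nn.BatchNorm2d(8), nn.ELU(),
            nn.AdaptiveAvgPool2d((1, 8)),                        # length-agnostic
            nn.Flatten())
        self.classifier = nn.Linear(8 * 8, 2)

    def forward(self, x):
        return self.classifier(self.features(x))
```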