• Corpus ID: 235489686

Transformer-based Spatial-Temporal Feature Learning for EEG Decoding

Yonghao Song, Xueyu Jia, Lie Yang, Longhan Xie
At present, methods based on convolutional neural networks (CNNs) are widely used for electroencephalogram (EEG) decoding. However, CNNs are limited in their ability to perceive global dependencies, which is inadequate for common EEG paradigms with strong overall relationships. To address this issue, we propose a novel EEG decoding method that relies mainly on the attention mechanism. The EEG data is first preprocessed and spatially filtered; then, we apply attention transforming on the…
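The pipeline sketched in the abstract (preprocessing, spatial filtering, then attention over the resulting features) can be illustrated with a minimal NumPy sketch of scaled dot-product self-attention over EEG time steps. This is not the authors' implementation; the array shapes, projection matrices, and the single-head formulation are illustrative assumptions.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over the time axis.

    x: (T, D) array of T time steps with D spatially filtered components.
    w_q, w_k, w_v: (D, D) query/key/value projection matrices (assumed shapes).
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(x.shape[1])           # (T, T) pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over time steps
    return weights @ v                               # (T, D) attended features

rng = np.random.default_rng(0)
T, D = 128, 22                      # e.g. 128 samples, 22 EEG channels (illustrative)
x = rng.standard_normal((T, D))
w = [rng.standard_normal((D, D)) * 0.1 for _ in range(3)]
out = self_attention(x, *w)
print(out.shape)                    # (128, 22)
```

Because the attention weights couple every time step with every other, each output feature can draw on the whole trial, which is the global-dependency property the abstract contrasts with the local receptive fields of CNNs.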


A Channel Attention Based MLP-Mixer Network for Motor Imagery Decoding With EEG

A novel channel attention based MLP-Mixer network (CAMLP-Net) is proposed for EEG-based MI decoding, which achieves superior classification performance over all the compared algorithms.

A novel hybrid CNN-Transformer model for EEG Motor Imagery classification

A hybrid model that combines a convolutional neural network with the Transformer for decoding motor imagery EEG signals and outperforms the state-of-the-art methods is proposed, indicating that the CNN-Transformer model is a competing strategy.

Multi-Channel EEG Emotion Recognition Based on Parallel Transformer and 3D-Convolutional Neural Network

A multi-channel EEG emotion recognition model based on a parallel transformer and a three-dimensional convolutional neural network (3D-CNN) is proposed, which achieves greater accuracy in emotion recognition than other methods.

A Self-Supervised Learning Based Channel Attention MLP-Mixer Network for Motor Imagery Decoding

A novel self-supervised learning (SSL) based channel attention MLP-Mixer network (S-CAMLP-Net) for MI decoding with EEG can effectively learn more long-range temporal information and global spatial features of EEG signals.

Rethinking CNN Architecture for Enhancing Decoding Performance of Motor Imagery-Based EEG Signals

A novel model, called M–ShallowConvNet, is proposed, which resolves the problems of the conventional model and demonstrates that performance improvement can be achieved with only a few small modifications.

EEG temporal–spatial transformer for person identification

This paper proposes a transformer-based approach for the EEG person identification task that extracts features in the temporal and spatial domains using a self-attention mechanism and shows that the method reaches state-of-the-art results.

Emotional Stress Recognition Using Electroencephalogram Signals Based on a Three-Dimensional Convolutional Gated Self-Attention Deep Neural Network

A method to improve the accuracy of emotional stress recognition using multi-channel electroencephalogram (EEG) signals is presented, which combines a three-dimensional (3D) convolutional neural network with an attention mechanism to build a 3D convolutional gated self-attention neural network.

ConTraNet: A single end-to-end hybrid network for EEG-based and EMG-based human machine interfaces

This work introduces a single hybrid model called ConTraNet, based on CNN and Transformer architectures, which is equally useful for EEG-HMI and EMG-HMI paradigms and generalizes well compared to current state-of-the-art algorithms.

Continuous Seizure Detection Based on Transformer and Long-Term iEEG

An end-to-end model including convolution and transformer layers is proposed that requires no feature engineering or format transformation of the original multi-channel time series and improves model explainability.

TEMGNet: Deep Transformer-based Decoding of Upperlimb sEMG for Hand Gestures Recognition

A novel Vision Transformer-based neural network architecture is proposed to classify and recognize upper-limb hand gestures from sEMG for the myocontrol of prostheses, and is superior in structural capacity while having seven times fewer trainable parameters.

A Multi-Branch 3D Convolutional Neural Network for EEG-Based Motor Imagery Classification

A novel MI classification framework is introduced, including a new 3D representation of EEG, a multi-branch 3D convolutional neural network (3D-CNN), and the corresponding classification strategy, which reaches a state-of-the-art kappa value and significantly outperforms other algorithms.

A Convolutional Recurrent Attention Model for Subject-Independent EEG Signal Analysis

A convolutional recurrent attention model (CRAM) is presented that utilizes a convolutional neural network to encode a high-level representation of EEG signals and a recurrent attention mechanism to explore their temporal dynamics and to focus on the most discriminative temporal periods.

Improving EEG-Based Motor Imagery Classification via Spatial and Temporal Recurrent Neural Networks

This paper proposes a purely RNN-based parallel method for encoding spatial and temporal sequential raw data with bidirectional Long Short-Term Memory (bi-LSTM) and standard LSTM, respectively, and demonstrates the superior performance of this approach in the multi-class trial-wise movement intention classification scenario.

Motor Imagery Classification via Temporal Attention Cues of Graph Embedded EEG Signals

A Graph-based Convolutional Recurrent Attention Model (G-CRAM) is proposed to explore EEG features across different subjects for motor imagery classification and achieves superior performance to state-of-the-art methods regarding recognition accuracy and ROC-AUC.

Learning Temporal Information for Brain-Computer Interface Using Convolutional Neural Networks

This framework outperforms the best classification method in the literature on the BCI Competition IV-2a four-class MI dataset by a 7% increase in average subject accuracy, and studying the convolutional weights of the trained networks yields insight into the temporal characteristics of EEG.

Hybrid deep neural network using transfer learning for EEG motor imagery decoding

An Attention-based Bi-LSTM Method for Visual Object Classification via EEG

Deep Temporal-Spatial Feature Learning for Motor Imagery-Based Brain–Computer Interfaces

A deep learning approach termed filter-bank spatial filtering and temporal-spatial convolutional neural network (FBSF-TSCNN) is proposed for MI decoding, where the FBSF block transforms the raw EEG signals into an appropriate intermediate EEG representation, and the TSCNN block then decodes the intermediate EEG signals.
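The filter-bank idea in the entry above (decomposing raw EEG into several frequency bands before spatial filtering) can be sketched with FFT masking in NumPy. This is a rough illustration only, not the FBSF-TSCNN specification; the band edges, sampling rate, and array shapes are assumptions.

```python
import numpy as np

def filter_bank(x, fs, bands):
    """Split a (C, T) EEG segment into band-passed copies via FFT masking.

    x: (C, T) array, C channels over T samples at sampling rate fs (Hz).
    bands: list of (low, high) cutoff pairs in Hz.
    Returns an array of shape (len(bands), C, T).
    """
    freqs = np.fft.rfftfreq(x.shape[1], d=1.0 / fs)
    spectrum = np.fft.rfft(x, axis=1)
    out = []
    for low, high in bands:
        mask = (freqs >= low) & (freqs <= high)      # keep only this band's bins
        out.append(np.fft.irfft(spectrum * mask, n=x.shape[1], axis=1))
    return np.stack(out)

fs = 250                                             # a common BCI sampling rate
x = np.random.default_rng(1).standard_normal((22, 2 * fs))  # 22 channels, 2 s
bands = [(4, 8), (8, 12), (12, 30)]                  # theta, alpha, beta (illustrative)
banded = filter_bank(x, fs, bands)
print(banded.shape)                                  # (3, 22, 500)
```

Each band-passed copy would then be spatially filtered and fed to the downstream network, so the model sees frequency-specific views of the same trial.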