Corpus ID: 235489686

Transformer-based Spatial-Temporal Feature Learning for EEG Decoding

@article{Song2021TransformerbasedSF,
  title={Transformer-based Spatial-Temporal Feature Learning for EEG Decoding},
  author={Yonghao Song and Xueyu Jia and Lie Yang and Longhan Xie},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.11170}
}
At present, electroencephalograph (EEG) decoding is usually performed with methods based on convolutional neural networks (CNNs). However, CNNs are limited in perceiving global dependencies, which is inadequate for common EEG paradigms with strong overall relationships. To address this issue, we propose a novel EEG decoding method that relies mainly on the attention mechanism. The EEG data is first preprocessed and spatially filtered. Then, we apply attention transforming on the… 
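As an illustration only (the paper's released code is not reproduced here), the PyTorch sketch below shows a pipeline of the kind the abstract describes: a learned spatial filter over the electrode channels, followed by a transformer encoder whose self-attention captures global temporal dependencies, and a classification head. The channel count, window length, and layer sizes are assumptions for the example, not values taken from the paper.

```python
# Illustrative sketch only, not the authors' model: learned spatial filtering
# followed by self-attention over temporal tokens for EEG classification.
import torch
import torch.nn as nn


class AttentionEEGDecoder(nn.Module):
    def __init__(self, n_channels=22, n_filters=16, d_model=64,
                 n_heads=4, n_classes=4):
        super().__init__()
        # "Spatial filtering": a 1x1 convolution that mixes the EEG channels,
        # learned end-to-end (assumed stand-in for the paper's spatial filter).
        self.spatial = nn.Conv1d(n_channels, n_filters, kernel_size=1)
        # Temporal embedding: slice the filtered signal into tokens.
        self.patch = nn.Conv1d(n_filters, d_model, kernel_size=25, stride=25)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                       batch_first=True),
            num_layers=2,
        )
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                  # x: (batch, channels, time)
        x = self.spatial(x)                # (batch, filters, time)
        x = self.patch(x)                  # (batch, d_model, tokens)
        x = x.transpose(1, 2)              # (batch, tokens, d_model)
        x = self.encoder(x)                # self-attention over all tokens
        return self.head(x.mean(dim=1))    # average-pool tokens, classify


if __name__ == "__main__":
    model = AttentionEEGDecoder()
    logits = model(torch.randn(8, 22, 1000))  # e.g. 22 channels, 4 s @ 250 Hz
    print(logits.shape)                       # torch.Size([8, 4])
```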

Citations

A Channel Attention Based MLP-Mixer Network for Motor Imagery Decoding With EEG

TLDR
A novel channel attention based MLP-Mixer network (CAMLP-Net) is proposed for EEG-based MI decoding, which achieves superior classification performance over all the compared algorithms.

Multi-Channel EEG Emotion Recognition Based on Parallel Transformer and 3D-Convolutional Neural Network

TLDR
A multi-channel EEG emotion recognition model based on a parallel transformer and a three-dimensional convolutional neural network (3D-CNN) is proposed, which achieves greater accuracy in emotion recognition than other methods.

A Self-Supervised Learning Based Channel Attention MLP-Mixer Network for Motor Imagery Decoding

TLDR
A novel self-supervised learning (SSL) based channel attention MLP-Mixer network (S-CAMLP-Net) is proposed for MI decoding with EEG, which can effectively learn long-range temporal information and global spatial features of EEG signals.

Rethinking CNN Architecture for Enhancing Decoding Performance of Motor Imagery-Based EEG Signals

TLDR
A novel model, M-ShallowConvNet, is proposed, which resolves the problems of the conventional model and demonstrates that performance improvement can be achieved with only a few small modifications.

EEG temporal–spatial transformer for person identification

TLDR
This paper proposes a transformer-based approach for the EEG person identification task that extracts features in the temporal and spatial domains using a self-attention mechanism and shows that the method reaches state-of-the-art results.

ConTraNet: A single end-to-end hybrid network for EEG-based and EMG-based human machine interfaces

TLDR
This work introduces a single hybrid model called ConTraNet, based on CNN and Transformer architectures, which is equally useful for EEG-HMI and EMG-HMI paradigms and generalizes well compared to current state-of-the-art algorithms.

TEMGNet: Deep Transformer-based Decoding of Upperlimb sEMG for Hand Gestures Recognition

TLDR
A novel Vision Transformer-based neural network architecture is proposed to recognize upper-limb hand gestures from sEMG for myocontrol of prostheses; it is superior in terms of structural capacity while having seven times fewer trainable parameters.

TraHGR: Transformer for Hand Gesture Recognition via ElectroMyography

TLDR
This paper proposes a hybrid framework based on the transformer architecture, referred to as the Transformer for Hand Gesture Recognition (TraHGR), consisting of two parallel paths followed by a linear layer that acts as a fusion center to integrate the advantages of each module and provide robustness over different scenarios.
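As a rough illustration of the "two parallel paths fused by a linear layer" pattern described in this summary (not TraHGR's actual architecture), a minimal PyTorch sketch with made-up path modules might look like the following; the channel count, gesture count, and layer sizes are assumptions.

```python
# Toy sketch of a parallel-path model with a linear fusion center; the two
# path modules here are placeholders, not the cited paper's layers.
import torch
import torch.nn as nn


class ParallelPathFusion(nn.Module):
    def __init__(self, n_channels=10, d=32, n_classes=17):
        super().__init__()
        # Path A: convolutional features over the sEMG window.
        self.path_a = nn.Sequential(
            nn.Conv1d(n_channels, d, kernel_size=5, padding=2),
            nn.ELU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
        )
        # Path B: transformer-style attention over temporal tokens.
        self.embed = nn.Conv1d(n_channels, d, kernel_size=10, stride=10)
        self.path_b = nn.TransformerEncoderLayer(d_model=d, nhead=4,
                                                 batch_first=True)
        # Fusion center: one linear layer over the concatenated path outputs.
        self.fusion = nn.Linear(2 * d, n_classes)

    def forward(self, x):                      # x: (batch, channels, time)
        a = self.path_a(x)                                           # (batch, d)
        b = self.path_b(self.embed(x).transpose(1, 2)).mean(dim=1)   # (batch, d)
        return self.fusion(torch.cat([a, b], dim=1))


if __name__ == "__main__":
    y = ParallelPathFusion()(torch.randn(4, 10, 200))
    print(y.shape)  # torch.Size([4, 17])
```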

FS-HGR: Few-Shot Learning for Hand Gesture Recognition via Electromyography

TLDR
Motivated by recent advances in deep neural networks and their widespread applications in human-machine interfaces, this work designs a modern DNN-based gesture detection model that relies on minimal training data while providing high accuracy.

GeoECG: Data Augmentation via Wasserstein Geodesic Perturbation for Robust Electrocardiogram Prediction

TLDR
This paper proposes a physiologically-inspired data augmentation method to improve performance and increase the robustness of heart disease detection based on ECG signals, and designs a ground metric that recognizes the difference between ECG signals based on physiologically determined features.

References

Showing 1–10 of 54 references

A Multi-Branch 3D Convolutional Neural Network for EEG-Based Motor Imagery Classification

TLDR
A novel MI classification framework is introduced, including a new 3D representation of EEG, a multi-branch 3D convolutional neural network (3D CNN), and the corresponding classification strategy; it reaches state-of-the-art kappa values and significantly outperforms other algorithms.

A Convolutional Recurrent Attention Model for Subject-Independent EEG Signal Analysis

TLDR
A convolutional recurrent attention model (CRAM) is presented that utilizes a convolutional neural network to encode the high-level representation of EEG signals and a recurrent attention mechanism to explore the temporal dynamics of the EEG signals as well as to focus on the most discriminative temporal periods.
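A minimal PyTorch sketch of the pattern this summary describes follows: a CNN encodes each temporal window of the EEG, and a recurrent layer with additive attention weights the most discriminative time periods. All layer sizes and the windowing scheme are illustrative assumptions, not CRAM's actual configuration.

```python
# Sketch of a CNN encoder + recurrent attention over temporal windows;
# layer sizes are assumptions for illustration only.
import torch
import torch.nn as nn


class ConvRecurrentAttention(nn.Module):
    def __init__(self, n_channels=22, hidden=64, n_classes=4):
        super().__init__()
        self.cnn = nn.Sequential(                      # per-window encoder
            nn.Conv1d(n_channels, 32, kernel_size=11, padding=5),
            nn.ELU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.rnn = nn.LSTM(32, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)               # additive attention score
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, windows, channels, time)
        b, w, c, t = x.shape
        feats = self.cnn(x.reshape(b * w, c, t)).reshape(b, w, 32)
        h, _ = self.rnn(feats)                          # (batch, windows, hidden)
        alpha = torch.softmax(self.attn(h), dim=1)      # weights over windows
        context = (alpha * h).sum(dim=1)                # weighted temporal summary
        return self.head(context)


if __name__ == "__main__":
    y = ConvRecurrentAttention()(torch.randn(8, 10, 22, 100))
    print(y.shape)  # torch.Size([8, 4])
```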

Improving EEG-Based Motor Imagery Classification via Spatial and Temporal Recurrent Neural Networks

TLDR
This paper proposed a purely RNN-based parallel method for encoding spatial and temporal sequential raw data with bidirectional Long Short-Term Memory (bi-LSTM) and standard LSTM, respectively, and demonstrated the superior performance of this approach in the multi-class trial-wise movement intention classification scenario.

Motor Imagery Classification via Temporal Attention Cues of Graph Embedded EEG Signals

TLDR
A Graph-based Convolutional Recurrent Attention Model (G-CRAM) is proposed to explore EEG features across different subjects for motor imagery classification and achieves superior performance to state-of-the-art methods regarding recognition accuracy and ROC-AUC.

Hybrid deep neural network using transfer learning for EEG motor imagery decoding

An Attention-based Bi-LSTM Method for Visual Object Classification via EEG

Deep Temporal-Spatial Feature Learning for Motor Imagery-Based Brain–Computer Interfaces

TLDR
A deep learning approach termed filter-bank spatial filtering and temporal-spatial convolutional neural network (FBSF-TSCNN) is proposed for MI decoding, where the FBSF block transforms the raw EEG signals into an appropriate intermediate representation and the TSCNN block then decodes it.
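To illustrate the two-stage idea in this summary (not the cited paper's exact FBSF-TSCNN), the PyTorch sketch below assumes the filter-bank decomposition has already been applied, and shows per-band spatial filtering followed by a temporal convolution and a classifier. The band count and layer sizes are assumptions.

```python
# Sketch of per-band spatial filtering followed by a temporal-spatial CNN;
# the input is assumed to be pre-decomposed into frequency bands.
import torch
import torch.nn as nn


class TemporalSpatialCNN(nn.Module):
    def __init__(self, n_bands=9, n_channels=22, n_classes=4):
        super().__init__()
        # Spatial filtering: collapse the electrode dimension within each band.
        self.spatial = nn.Conv2d(n_bands, 24, kernel_size=(n_channels, 1))
        # Temporal convolution over the spatially filtered signals.
        self.temporal = nn.Sequential(
            nn.Conv2d(24, 24, kernel_size=(1, 25), stride=(1, 5)),
            nn.ELU(),
            nn.AdaptiveAvgPool2d((1, 8)),
        )
        self.head = nn.Linear(24 * 8, n_classes)

    def forward(self, x):                 # x: (batch, bands, channels, time)
        x = self.spatial(x)               # (batch, 24, 1, time)
        x = self.temporal(x)              # (batch, 24, 1, 8)
        return self.head(x.flatten(1))


if __name__ == "__main__":
    y = TemporalSpatialCNN()(torch.randn(8, 9, 22, 1000))
    print(y.shape)  # torch.Size([8, 4])
```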

TSception: Capturing Temporal Dynamics and Spatial Asymmetry from EEG for Emotion Recognition

TLDR
TSception, a multi-scale convolutional neural network that can classify emotions from EEG, is proposed; it achieves higher classification accuracies and F1 scores than other methods in most of the experiments.
...