Corpus ID: 233481286

Distilling EEG Representations via Capsules for Affective Computing

Guangyi Zhang and Ali Etemad
Affective computing with electroencephalogram (EEG) signals is a challenging task that requires cumbersome models to effectively learn the information contained in large-scale EEG data, causing difficulties for real-time deployment on smart devices. In this paper, we propose a novel knowledge distillation pipeline to distill EEG representations via capsule-based architectures for both classification and regression tasks. Our goal is to distill information from a heavy model to a lightweight model for…
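The abstract describes transferring knowledge from a heavy teacher model to a lightweight student. As a rough illustration only, the following is a minimal NumPy sketch of a generic Hinton-style distillation loss (temperature-softened KL term plus hard-label cross-entropy); the function names, temperature, and weighting below are assumptions and do not reproduce the paper's capsule-based pipeline.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax along the last axis."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Generic KD objective: alpha * T^2 * KL(teacher || student) + (1 - alpha) * CE(labels)."""
    p_t = softmax(teacher_logits, T)   # softened teacher targets
    p_s = softmax(student_logits, T)   # softened student predictions
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    p_hard = softmax(student_logits)   # T = 1 for the hard-label term
    ce = -np.log(p_hard[np.arange(len(labels)), labels] + 1e-12)
    return float(np.mean(alpha * (T ** 2) * kl + (1 - alpha) * ce))
```

The T^2 factor compensates for the 1/T^2 scaling that temperature introduces into the soft-target gradients, so the two loss terms stay on comparable scales.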


Learning Oculomotor Behaviors from Scanpath
A novel method is developed that creates rich representations of oculomotor scanpaths to facilitate the learning of downstream tasks; it outperforms baseline approaches and traditional scanpath methods in autism spectrum disorder and viewed-stimulus classification tasks.


RFNet: Riemannian Fusion Network for EEG-based Brain-Computer Interfaces
The novel Riemannian Fusion Network (RFNet), a deep neural architecture for learning spatial and temporal information from Electroencephalogram (EEG) for a number of different EEG-based Brain Computer Interface (BCI) tasks and applications, approaches the state-of-the-art on one dataset (SEED) and outperforms other methods on the other three datasets.
A Multimodal Approach to Estimating Vigilance Using EEG and Forehead EOG
A multimodal approach to estimating vigilance is proposed by combining EEG and forehead EOG and incorporating the temporal dependency of vigilance into model training and introducing continuous conditional neural field and continuous conditional random field models to capture dynamic temporal dependency.
Dynamic Routing Between Capsules
It is shown that a discriminatively trained, multi-layer capsule system achieves state-of-the-art performance on MNIST and is considerably better than a convolutional net at recognizing highly overlapping digits.
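For context on the capsule mechanism referenced above, here is a minimal NumPy sketch of routing-by-agreement between capsule layers; the shapes, iteration count, and helper names are illustrative assumptions, not the cited implementation.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    """Capsule non-linearity: short vectors shrink toward 0, long ones approach unit length."""
    sq = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq / (1.0 + sq)) * s / np.sqrt(sq + eps)

def dynamic_routing(u_hat, iterations=3):
    """Routing-by-agreement over prediction vectors u_hat of shape [num_in, num_out, dim]."""
    num_in, num_out, dim = u_hat.shape
    b = np.zeros((num_in, num_out))                # routing logits
    for _ in range(iterations):
        b_stable = b - b.max(axis=1, keepdims=True)
        c = np.exp(b_stable) / np.exp(b_stable).sum(axis=1, keepdims=True)  # coupling coefficients
        s = np.einsum('io,iod->od', c, u_hat)      # weighted sum into each output capsule
        v = squash(s)                              # output capsule vectors
        b = b + np.einsum('iod,od->io', u_hat, v)  # increase logits where predictions agree
    return v
```

Each iteration sends more of an input capsule's output to the higher-level capsules whose current outputs agree with its predictions, which is what lets capsules recognize overlapping digits.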
Distilling the Knowledge in a Neural Network
This work shows that it can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model and introduces a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse.
Investigating Critical Frequency Bands and Channels for EEG-Based Emotion Recognition with Deep Neural Networks
The experimental results show that neural signatures associated with different emotions do exist and share commonality across sessions and individuals; the performance of deep models is also compared with that of shallow models.
A Regression Method With Subnetwork Neurons for Vigilance Estimation Using EOG and EEG
In recent years, an increasing rate of road accidents has been attributed to low driver vigilance; the estimation of drivers' vigilance state therefore plays a significant role in road safety.
CapsField: Light Field-Based Face and Expression Recognition in the Wild Using Capsule Routing
A new deep face and expression recognition solution, called CapsField, is proposed, based on a convolutional neural network and an additional capsule network that utilizes dynamic routing to learn hierarchical relations between capsules.
Hyperbolic Capsule Networks for Multi-Label Classification
Compared with state-of-the-art methods, HyperCaps significantly improves multi-label classification (MLC) performance, especially on tail labels, and efficiently handles large-scale MLC datasets.
Regularizing Class-Wise Predictions via Self-Knowledge Distillation
A new regularization method is proposed that penalizes the predictive distribution between similar samples during training and results in regularizing the dark knowledge of a single network by forcing it to produce more meaningful and consistent predictions in a class-wise manner.
Spatio-Temporal Graph for Video Captioning With Knowledge Distillation
A novel spatio-temporal graph model for video captioning that exploits object interactions in space and time, together with an object-aware knowledge distillation mechanism in which local object information is used to regularize global scene features, is proposed.