In this paper, we investigate stable patterns of electroencephalogram (EEG) signals over time for emotion recognition using a machine learning approach. Up to now, various findings on activation patterns associated with different emotions have been reported; however, their stability over time has not been fully investigated. In this paper, we focus on …
In recent years, deep architectures have achieved great success in unsupervised feature learning from data, especially for images and speech. In this paper, we introduce recent advances in deep learning models to classify two emotional categories (positive and negative) from EEG data. We train a deep belief network (DBN) with differential entropy …
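The differential entropy feature mentioned in this abstract is commonly computed per frequency band; for an approximately Gaussian band-filtered signal it reduces to ½ ln(2πeσ²), where σ² is the signal variance. A minimal sketch of this feature extraction follows; the sampling rate, band edges, and synthetic signal are illustrative assumptions, not values taken from the paper:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def differential_entropy(x):
    """DE of an approximately Gaussian signal: 0.5 * ln(2*pi*e*var(x))."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

def bandpass(x, low, high, fs, order=4):
    """Butterworth band-pass filter applied forward and backward (zero phase)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

fs = 200  # sampling rate in Hz (assumed)
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}

# 10 s of synthetic single-channel EEG as a stand-in for real recordings
rng = np.random.default_rng(0)
eeg = rng.standard_normal(fs * 10)

# One DE value per band; per-channel, per-band DE vectors are a common EEG feature
features = {name: differential_entropy(bandpass(eeg, lo, hi, fs))
            for name, (lo, hi) in bands.items()}
```

In practice the signal is usually split into short windows and DE is computed per window, channel, and band, yielding the feature vectors fed to the classifier.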
This paper presents a new emotion recognition method that combines electroencephalograph (EEG) signals and pupillary responses collected with an eye tracker. We select 15 emotional film clips of 3 categories (positive, neutral and negative). The EEG signals and eye tracking data of five participants are recorded simultaneously while they watch these videos. We …
In this paper, we adopt a multimodal framework that combines eye movements and electroencephalography (EEG) to enhance emotion recognition. The main contributions of this paper are twofold. a) We investigate sixteen eye movements related to emotions and identify the intrinsic patterns of these eye movements for three emotional states: …
Addressing the structural and functional variability between subjects for a robust affective brain-computer interface (aBCI) is challenging but of great importance, since the calibration phase for aBCI is time-consuming. In this paper, we propose a subject transfer framework for electroencephalogram (EEG)-based emotion recognition via component analysis. We …
Various studies have shown that traditional electrooculograms (EOGs) are effective for driving fatigue detection. However, the traditional EOG recording method places electrodes around the eyes, which may disturb the subjects' activities and is not convenient for practical applications. To deal with this problem, we propose a novel electrode …
Individual differences across subjects and the non-stationary characteristics of electroencephalography (EEG) limit the generalization of affective brain-computer interfaces in real-world applications. On the other hand, it is very time-consuming and expensive to acquire a large number of subject-specific labeled data for learning subject-specific models. In …
To enhance the performance of affective models and reduce the cost of acquiring physiological signals for real-world applications, we adopt a multimodal deep learning approach to construct affective models from multiple physiological signals. For the unimodal enhancement task, we show that the best recognition accuracy of 82.11% on the SEED dataset is achieved …
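A common baseline for combining multiple physiological signals, against which deeper fusion models are compared, is feature-level fusion: concatenating the per-trial feature vectors of each modality before classification. A minimal sketch with synthetic stand-ins for EEG and eye-movement features (the feature dimensions, class structure, and classifier are illustrative assumptions, not details from the paper):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials = 300
y = rng.integers(0, 3, n_trials)  # three emotion classes

# Synthetic stand-ins: EEG features and eye-movement features per trial,
# with a small class-dependent shift so the classes are separable
eeg = rng.standard_normal((n_trials, 310)) + y[:, None] * 0.3
eye = rng.standard_normal((n_trials, 33)) + y[:, None] * 0.3

# Feature-level fusion: concatenate modalities along the feature axis
fused = np.hstack([eeg, eye])

X_tr, X_te, y_tr, y_te = train_test_split(fused, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

Deep multimodal approaches instead learn a shared representation across modalities, but this concatenation baseline is the usual point of comparison.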
This study aims to find the relationship between EEG signals and human emotional states. Movie clips are used as stimuli to evoke positive, neutral and negative emotions in subjects. We introduce a new effective classifier named discriminative graph regularized extreme learning machine (GELM) for EEG-based emotion recognition. The average classification …
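GELM builds on the basic extreme learning machine (ELM), which fixes random hidden-layer weights and solves only for the output weights in closed form via ridge regression; the graph-Laplacian regularization that distinguishes GELM is omitted here. A minimal plain-ELM sketch (the hidden-layer size, ridge penalty, and synthetic data are illustrative assumptions):

```python
import numpy as np

class ELM:
    """Basic extreme learning machine: a random tanh hidden layer plus a
    ridge-regression output layer. (GELM adds a graph-Laplacian penalty
    to the output-weight objective, which is not implemented here.)"""

    def __init__(self, n_hidden=200, reg=1e-2, seed=0):
        self.n_hidden, self.reg = n_hidden, reg
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        T = np.eye(n_classes)[y]  # one-hot targets
        # Hidden weights are drawn randomly and never trained
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = self._hidden(X)
        # Closed-form ridge solution: beta = (H^T H + reg*I)^{-1} H^T T
        A = H.T @ H + self.reg * np.eye(self.n_hidden)
        self.beta = np.linalg.solve(A, H.T @ T)
        return self

    def predict(self, X):
        return (self._hidden(X) @ self.beta).argmax(axis=1)

# Usage on synthetic, well-separated 3-class data
rng = np.random.default_rng(1)
X = np.vstack([rng.standard_normal((100, 10)) + 3 * c for c in range(3)])
y = np.repeat(np.arange(3), 100)
model = ELM(n_hidden=200, reg=1e-2, seed=0).fit(X, y)
train_acc = (model.predict(X) == y).mean()
```

Because only the linear output layer is solved for, training reduces to a single regularized least-squares problem, which is what makes ELM-style classifiers fast to fit.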