Where Is My Mind (looking at)? Predicting Visual Attention from Brain Activity
@article{Delvigne2022WhereIM,
  title   = {Where Is My Mind (looking at)? Predicting Visual Attention from Brain Activity},
  author  = {Victor Delvigne and No{\'e} Tits and Luca La Fisca and Nathan Hubens and Antoine Maiorca and Hazem Wannous and Thierry Dutoit and Jean-Philippe Vandeborre},
  journal = {ArXiv},
  year    = {2022},
  volume  = {abs/2201.03902}
}
Visual attention estimation is an active field of research at the crossroads of several disciplines: computer vision, artificial intelligence, and medicine. One of the most common approaches to estimating a saliency map representing attention is based on the observed images. In this paper, we show that visual attention can instead be retrieved from EEG recordings, with results comparable to traditional predictions from observed images. For this purpose, a set of signals…