Corpus ID: 245853859

Where Is My Mind (looking at)? Predicting Visual Attention from Brain Activity

@article{Delvigne2022WhereIM,
  title={Where Is My Mind (looking at)? Predicting Visual Attention from Brain Activity},
  author={Victor Delvigne and No{\'e} Tits and Luca La Fisca and Nathan Hubens and Antoine Maiorca and Hazem Wannous and Thierry Dutoit and Jean-Philippe Vandeborre},
  journal={ArXiv},
  year={2022},
  volume={abs/2201.03902}
}
Visual attention estimation is an active field of research at the crossroads of several disciplines: computer vision, artificial intelligence, and medicine. One of the most common approaches to estimating a saliency map that represents attention is based on the observed image itself. In this paper, we show that visual attention can instead be retrieved from EEG acquisition, with results comparable to traditional image-based predictions. For this purpose, a set of signals…
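The abstract is truncated before the method is described, so the following is an illustration only: a minimal PyTorch sketch of the kind of EEG-to-saliency mapping the paper studies. Everything here is an assumption, not the authors' architecture: the EEGToSaliency class, the layer sizes, and the 32-channel/512-sample input format are hypothetical, loosely inspired by compact EEG encoders such as EEGNet.

```python
import torch
import torch.nn as nn

class EEGToSaliency(nn.Module):
    """Hypothetical encoder-decoder mapping an EEG epoch to a 2-D saliency map.

    Input:  (batch, channels, samples), e.g. 32 EEG channels x 512 samples.
    Output: (batch, 1, 32, 32) saliency map with values in [0, 1].
    """
    def __init__(self, n_channels: int = 32, latent: int = 128):
        super().__init__()
        # Temporal then spatial convolutions over raw EEG, in the spirit of
        # compact EEG encoders such as EEGNet (an assumption, not the paper's
        # exact design).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(1, 64), padding=(0, 32), bias=False),
            nn.BatchNorm2d(8),
            nn.Conv2d(8, 16, kernel_size=(n_channels, 1), bias=False),  # spatial filter
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.AdaptiveAvgPool2d((1, 8)),
            nn.Flatten(),
            nn.Linear(16 * 8, latent),
            nn.ELU(),
        )
        # Decode the latent vector into a coarse 32x32 saliency map.
        self.decoder = nn.Sequential(
            nn.Linear(latent, 16 * 8 * 8),
            nn.Unflatten(1, (16, 8, 8)),
            nn.ConvTranspose2d(16, 8, kernel_size=4, stride=2, padding=1),  # -> 16x16
            nn.ELU(),
            nn.ConvTranspose2d(8, 1, kernel_size=4, stride=2, padding=1),   # -> 32x32
            nn.Sigmoid(),
        )

    def forward(self, eeg: torch.Tensor) -> torch.Tensor:
        x = eeg.unsqueeze(1)      # (B, C, T) -> (B, 1, C, T)
        return self.decoder(self.encoder(x))

if __name__ == "__main__":
    model = EEGToSaliency()
    eeg = torch.randn(4, 32, 512)   # 4 epochs, 32 channels, 512 samples
    saliency = model(eeg)
    print(saliency.shape)           # torch.Size([4, 1, 32, 32])
```

In practice such a model would be trained against ground-truth fixation maps from eye tracking, using a saliency loss such as the KL divergence between predicted and measured maps; the loss choice here is likewise an assumption.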

