Mohammad Abu-Alqumsan

A brain-computer interface (BCI) translates brain activity into commands to control devices or software. Common approaches are based on visual evoked potentials (VEP), extracted from the electroencephalogram (EEG) during visual stimulation. High information transfer rates (ITR) can be achieved using (i) steady-state VEP (SSVEP) or (ii) code-modulated VEP …
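As a rough illustration of the information transfer rate mentioned above, the standard Wolpaw ITR formula for an N-target selection task can be computed as in the sketch below. This is a generic sketch, not code from the paper; the target count, accuracy, and trial duration in the usage line are illustrative values only.

```python
import math

def itr_bits_per_min(n_targets: int, accuracy: float, trial_seconds: float) -> float:
    """Wolpaw information transfer rate in bits/min for an N-target BCI."""
    if accuracy <= 1.0 / n_targets:
        return 0.0  # at or below chance level, ITR is conventionally reported as zero
    bits = math.log2(n_targets)
    if accuracy < 1.0:
        bits += accuracy * math.log2(accuracy)
        bits += (1.0 - accuracy) * math.log2((1.0 - accuracy) / (n_targets - 1))
    return bits * (60.0 / trial_seconds)

# Hypothetical example: a 32-target speller, 90% accuracy, 2 s per selection
print(f"{itr_bits_per_min(32, 0.90, 2.0):.1f} bits/min")
```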
Mobile visual location recognition needs to be performed in real time for location-based services to be perceived as useful. We describe and validate an approach that eliminates the network delay by preloading partial visual vocabularies to the mobile device. Retrieval performance is significantly increased by composing partial vocabularies based on the …
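The retrieval pipeline behind such an approach is typically a bag-of-visual-words scheme: local descriptors are quantized against a vocabulary of centroids, and images are ranked by histogram similarity. The sketch below is a generic illustration under that assumption; the array shapes, the random placeholder data, and the cell-based preloading comment are hypothetical and not taken from the paper.

```python
import numpy as np

def quantize(descriptors: np.ndarray, vocabulary: np.ndarray) -> np.ndarray:
    """Assign each local descriptor (M x D) to its nearest visual word (K x D)."""
    d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

def bow_histogram(descriptors: np.ndarray, vocabulary: np.ndarray) -> np.ndarray:
    """L2-normalised bag-of-visual-words histogram of one image."""
    words = quantize(descriptors, vocabulary)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# Hypothetical usage: the device has preloaded only the partial vocabulary for its
# coarse location cell and ranks the query against that cell's reference histograms.
rng = np.random.default_rng(0)
cell_vocabulary = rng.normal(size=(200, 64))      # preloaded partial vocabulary
reference_hists = rng.random(size=(50, 200))      # reference images of the cell
reference_hists /= np.linalg.norm(reference_hists, axis=1, keepdims=True)
query = bow_histogram(rng.normal(size=(300, 64)), cell_vocabulary)
best_match = int((reference_hists @ query).argmax())  # cosine-similarity ranking
print(best_match)
```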
Early stages in the development of a Brain-and-Body-Computer Interface-controlled robot avatar are presented. The robot is aimed at performing well-defined daily tasks at the choice of, and on behalf of, a user. We built on recent advances in neuroscience, robotics and machine learning to demonstrate that it is possible to control a robot, accurately and …
The development of technological applications that allow people to control and embody external devices within social interaction settings represents a major goal for current and future brain-computer interface (BCI) systems.
OBJECTIVE: Spatial filtering has proved to be a powerful pre-processing step in the detection of steady-state visual evoked potentials (SSVEP) and has boosted typical detection rates both in offline analysis and in online SSVEP-based brain-computer interface applications. State-of-the-art detection methods and the spatial filters they use share many common foundations, as …
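One widely used instance of spatially filtered SSVEP detection is canonical correlation analysis (CCA) between the multichannel EEG and sinusoidal reference signals at each candidate stimulation frequency. The sketch below illustrates that generic approach, not the specific methods analysed in the paper; the harmonic count, function names, and sampling assumptions are illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_correlation(eeg: np.ndarray, freq: float, fs: float, n_harmonics: int = 2) -> float:
    """Largest canonical correlation between EEG (samples x channels) and
    sine/cosine references at the given stimulation frequency and its harmonics."""
    t = np.arange(eeg.shape[0]) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    Y = np.column_stack(refs)
    x_scores, y_scores = CCA(n_components=1).fit_transform(eeg, Y)
    return float(np.corrcoef(x_scores[:, 0], y_scores[:, 0])[0, 1])

def detect_ssvep(eeg: np.ndarray, candidate_freqs, fs: float) -> float:
    """Return the candidate stimulation frequency whose references correlate best."""
    return max(candidate_freqs, key=lambda f: cca_correlation(eeg, f, fs))

# Hypothetical usage: 4 s of 8-channel EEG at 250 Hz, candidate targets at 8-15 Hz
eeg = np.random.default_rng(0).normal(size=(1000, 8))
print(detect_ssvep(eeg, [8.0, 10.0, 12.0, 15.0], fs=250.0))
```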