Richard Veale

Inherent in visual scene analysis is a bottleneck associated with the need to sequentially sample locations with foveating eye movements. The concept of a 'saliency map' topographically encoding stimulus conspicuity over the visual scene has proven to be an efficient predictor of eye movements. Our work reviews insights into the neurobiological…
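To make the saliency-map idea concrete, here is a minimal single-feature sketch (not the model reviewed in the paper): salience is approximated as center-surround contrast, the absolute difference between a fine and a coarse box blur of an intensity image. The window sizes and normalization are illustrative assumptions.

```python
import numpy as np

def box_blur(img, k):
    """k x k box filter (k odd), edges replicated; output has img's shape."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    # Integral image trick: window sums from double cumulative sums.
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))  # leading zero row/column
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

def saliency_map(img):
    """Center-surround contrast: |fine blur - coarse blur|, peak-normalized."""
    s = np.abs(box_blur(img, 3) - box_blur(img, 11))
    m = s.max()
    return s / m if m > 0 else s

# A lone bright pixel is the most conspicuous location.
img = np.zeros((32, 32))
img[10, 20] = 1.0
peak = np.unravel_index(np.argmax(saliency_map(img)), img.shape)
```

In a full Itti-Koch-style model this map would be computed per feature channel (intensity, color, orientation) across scales and then summed; the sketch keeps only the center-surround core.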
Natural, human-like human-robot interaction requires many functional capabilities from a robot, which must be reflected as components in the robotic control architecture. In particular, various mechanisms for producing social behaviors, goal-oriented cognition, and robust intelligence are required. In this paper, we present an overview of the…
Humans are remarkably good at recognizing spoken language, even in very noisy environments. Yet, artificial speech recognizers do not reach human-level performance, nor do they typically even attempt to model human speech processing. In this paper, we introduce a biologically plausible neural model of real-time spoken phrase recognition which shows how the…
This paper presents a hybrid cognitive model engaged in experiments demonstrating a successful mechanism for applying top-down contextual bias to a neural speech recognition system to improve its performance. The hybrid model includes a model of social dialogue moves, which it uses to selectively bias word recognition probabilities at a low level in the…
Zero-resource spoken term discovery in continuous speech is the discovery of repeated patterns in acoustic signals without any higher-level linguistic information. These patterns are then combined to define the compositional units of that speech. We describe and implement an algorithm that tags similar subsequences among sequences of acoustic features. We…
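As a toy illustration of this kind of subsequence tagging (not the paper's algorithm), the following compares fixed-length windows of two acoustic feature sequences by mean frame-wise Euclidean distance and reports pairs below a threshold; the window length and threshold are arbitrary assumptions.

```python
import numpy as np

def find_similar_subsequences(a, b, win=4, thresh=0.5):
    """Tag (i, j) pairs where a[i:i+win] and b[j:j+win] are closer than
    thresh in mean frame-wise Euclidean distance. a, b: (frames, features)."""
    matches = []
    for i in range(len(a) - win + 1):
        for j in range(len(b) - win + 1):
            d = np.linalg.norm(a[i:i + win] - b[j:j + win], axis=1).mean()
            if d < thresh:
                matches.append((i, j))
    return matches

# Plant an identical 4-frame pattern inside two otherwise-distant sequences.
rng = np.random.default_rng(0)
pat = rng.normal(size=(4, 3))
a = np.vstack([np.full((5, 3), 10.0), pat, np.full((3, 3), -10.0)])
b = np.vstack([np.full((2, 3), 20.0), pat])
matches = find_similar_subsequences(a, b)  # contains (5, 2)
```

Practical systems use variable-length alignment (e.g. segmental DTW) rather than fixed windows, but the brute-force version shows the core idea of exhaustively tagging matching regions.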
Recently the authors showed that a computational model of visual saliency could account for changes in gaze behavior of monkeys with damage in the primary visual cortex. Here we propose a neural prosthesis to restore eye gaze behavior by electrically stimulating the superior colliculus to drive visual attention. The saliency computational model is used to…
This paper presents evidence that spiking neuron models of parts of the human auditory system demonstrate habituation to real auditory word stimuli. This is accomplished via the simple addition of a model of spike-timing-dependent plasticity to synapses. This result is interesting because the base neural circuit has also been used for pragmatically useful…
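For reference, the canonical pair-based STDP weight update can be sketched as follows. This is the generic textbook form with illustrative parameter values, not the specific plasticity model used in the paper: the weight change depends exponentially on the latency between pre- and postsynaptic spikes.

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change for dt = t_post - t_pre (ms).
    Pre-before-post (dt > 0) potentiates; post-before-pre depresses.
    Parameter values are illustrative, not taken from the paper."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)
```

Applied repeatedly over a simulation, updates like this reshape synaptic weights as a function of spike timing; habituation to a repeated stimulus can then emerge from the circuit-level dynamics rather than from any explicit habituation rule.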
Infants are able to adaptively associate auditory stimuli with visual stimuli even in their first year of life, as demonstrated by multimodal habituation studies. Different from language acquisition during later developmental stages, this adaptive learning in young infants is temporary and still very much stimulus-driven. Hence, temporal aspects of…
The ERTS robotic golf cart is controlled using a Schema Architecture to follow a path of GPS waypoints and to avoid obstacles. The theory behind a bare-bones version of the architecture is presented, followed by actual testing data in simulation and on the real golf cart. Finally, more involved versions of the architecture which were implemented, but not…
Brain damage to visual cortex causes hemianopia, along with significant changes in looking behavior. Previously, we have proposed a visual attention neuroprosthesis to correct the distribution of visual attention in patients with damage to visual cortex. The prosthesis consists of an eye tracker, a forward-facing camera, and electrical microstimulation in…