Mark Elshaw

Learning by multimodal observation of vision and language offers a potentially powerful paradigm for robot learning. Recent experiments have shown that 'mirror' neurons are activated when an action is being performed, perceived, or verbally referred to. Different input modalities are processed by distributed cortical neuron ensembles for leg, arm and head …
In the MirrorBot project we examine perceptual processes using models of cortical assemblies and mirror neurons to explore the emergence of semantic representations of actions, percepts and concepts in a neural robot. The hypothesis under investigation is that a neural model can produce a life-like perception system for actions. In this context we focus …
In this paper we describe two models for the neural grounding of robotic language processing in actions. These models draw on concepts of the mirror neuron system to produce learning by imitation, combining high-level vision, language and motor command inputs. The models learn to perform and recognise three behaviours, 'go', 'pick' and …
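As a rough illustration of how such an associator might combine the three modalities, the following minimal Python sketch trains a single-layer network with the delta rule to map concatenated vision, language and motor vectors onto behaviour units. All dimensions, feature codings and the synthetic data are invented here, and the abstract truncates before naming the third behaviour, so a placeholder is used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative modality sizes (not taken from the paper).
VISION_DIM, LANG_DIM, MOTOR_DIM = 8, 6, 4
BEHAVIOURS = ["go", "pick", "..."]  # third behaviour elided in the abstract
N_BEH = len(BEHAVIOURS)
D = VISION_DIM + LANG_DIM + MOTOR_DIM

def make_pattern(k):
    """Synthetic multimodal input: each modality carries a noisy
    signature of behaviour k, and the three parts are concatenated."""
    parts = []
    for dim in (VISION_DIM, LANG_DIM, MOTOR_DIM):
        proto = np.zeros(dim)
        proto[k % dim] = 1.0                    # toy behaviour signature
        parts.append(proto + 0.1 * rng.standard_normal(dim))
    return np.concatenate(parts)

# Single-layer associator trained with the delta rule.
W = np.zeros((N_BEH, D))
for _ in range(300):
    k = int(rng.integers(N_BEH))
    x = make_pattern(k)
    target = np.eye(N_BEH)[k]
    W += 0.1 * np.outer(target - W @ x, x)      # delta-rule update

# Recognition: the behaviour unit with the strongest response wins.
print(BEHAVIOURS[int(np.argmax(W @ make_pattern(1)))])  # -> 'pick'
```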
We describe a hybrid generative and predictive model of the motor cortex. The generative model is related to the hierarchically directed cortico-cortical (or thalamo-cortical) connections, and unsupervised training leads to a topographic and sparse hidden representation of its sensory and motor input. The predictive model is related to lateral intra-area and …
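The split between a top-down generative pathway and a lateral predictive pathway can be caricatured in a few lines of Python. The sketch below (all sizes and learning rates invented) uses a 2-winner-take-all code as a crude stand-in for the sparse hidden representation; the topographic aspect, which would need a neighbourhood function, is omitted for brevity. Generative weights learn to reconstruct the input from the code, while lateral weights learn to predict the next code from the current one.

```python
import numpy as np

rng = np.random.default_rng(1)

N_IN, N_HID = 10, 16          # illustrative sizes, not from the paper

# Generative (top-down) weights: hidden code -> reconstructed input.
W = 0.1 * rng.standard_normal((N_IN, N_HID))
# Predictive (lateral) weights: hidden code at t -> hidden code at t+1.
P = np.zeros((N_HID, N_HID))

def encode(x):
    """2-winner-take-all code: a crude stand-in for the sparse hidden
    representation (the topographic neighbourhood is omitted here)."""
    h = np.zeros(N_HID)
    h[np.argsort(W.T @ x)[-2:]] = 1.0
    return h

# A toy repeating sensory sequence of one-hot inputs.
seq = [np.eye(N_IN)[t % N_IN] for t in range(200)]

h_prev = None
for x in seq:
    h = encode(x)
    W += 0.05 * np.outer(x - W @ h, h)        # generative: reconstruct input
    if h_prev is not None:
        P += 0.05 * np.outer(h - P @ h_prev, h_prev)  # predictive: transition
    h_prev = h

# The lateral model now anticipates which code follows the first input.
h0 = encode(seq[0])
print("predicted next winners:", np.sort(np.argsort(P @ h0)[-2:]))
print("actual next winners:   ", np.flatnonzero(encode(seq[1])))
```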
In this paper we focus on how instructions for actions can be modelled in a self-organising memory. Our approach draws on the concepts of regional distributed modularity and self-organisation. We describe a self-organising model that clusters action representations into different locations depending on the body part they relate to. In the first case …
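A minimal Kohonen self-organising map in Python can illustrate the clustering idea: noisy vectors standing in for 'leg', 'arm' and 'head' action representations come to occupy separate regions of the map. The body parts are taken from the earlier abstract; the vector coding, map size and training schedule are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented feature vectors standing in for action representations that
# relate to the three body parts named in the abstracts above.
DIM = 12
body_parts = ["leg", "arm", "head"]
prototypes = {p: rng.standard_normal(DIM) for p in body_parts}

def action_vector(part):
    return prototypes[part] + 0.2 * rng.standard_normal(DIM)

# A small 6x6 Kohonen self-organising map.
GRID, STEPS = 6, 1000
W = 0.1 * rng.standard_normal((GRID, GRID, DIM))
coords = np.stack(np.meshgrid(np.arange(GRID), np.arange(GRID),
                              indexing="ij"), axis=-1)

for t in range(STEPS):
    x = action_vector(body_parts[int(rng.integers(len(body_parts)))])
    bmu = np.unravel_index(np.argmin(np.linalg.norm(W - x, axis=2)),
                           (GRID, GRID))                 # best-matching unit
    sigma = 3.0 * (1 - t / STEPS) + 0.5                  # shrinking radius
    g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=2)
               / (2 * sigma ** 2))
    W += 0.2 * g[..., None] * (x - W)                    # neighbourhood update

# Actions for the same body part end up on nearby map units.
for p in body_parts:
    d = np.linalg.norm(W - prototypes[p], axis=2)
    print(p, "-> unit", np.unravel_index(np.argmin(d), d.shape))
```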
Imitation learning offers a valuable approach for developing intelligent robot behaviour. We present an imitation approach based on an associator neural network inspired by brain modularity and mirror neurons. The model combines multimodal input from higher-level vision, motor control and language so that a simulated student robot is able to learn …
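Reduced to its core, the student's learning problem can be sketched as observing a teacher's observation-to-command mapping and associating the two with an error-correcting update; this toy version (all dimensions and the linear teacher are invented) omits the modular, mirror-neuron-inspired structure of the actual model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical teacher: a fixed linear mapping from multimodal
# observations to motor commands (weights invented for illustration).
D_OBS, D_MOT = 10, 3
teacher_W = rng.standard_normal((D_MOT, D_OBS))

# The student observes (observation, demonstrated command) pairs and
# associates them with an error-correcting delta-rule update.
student_W = np.zeros_like(teacher_W)
for _ in range(500):
    obs = rng.standard_normal(D_OBS)
    demo = teacher_W @ obs                          # demonstrated command
    student_W += 0.05 * np.outer(demo - student_W @ obs, obs)

# After observation, the student reproduces the behaviour on new input.
probe = rng.standard_normal(D_OBS)
print(np.allclose(student_W @ probe, teacher_W @ probe, atol=0.1))  # True
```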
Objects of interest are represented in the brain simultaneously in different frames of reference. Knowing the positions of one's head and eyes, for example, one can compute the body-centred position of an object from its perceived coordinates on the retinae. We propose a simple and fully trained attractor network which computes head-centred coordinates …
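In one dimension the underlying computation is just the sum of the retinal angle and the current eye position; the sketch below (ranges and sizes invented) replaces the attractor dynamics with a trained linear read-out to show that the transform is learnable from examples.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy 1-D version of the transform: the head-centred angle of a target
# is its retinal angle plus the current eye position (ranges invented).
retina = rng.uniform(-30, 30, size=500)     # degrees
eye = rng.uniform(-20, 20, size=500)        # degrees
head_centred = retina + eye

# A fully trained linear read-out learns the transform from examples
# (standing in for the attractor network, whose dynamics are omitted).
X = np.stack([retina, eye, np.ones_like(retina)], axis=1)
w, *_ = np.linalg.lstsq(X, head_centred, rcond=None)

print(np.round(w, 3))                       # ~ [1, 1, 0]
print(w @ np.array([12.0, -5.0, 1.0]))      # ~ 7.0 degrees head-centred
```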