Gillian M. Hayes

We do not exist alone. Humans and most other animal species live in societies where the behaviour of an individual influences and is influenced by other members of the society. Within societies, an individual learns not only on its own, through classical conditioning and reinforcement, but to a large extent through its conspecifics, by observation and …
Renal brush border membrane sodium/phosphate (Na/Pi)-cotransport activity is inhibited by hormonal mechanisms involving activation of protein kinases A and C. The recently cloned rat renal Na/Pi cotransporter (NaPi-2) contains several protein kinase C but no protein kinase A consensus sites [17, 20]. In the present study we have expressed wild type and …
Imitation and communication behaviours are important means of interaction between humans and robots. In experiments on robot teaching by demonstration, imitation and communication behaviours can be used by the demonstrator to direct the robot's attention to the demonstrated task. In a children's game, they play an important role in engaging the interaction …
The sparse distributed memory (SDM) was originally developed to tackle the problem of storing large binary data patterns. The model succeeded well in storing random input data. However, its efficiency, particularly in handling nonrandom data, was poor. In its original form it is a static and inflexible system. Most of the recent work on the SDM has …
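As a rough illustration of the kind of model the abstract refers to (not the paper's own variant), a minimal Kanerva-style SDM can be sketched as follows; the number of hard locations, the dimensionality, and the activation radius are arbitrary assumptions chosen only to make the sketch run:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 1000 hard locations, 256-bit patterns,
# activation radius 112 (activates roughly 2-3% of locations).
N_LOC, DIM, RADIUS = 1000, 256, 112

# Hard locations: fixed random binary addresses, each with a
# counter per bit that accumulates written data.
addresses = rng.integers(0, 2, size=(N_LOC, DIM))
counters = np.zeros((N_LOC, DIM), dtype=int)

def activated(addr):
    # Locations within Hamming distance RADIUS of the query address.
    return np.count_nonzero(addresses != addr, axis=1) <= RADIUS

def write(addr, data):
    act = activated(addr)
    # +1 for a 1 bit, -1 for a 0 bit, at every activated location.
    counters[act] += np.where(data == 1, 1, -1)

def read(addr):
    act = activated(addr)
    # Majority vote over the counters of the activated locations.
    return (counters[act].sum(axis=0) > 0).astype(int)

pattern = rng.integers(0, 2, size=DIM)
write(pattern, pattern)  # autoassociative storage: address = data
recalled = read(pattern)
```

The static, inflexible character the abstract criticises is visible here: the hard locations and the radius are fixed up front, regardless of how the input data are actually distributed.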
This paper introduces an integration of reinforcement learning and behavior-based control designed to produce real-time learning in situated agents. The model layers a distributed and asynchronous reinforcement learning algorithm over a learned topological map and standard behavioral substrate to create a reinforcement learning complex. The topological map …
Learning affordances can be defined as learning action potentials, i.e., learning that an object exhibiting certain regularities offers the possibility of performing a particular action. We propose a method to endow an agent with the capability of acquiring this knowledge by relating the object invariants with the potentiality of performing an action via …
Within the context of two sets of robotic experiments we have performed, we examine some representational and algorithmic issues that need to be addressed in order to equip robots with the capacity to imitate. We suggest that some of the difficulties might be eased by placing imitation architectures within a wider social context.
There is a spectrum of methods for learning robot control. At one end there are model-free methods (e.g. Q-learning, AHC, bucket brigade), and at the other there are model-based methods (e.g. dynamic programming by value or policy iteration). The advantage of one technique is the weakness of the other. Model-based methods use experience effectively, but are …
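The contrast between the two ends of the spectrum can be made concrete on a toy problem. The chain MDP, constants, and hyperparameters below are illustrative assumptions, not taken from the paper: value iteration exploits the known transition model directly, while Q-learning must recover the same values from sampled experience alone.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical deterministic chain MDP: states 0..4, goal at 4,
# reward 1 on entering the goal, 0 otherwise.
N_S, GOAL, GAMMA = 5, 4, 0.9
ACTIONS = (-1, +1)  # move left / move right

def step(s, a):
    s2 = min(max(s + ACTIONS[a], 0), N_S - 1)
    return s2, (1.0 if s2 == GOAL else 0.0)

# Model-based: value iteration sweeps over the known model.
V = np.zeros(N_S)
for _ in range(100):
    V = np.array([0.0 if s == GOAL else
                  max(step(s, a)[1] + GAMMA * V[step(s, a)[0]]
                      for a in range(2))
                  for s in range(N_S)])

# Model-free: Q-learning updates from one sampled transition at a time.
Q = np.zeros((N_S, 2))
ALPHA, EPS = 0.5, 0.2
for episode in range(500):
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection.
        a = int(rng.integers(2)) if rng.random() < EPS else int(Q[s].argmax())
        s2, r = step(s, a)
        Q[s, a] += ALPHA * (r + GAMMA * Q[s2].max() - Q[s, a])
        s = s2
```

Both end up with the same state values (here V[0] = 0.9³ ≈ 0.729), but the model-free learner needs hundreds of sampled episodes to get there, which is the trade-off the abstract describes.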
Reinforcement learning (RL) in the context of artificial agents is typically used to produce behavioural responses as a function of the reward obtained by interaction with the environment. When the problem consists of learning the shortest path to a goal, it is common to use reward functions yielding a fixed value after each decision, for example a positive …
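A small sketch of why such fixed per-step rewards encode shortest paths (the grid and its size are hypothetical, not the paper's setup): with a fixed reward of -1 per move and no discounting, the optimal value of each state equals minus its shortest-path distance to the goal, which a BFS confirms.

```python
from collections import deque

# Hypothetical 4-connected grid, goal in one corner.
W, H = 4, 3
GOAL = (3, 2)

def neighbours(s):
    x, y = s
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < W and 0 <= ny < H:
            yield (nx, ny)

states = [(x, y) for x in range(W) for y in range(H)]

# Value iteration with a fixed reward of -1 after each decision.
V = {s: 0.0 for s in states}
for _ in range(W * H):
    V = {s: 0.0 if s == GOAL else
            max(-1.0 + V[n] for n in neighbours(s))
         for s in states}

# BFS shortest-path distances from the goal, for comparison.
dist = {GOAL: 0}
queue = deque([GOAL])
while queue:
    s = queue.popleft()
    for n in neighbours(s):
        if n not in dist:
            dist[n] = dist[s] + 1
            queue.append(n)
```

After enough sweeps, V[s] == -dist[s] for every state: maximising the undiscounted return under a fixed -1 step reward is exactly shortest-path search.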