Gillian M. Hayes

The sparse distributed memory (SDM) was originally developed to tackle the problem of storing large binary data patterns. The model performed well in storing random input data; however, its efficiency, particularly in handling nonrandom data, was poor. In its original form it is a static and inflexible system. Most of the recent work on the SDM has …
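The abstract leaves the model's details to the paper itself; as a point of reference, the following is a minimal Kanerva-style SDM sketch with toy parameters (256-bit words, 1,000 hard locations, activation radius 120), not the specific variant discussed above.

```python
import numpy as np

# Minimal Kanerva-style sparse distributed memory. The word length, number of
# hard locations, and activation radius are illustrative toy choices.
class SDM:
    def __init__(self, n_bits=256, n_locations=1000, radius=120, rng=None):
        self.rng = rng or np.random.default_rng(0)
        # Fixed random hard locations (addresses) and their bit counters.
        self.addresses = self.rng.integers(0, 2, size=(n_locations, n_bits))
        self.counters = np.zeros((n_locations, n_bits), dtype=int)
        self.radius = radius

    def _active(self, address):
        # Activate every hard location within the Hamming radius of the cue.
        return np.count_nonzero(self.addresses != address, axis=1) <= self.radius

    def write(self, address, data):
        # Add the data word, coded as +1/-1, into the counters of active locations.
        self.counters[self._active(address)] += np.where(data == 1, 1, -1)

    def read(self, address):
        # Sum counters over active locations and threshold at zero.
        return (self.counters[self._active(address)].sum(axis=0) >= 0).astype(int)

rng = np.random.default_rng(1)
sdm = SDM(rng=rng)
pattern = rng.integers(0, 2, size=256)
sdm.write(pattern, pattern)                   # autoassociative store
noisy = pattern.copy()
noisy[:20] ^= 1                               # corrupt 20 bits of the cue
print(np.count_nonzero(sdm.read(noisy) != pattern))   # recall errors, ideally 0
```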
Learning affordances can be defined as learning action potentials, i.e., learning that an object exhibiting certain regularities offers the possibility of performing a particular action. We propose a method to endow an agent with the capability of acquiring this knowledge by relating the object invariants with the potentiality of performing an action via …
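The abstract is cut off before the method itself is described; as a generic illustration of mapping object invariants to the potential of an action, here is a toy sketch with made-up features and a standard classifier, not the authors' method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy stand-in for affordance learning: relate object invariants (two made-up
# features, "roundness" and "size") to whether a push-to-roll action succeeded.
rng = np.random.default_rng(0)
roundness = rng.uniform(0, 1, 200)
size = rng.uniform(0, 1, 200)
rolled = (roundness > 0.6) & (size < 0.5)     # pretend round, small objects roll

X = np.column_stack([roundness, size])
model = LogisticRegression().fit(X, rolled)

# Predicted affordance ("can be rolled") for a new object.
print(model.predict_proba([[0.9, 0.3]])[0, 1])
```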
This paper introduces an integration of reinforcement learning and behavior-based control designed to produce real-time learning in situated agents. The model layers a distributed and asynchronous reinforcement learning algorithm over a learned topological map and standard behavioral substrate to create a reinforcement learning complex. The topological map …
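No details of the reinforcement learning complex appear in this excerpt; the sketch below is only a generic example of tabular Q-learning layered over a small hand-coded topological map, not the architecture described in the paper.

```python
import random
from collections import defaultdict

# Generic tabular Q-learning over a toy topological map
# (nodes = places, edges = behaviours that move between them).
random.seed(0)
graph = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "goal"],
    "goal": [],
}
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
Q = defaultdict(float)

def choose(node):
    # Epsilon-greedy choice among the edges leaving this node.
    if random.random() < EPSILON:
        return random.choice(graph[node])
    return max(graph[node], key=lambda nxt: Q[(node, nxt)])

for episode in range(500):
    node = "A"
    while node != "goal":
        nxt = choose(node)
        reward = 1.0 if nxt == "goal" else 0.0
        best_next = max((Q[(nxt, n)] for n in graph[nxt]), default=0.0)
        Q[(node, nxt)] += ALPHA * (reward + GAMMA * best_next - Q[(node, nxt)])
        node = nxt

# Greedy first move after learning: A -> B or C, then D, then goal.
print(max(graph["A"], key=lambda n: Q[("A", n)]))
```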
This paper introduces a formalization of the dynamics between sensorimotor interaction and homeostasis, integrated in a single architecture to learn object affordances of consummatory behaviours. We also describe the principles necessary to learn grounded knowledge in the context of an agent and its surrounding environment, which we use to …
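The formalization itself is not shown in this excerpt; as a rough illustration of a homeostatic variable coupled to a consummatory behaviour, here is a toy loop with made-up constants, not the paper's formal model.

```python
# A single internal "energy" level decays over time; when the deficit (drive)
# grows large enough, a consummatory action restores it toward the setpoint.
SETPOINT, DECAY, RESTORE = 1.0, 0.02, 0.3
energy = 1.0

def drive(level):
    # The drive grows with the deficit between the setpoint and current level.
    return max(SETPOINT - level, 0.0)

for t in range(50):
    energy = max(energy - DECAY, 0.0)              # metabolic decay each timestep
    if drive(energy) > 0.5:                        # deficit large enough to act
        energy = min(energy + RESTORE, SETPOINT)   # consummatory action: eat
        print(f"t={t}: ate, energy restored to {energy:.2f}")
```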
This paper introduces a novel study on the sense of valency as a vital process for achieving adaptation in agents through evolution and developmental learning. Unlike previous studies, we hypothesise that behaviour-related information must be underspecified in the genes and that additional mechanisms such as valency modulate final behavioural responses. …
Reinforcement learning (RL) in the context of artificial agents is typically used to produce behavioural responses as a function of the reward obtained by interaction with the environment. When the problem consists of learning the shortest path to a goal, it is common to use reward functions yielding a fixed value after each decision, for example a …
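A minimal worked example of the kind of fixed per-step reward the abstract refers to, using value iteration on a toy 1-D corridor; the environment and constants are illustrative, not from the paper.

```python
import numpy as np

# Every action costs -1 until the goal is reached, so the optimal value of a
# state equals minus its distance to the goal.
N, GOAL, GAMMA = 6, 5, 1.0
V = np.zeros(N)

def neighbours(s):
    # Left/right moves, clipped at the ends of the corridor.
    return {max(s - 1, 0), min(s + 1, N - 1)}

for _ in range(100):
    for s in range(N):
        if s == GOAL:
            continue                      # goal is absorbing, value 0
        V[s] = max(-1.0 + GAMMA * V[s2] for s2 in neighbours(s))

print(V)   # converges to [-5, -4, -3, -2, -1, 0]
```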
The goal of an Evolutionary Algorithm (EA) is to find the optimal solution to a given problem by evolving a set of initial potential solutions. When the problem is multi-modal, an EA will often become trapped in a suboptimal solution (premature convergence). The Scouting-Inspired Evolutionary Algorithm (SEA) is a relatively new technique that avoids premature …
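For context, here is a generic real-valued EA sketch that exhibits the premature-convergence problem on a two-peak function; it is not the Scouting-Inspired Evolutionary Algorithm itself, and the function and operators are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)

def fitness(x):
    # Two peaks: a local optimum near x = -2 and the global optimum near x = 3.
    return np.exp(-(x + 2) ** 2) + 2.0 * np.exp(-(x - 3) ** 2)

pop = rng.uniform(-4, -1, size=30)        # population initialised near the local peak
for generation in range(100):
    scores = fitness(pop)
    parents = pop[np.argsort(scores)[-10:]]                               # truncation selection
    pop = rng.choice(parents, size=30) + rng.normal(0, 0.1, size=30)      # mutation

best = pop[np.argmax(fitness(pop))]
print(best)   # typically stuck near -2 rather than the global optimum at 3
```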
Robot positioning is an important function of autonomous intelligent robots. However, the application of external forces to a robot can disrupt its normal operation and cause localisation errors. We present a novel approach for detecting external disturbances based on optic flow without the use of egomotion information. Even though this research moderately …
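The detection method itself is not described in this excerpt; the following is only a generic flow-statistics heuristic, assuming OpenCV's Farnebäck dense optic flow, that flags sudden spikes in flow magnitude as possible disturbances. It is not the approach proposed in the paper, and the video path is hypothetical.

```python
import cv2
import numpy as np

def detect_disturbances(video_path, factor=3.0):
    # Flag frames whose mean optic-flow magnitude jumps well above a running baseline.
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        return []
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    baseline, flagged, frame_idx = None, [], 0

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame_idx += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        magnitude = np.linalg.norm(flow, axis=2).mean()
        if baseline is not None and magnitude > factor * baseline:
            flagged.append(frame_idx)                 # sudden flow spike
        baseline = magnitude if baseline is None else 0.9 * baseline + 0.1 * magnitude
        prev_gray = gray

    cap.release()
    return flagged

# Example usage (hypothetical file):
# print(detect_disturbances("robot_run.avi"))
```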