Gillian M. Hayes

The sparse distributed memory (SDM) was originally developed to tackle the problem of storing large binary data patterns. The model performs well at storing random input data; however, its efficiency in handling nonrandom data is poor. In its original form it is a static and inflexible system. Most of the recent work on the SDM has…
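For illustration only (this is a minimal Kanerva-style SDM sketch, not the paper's extension), the basic write/read mechanism can be expressed as follows; the vector length, number of hard locations, and activation radius are arbitrary tuning assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

DIM = 256          # length of binary address/data vectors
N_LOCATIONS = 1000 # number of hard storage locations
RADIUS = 110       # Hamming-distance activation radius (tuning assumption)

# Random hard-location addresses and integer counters (Kanerva's basic scheme).
addresses = rng.integers(0, 2, size=(N_LOCATIONS, DIM), dtype=np.int8)
counters = np.zeros((N_LOCATIONS, DIM), dtype=np.int32)

def activated(addr):
    """Indices of hard locations within RADIUS of the query address."""
    dist = np.count_nonzero(addresses != addr, axis=1)
    return np.flatnonzero(dist <= RADIUS)

def write(addr, data):
    """Increment counters where the data bit is 1, decrement where it is 0."""
    idx = activated(addr)
    counters[idx] += np.where(data == 1, 1, -1).astype(np.int32)

def read(addr):
    """Sum counters over activated locations and threshold at zero."""
    idx = activated(addr)
    return (counters[idx].sum(axis=0) > 0).astype(np.int8)

# Store a random pattern autoassociatively and recall it from a noisy cue.
pattern = rng.integers(0, 2, size=DIM, dtype=np.int8)
write(pattern, pattern)
cue = pattern.copy()
cue[:20] ^= 1   # flip 20 bits of the cue
print(np.count_nonzero(read(cue) != pattern), "bits wrong after recall")
```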
This paper introduces an integration of reinforcement learning and behavior-based control designed to produce real-time learning in situated agents. The model layers a distributed and asynchronous reinforcement learning algorithm over a learned topological map and standard behavioral substrate to create a reinforcement learning complex. The topological map…
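The abstract is cut off before the topological map is described, so the sketch below is only one generic way such a map can be grown: add a node whenever the agent is far from every existing node and link consecutively visited nodes. The 2-D positions, radius, and class names are assumptions for illustration, not the paper's method.

```python
import math

class TopologicalMap:
    """Grow a graph of 'place' nodes from a stream of (x, y) positions."""

    def __init__(self, radius=1.0):
        self.radius = radius   # new-node distance threshold (assumption)
        self.nodes = []        # list of (x, y) node centres
        self.edges = set()     # undirected edges as frozensets of node ids
        self._last = None      # id of the most recently visited node

    def _nearest(self, pos):
        best, best_d = None, float("inf")
        for i, centre in enumerate(self.nodes):
            d = math.dist(pos, centre)
            if d < best_d:
                best, best_d = i, d
        return best, best_d

    def observe(self, pos):
        node, dist = self._nearest(pos)
        if node is None or dist > self.radius:
            self.nodes.append(pos)          # far from everything: new node
            node = len(self.nodes) - 1
        if self._last is not None and self._last != node:
            self.edges.add(frozenset((self._last, node)))
        self._last = node
        return node

# Feed a short synthetic trajectory through the map.
tmap = TopologicalMap(radius=1.0)
for pos in [(0, 0), (0.3, 0.1), (1.5, 0.0), (3.0, 0.2), (3.1, 1.6)]:
    tmap.observe(pos)
print(len(tmap.nodes), "nodes,", len(tmap.edges), "edges")
```

A reinforcement learner could then treat the nodes of such a graph as states and its edges as actions.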
This paper introduces a novel study on the sense of valency as a vital process for achieving adaptation in agents through evolution and developmental learning. Unlike previous studies, we hypothesise that behaviour-related information must be underspecified in the genes and that additional mechanisms such as valency modulate final behavioural responses.
The goal of an Evolutionary Algorithm (EA) is to find the optimal solution to a given problem by evolving a set of initial potential solutions. When the problem is multi-modal, an EA will often become trapped in a suboptimal solution (premature convergence). The Scouting-Inspired Evolutionary Algorithm (SEA) is a relatively new technique that avoids premature convergence…
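Since the abstract is truncated before the scouting mechanism itself, the sketch below shows only a plain generational EA on a multimodal function, i.e. the setting in which premature convergence arises. The fitness function, mutation scheme, and all parameters are assumptions made for the example and are not the SEA.

```python
import math
import random

random.seed(1)

def fitness(x):
    # A simple multimodal test function (an assumption for this sketch):
    # many local optima, with the global optimum at x = 0.
    return -(x * x) + 5.0 * math.cos(3.0 * x)

POP_SIZE, GENERATIONS, SIGMA = 30, 60, 0.3

population = [random.uniform(-10.0, 10.0) for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    # Tournament selection of a parent, then Gaussian mutation of its value.
    offspring = []
    for _ in range(POP_SIZE):
        a, b = random.sample(population, 2)
        parent = a if fitness(a) > fitness(b) else b
        offspring.append(parent + random.gauss(0.0, SIGMA))
    population = offspring

best = max(population, key=fitness)
spread = max(population) - min(population)
print(f"best x = {best:.3f}, fitness = {fitness(best):.3f}, spread = {spread:.3f}")
# A tiny spread around a point far from x = 0 is the signature of premature
# convergence that scouting-style mechanisms are designed to avoid.
```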
This paper introduces a formalization of the dynamics between sensorimotor interaction and homeostasis, integrated in a single architecture to learn object affordances of consummatory behaviours. We also describe the principles necessary to learn grounded knowledge in the context of an agent and its surrounding environment, which we use to…
Reinforcement learning (RL) in the context of artificial agents is typically used to produce behavioural responses as a function of the reward obtained by interaction with the environment. When the problem consists of learning the shortest path to a goal, it is common to use reward functions yielding a fixed value after each decision, for example a…
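To make the fixed per-decision reward concrete (the grid, parameters, and tabular Q-learning below are illustrative assumptions, not taken from the paper), here is a small shortest-path setup where every move costs -1 until the goal is reached, so maximising return is equivalent to minimising path length.

```python
import random

random.seed(0)

W, H = 5, 5
GOAL = (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # left, right, down, up
ALPHA, GAMMA, EPSILON, EPISODES = 0.5, 0.95, 0.1, 500

Q = {((x, y), a): 0.0 for x in range(W) for y in range(H) for a in range(4)}

def step(state, a):
    dx, dy = ACTIONS[a]
    nx = min(max(state[0] + dx, 0), W - 1)
    ny = min(max(state[1] + dy, 0), H - 1)
    next_state = (nx, ny)
    # Fixed reward of -1 per decision; 0 once the goal is reached.
    reward = 0.0 if next_state == GOAL else -1.0
    return next_state, reward, next_state == GOAL

for _ in range(EPISODES):
    state, done = (0, 0), False
    while not done:
        if random.random() < EPSILON:
            a = random.randrange(4)
        else:
            a = max(range(4), key=lambda act: Q[(state, act)])
        next_state, reward, done = step(state, a)
        target = reward + (0.0 if done else
                           GAMMA * max(Q[(next_state, b)] for b in range(4)))
        Q[(state, a)] += ALPHA * (target - Q[(state, a)])
        state = next_state

# Greedy value at the start state after learning (more negative = longer path).
print(max(Q[((0, 0), a)] for a in range(4)))
```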
The application of external forces to a robot can disrupt its normal operation and cause localisation errors. We present a novel approach for detecting external disturbances based on optic flow, without the use of egomotion information. Even though this research moderately validates the efficacy of the model, we argue that its application is plausible to a large…
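The abstract does not spell out the detection rule, so the following is only a generic sketch of the underlying idea: flag frames whose optic-flow statistics deviate sharply from the recent baseline, using images alone. OpenCV's Farneback flow, the window length, and the k-sigma threshold are all assumptions for this sketch.

```python
import cv2
import numpy as np

def flow_magnitude(prev_gray, gray):
    """Mean optic-flow magnitude between two consecutive grayscale frames."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return float(np.linalg.norm(flow, axis=2).mean())

def detect_disturbances(frames, window=30, k=3.0):
    """Flag frames whose flow magnitude deviates strongly from the recent mean.

    `window` and the k-sigma rule are tuning assumptions; no egomotion
    (odometry) information is used, only the image stream itself.
    """
    history, flags, prev = [], [], None
    for frame in frames:                       # frames assumed BGR images
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            mag = flow_magnitude(prev, gray)
            recent = history[-window:]
            if len(recent) >= window:
                mean, std = np.mean(recent), np.std(recent) + 1e-6
                flags.append(abs(mag - mean) > k * std)
            else:
                flags.append(False)            # not enough history yet
            history.append(mag)
        prev = gray
    return flags
```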
We describe a robot vision system which produces a depth map in real time by means of motion parallax or kinetic depth. A video camera is held by a robot which moves so that a given point in space is kept fixated on the centre of the camera's imaging surface. The optical flow is calculated in a Datacube MaxVideo system and a full-frame depth map is produced…
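As a simplified illustration of kinetic depth (not the Datacube pipeline described in the paper): for a camera translating sideways at speed T while a rotation keeps the fixation point centred, the residual horizontal flow near the image centre is approximately u ≈ f·T·(1/Z − 1/Z_fix), so depth follows by inverting that relation. The focal length, speed, and fixation depth below are made-up numbers.

```python
import numpy as np

def depth_from_parallax(flow_x, f, T, z_fix):
    """Invert u ~= f * T * (1/Z - 1/z_fix) to estimate depth Z per pixel.

    flow_x : horizontal residual optic flow (pixels/s) after fixation
    f      : focal length in pixels
    T      : lateral camera speed (m/s)
    z_fix  : depth of the fixated point (m)
    Small-angle approximation; valid only near the image centre.
    """
    inv_z = flow_x / (f * T) + 1.0 / z_fix
    return 1.0 / np.clip(inv_z, 1e-6, None)   # avoid divide-by-zero at infinity

# Synthetic check: generate the flow a known scene would produce, then invert it.
f, T, z_fix = 500.0, 0.2, 2.0                 # illustrative values
true_depth = np.array([1.0, 2.0, 4.0, 8.0])   # metres
flow_x = f * T * (1.0 / true_depth - 1.0 / z_fix)
print(depth_from_parallax(flow_x, f, T, z_fix))   # ~[1, 2, 4, 8]
```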
Evolutionary Algorithms (EAs) are common optimization techniques based on the concept of Darwinian evolution. During the search for the global optimum of a search space, a traditional EA will often become trapped in a local optimum. The Scouting-Inspired Evolutionary Algorithms (SEAs) are a recently-introduced family of EAs that use a cross-generational…