Adaptation to their environment is a fundamental capability for living agents, from which autonomous robots could also benefit. This work proposes a connectionist architecture, DRAMA, for dynamic control and learning of autonomous robots. DRAMA stands for dynamical recurrent associative memory architecture. It is a time-delay recurrent neural network, using …
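The core ingredients named in the abstract, recurrence, time delays, and associative (Hebbian) learning, can be sketched as follows. This is a minimal illustration, not the actual DRAMA architecture: the class name, tanh activation, single delay line, and learning rate are all assumptions for the sake of a runnable example.

```python
import numpy as np

class TimeDelayAssociativeMemory:
    """Toy time-delay recurrent associative memory (illustrative only).

    Each step combines the current input with the network's own
    activation from `delay` steps earlier, and a Hebbian update
    associates the two with the resulting activity.
    """

    def __init__(self, n_inputs, n_units, delay=1, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0.0, 0.1, (n_units, n_inputs))   # input weights
        self.W_rec = np.zeros((n_units, n_units))               # delayed recurrent weights
        self.history = [np.zeros(n_units) for _ in range(delay)]  # delay line

    def step(self, x, learn=True, lr=0.1):
        x = np.asarray(x, dtype=float)
        delayed = self.history.pop(0)                 # activation from t - delay
        act = np.tanh(self.W_in @ x + self.W_rec @ delayed)
        if learn:
            # Hebbian association of current activity with the input
            # and with the time-delayed recurrent activity.
            self.W_in += lr * np.outer(act, x)
            self.W_rec += lr * np.outer(act, delayed)
        self.history.append(act)
        return act
```

The delay line is what lets the network associate events across time rather than only within a single time step.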
Communication is a desirable skill for robots. We describe a method by which such skills could be learned. A control architecture based on a connectionist model, combining lifelong learning with predefined behaviours, is developed and implemented in a physical system of two autonomous robots. A teaching scenario based on movement imitation is used to teach a basic non …
Godot is a mobile robot platform that serves as a testbed for the interface between a sophisticated low-level robot navigation system and a symbolic high-level spoken dialogue system. The interesting feature of this combined system is that information flows in two directions: (1) The navigation system supplies landmark information from the cognitive map used for …
Sharing a common context of perception is a prerequisite in order for several agents to develop a common understanding of a language. We propose a method, based on a simple imitative strategy, for transmitting a vocabulary from a teacher agent to a learner agent. A learner robot follows and thus implicitly imitates the movements of a teacher robot. While …
For mobile robots, as well as other learning systems, the ability to highlight unexpected features of their environment (novelty detection) is very useful. One particularly important application for a robot equipped with novelty detection is inspection: highlighting potential problems in an environment. In this paper, two novelty filters, both of which …
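One common shape for such a filter, not necessarily the two filters in the paper, is habituation over stored prototypes: a percept is novel when it is far from everything seen so far, and novel percepts are then stored so they stop being flagged. The class name and distance threshold below are assumptions for illustration.

```python
import numpy as np

class NoveltyFilter:
    """Nearest-prototype novelty detector with habituation (a sketch).

    A percept is novel if its distance to every stored prototype
    exceeds `threshold`; novel percepts are added to the store, so
    repeated exposure makes them familiar.
    """

    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.prototypes = []  # feature vectors seen and learned so far

    def is_novel(self, x):
        x = np.asarray(x, dtype=float)
        if not self.prototypes:
            self.prototypes.append(x)   # everything is novel at first
            return True
        nearest = min(np.linalg.norm(x - p) for p in self.prototypes)
        if nearest > self.threshold:
            self.prototypes.append(x)   # learn it: novelty habituates
            return True
        return False
```

For inspection, the robot would run such a filter on successive patrols: features flagged as novel on a later patrol are candidate problems, since the normal environment has already been habituated.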
Nonlinear dendritic processing appears to be a feature of biological neurons and would also be of use in many applications of artificial neural networks. This paper presents a model of an initially standard linear node which uses unsupervised learning to find clusters of inputs within which inactivity at one synapse can occlude the activity at the other …
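The occlusion idea can be illustrated with a sigma-pi-style node: within a cluster, synapse activities multiply, so one inactive synapse silences its cluster-mates, while cluster contributions sum linearly as in a standard node. This is a sketch of the general mechanism, with the function name, clustering, and weights assumed; it is not the paper's learned model.

```python
import numpy as np

def dendritic_output(x, clusters, w):
    """Sum-of-products node (illustrative sketch).

    x        -- input activities in [0, 1]
    clusters -- list of index tuples, one tuple per dendritic cluster
    w        -- one weight per cluster

    Within a cluster the product means a near-zero synapse occludes
    the rest; across clusters contributions simply add, as they would
    in an ordinary linear node.
    """
    return float(sum(wi * np.prod([x[i] for i in idx])
                     for wi, idx in zip(w, clusters)))
```

With all clusters of size one this reduces to a standard linear node, which matches the abstract's description of a node that starts linear and acquires nonlinearity through clustering.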
This paper describes the EvoTanks research project, a continuing attempt to develop strong AI players for a primitive 'Combat'-style video game using evolutionary computational methods with artificial neural networks. This is a small but challenging task, since the agents' actions must rely heavily on opponent behaviour. Previous investigation has …
Due to the unavoidable fact that a robot's sensors will be limited in some manner, it is entirely possible that the robot will find itself unable to distinguish between differing states of the world (the world is, in effect, partially observable). If reinforcement learning is used to train the robot, then this confounding of states can have a serious effect on its …
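The problem described here is usually called perceptual aliasing: two world states that demand different actions produce the same observation, so a purely reactive policy (a mapping from observations to actions) cannot be optimal in both. A minimal made-up illustration, with hypothetical state and action names:

```python
# Two distinct world states whose optimal actions differ.
WORLD_OPTIMAL = {"junction_A": "left", "junction_B": "right"}

def observe(state):
    # Limited sensors: both junctions produce the same percept,
    # so the true state is not recoverable from the observation.
    return "junction"

# A reactive policy can only map observations to actions, and the
# aliased observation forces one action for both underlying states.
policy = {"junction": "left"}

correct = sum(policy[observe(s)] == a for s, a in WORLD_OPTIMAL.items())
```

However the single action is chosen, at most one of the two aliased states is handled correctly; a reinforcement learner can also receive inconsistent rewards for the "same" observation-action pair, which is why state confounding disrupts training.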
This paper outlines some ideas as to how robot learning experiments might best be designed. There are three principal findings: (i) in order to evaluate robot learners we must employ multiple evaluation methods together; (ii) in order to measure in any absolute way the performance of a learning algorithm we must characterise the complexity of the underlying …