In this work we propose a new paradigm for learning coordination in multi-agent systems. The approach is based on human social interaction, specifically on the fact that people tell each other what they think of their actions, and this opinion influences each other's behavior. We propose a model in which agents learn to…
Distributed systems such as clusters of PCs are low-cost alternatives for running parallel rendering systems, but they have high communication overhead and limited memory capacity on each processing node. In this paper we focus on the strategy for distributing the parallel rendering work among the PCs. A good distribution strategy provides better load…
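The excerpt does not spell out the paper's distribution strategy, but a common baseline for screen-space parallel rendering is to split the frame into tiles and assign them round-robin across nodes. The sketch below is illustrative only; the tile size, node count, and function name are assumptions, not the paper's method:

```python
# Illustrative screen-space work distribution for a PC cluster: split the
# frame into fixed-size tiles and assign them round-robin across nodes.
# Tile size and node count are assumptions; the paper's actual strategy
# is not detailed in this excerpt.

def assign_tiles(width, height, tile, num_nodes):
    # Map each node id to the list of (x, y, w, h) tiles it renders.
    assignment = {n: [] for n in range(num_nodes)}
    idx = 0
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            assignment[idx % num_nodes].append((x, y, tile, tile))
            idx += 1
    return assignment
```

Round-robin gives each node roughly the same number of tiles, but not necessarily the same rendering cost; load-balanced strategies would weight tiles by estimated scene complexity instead.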
This article presents a novel closed-loop control architecture based on the audio channels of several types of computing devices, such as, but not restricted to, mobile phones and tablet computers. The communication is based on an audio interface that relies on the exchange of audio tones, allowing sensors to be read and actuators to be controlled. As an…
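One simple way to exchange data over an audio channel, sketched below, is FSK-style signaling: each bit maps to one of two tone frequencies. The frequencies, bit rate, and sample rate here are illustrative assumptions, not the paper's protocol:

```python
import math

# Minimal FSK-style tone signaling sketch: each bit maps to one of two
# audio frequencies. SAMPLE_RATE, BIT_DURATION, and FREQ are illustrative
# assumptions, not the protocol from the article.
SAMPLE_RATE = 8000
BIT_DURATION = 0.01            # seconds per bit
FREQ = {0: 1000.0, 1: 2000.0}  # Hz for bit 0 / bit 1

def encode_bits(bits):
    # Generate a sine tone per bit, restarting the phase for each bit.
    samples = []
    n = int(SAMPLE_RATE * BIT_DURATION)
    for bit in bits:
        f = FREQ[bit]
        for i in range(n):
            samples.append(math.sin(2 * math.pi * f * i / SAMPLE_RATE))
    return samples

def decode_bit(chunk):
    # Crude frequency estimate from positive-going zero crossings,
    # then pick the nearer of the two known tone frequencies.
    crossings = sum(1 for a, b in zip(chunk, chunk[1:]) if a <= 0 < b)
    freq = crossings / BIT_DURATION
    return 0 if abs(freq - FREQ[0]) < abs(freq - FREQ[1]) else 1
```

A real audio link would add synchronization, error detection, and robustness to noise and speaker/microphone distortion; this sketch only shows the encode/decode principle.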
Nowadays many cultural spaces are accessible on the Internet. These synthetic worlds mix different media (text, audio, image, video) with attractive user-interface resources (such as 3D ones). More than mere presentation environments, these applications offer the user other facilities: personalization, interactive views, additional information and…
In this work we propose two behaviorally active policies for attentional control. These policies act on multi-modal sensory feedback. Two approaches are used to derive the policies: the first follows a simple, straightforward strategy, and the second uses Q-learning to learn a policy based on the perceptual state of the system. As…
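The second approach can be illustrated with a minimal tabular Q-learning sketch. The state encoding, the attentional actions, the reward, and the hyperparameters below are hypothetical stand-ins, since the abstract does not specify them:

```python
import random

# Minimal tabular Q-learning sketch for an attentional policy.
# The actions, states, reward, and hyperparameters are illustrative
# assumptions; the abstract does not specify the paper's choices.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
actions = ["attend_vision", "attend_touch"]  # hypothetical attentional actions
Q = {}  # (state, action) -> value, defaulting to 0.0

def q(state, action):
    return Q.get((state, action), 0.0)

def choose_action(state):
    # Epsilon-greedy selection over the current perceptual state.
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: q(state, a))

def update(state, action, reward, next_state):
    # Standard one-step Q-learning backup.
    best_next = max(q(next_state, a) for a in actions)
    Q[(state, action)] = q(state, action) + ALPHA * (
        reward + GAMMA * best_next - q(state, action)
    )
```

In the paper's setting the "state" would summarize the multi-modal perceptual input and the reward would score attentional success; here they are left abstract.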
This work describes the architecture of an integrated multi-modal sensory (vision and touch) computational system. We propose an approach based on robotics control theory, motivated by biology and developmental psychology, to integrate haptic and visual information processing. We show some results obtained in simulation and…
We propose a new approach to reducing and abstracting visual data for robotics applications. Basically, a moving fovea combined with a multi-resolution representation is built from a pair of input images given by a stereo head, reducing the amount of information in the original images by hundreds of times. With this new theoretical approach we…
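The moving-fovea idea can be sketched as follows: crop progressively larger windows around a fixation point and downsample each to the same small resolution, so fine detail is kept only near the fovea. The window sizes, number of levels, and output size below are assumptions, not the paper's parameters:

```python
# Illustrative moving-fovea multi-resolution pyramid: crop progressively
# larger windows around a fixation point (cx, cy) and downsample each to
# a fixed small size. Window sizes and levels are assumptions, not the
# paper's actual parameters.

def downsample(img, factor):
    # Naive block-average downsampling of a 2D list of grayscale pixels.
    h, w = len(img), len(img[0])
    out = []
    for i in range(0, h - factor + 1, factor):
        row = []
        for j in range(0, w - factor + 1, factor):
            block = [img[i + di][j + dj]
                     for di in range(factor) for dj in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

def fovea_levels(img, cx, cy, sizes=(16, 32, 64), out_size=16):
    # Each level covers a wider field of view at a coarser resolution.
    h, w = len(img), len(img[0])
    levels = []
    for s in sizes:
        half = s // 2
        x0, y0 = max(cx - half, 0), max(cy - half, 0)
        x1, y1 = min(x0 + s, w), min(y0 + s, h)
        crop = [row[x0:x1] for row in img[y0:y1]]
        levels.append(downsample(crop, s // out_size))
    return levels
```

With three 16x16 levels, a 64x64 region is summarized by 768 values instead of 4096; applied to full-resolution stereo frames, the same scheme yields the large reduction factors the abstract mentions.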
In this work, vision and touch (artificial) senses are integrated in a cooperative active system. Multi-modal sensory information acquired on-line is used by a robotic agent to perform real-time tasks involving categorization of objects. The proposed visual-touch system is able to foveate (verge) the eyes onto an object, to move the arms to…