The robot in the mirror

@article{Steels2008TheRI,
  title={The robot in the mirror},
  author={Luc L. Steels and Michael Spranger},
  journal={Connection Science},
  year={2008},
  volume={20},
  pages={337--358}
}
Humans maintain a body image of themselves, which plays a central role in controlling bodily movement, planning action, recognising and naming actions performed by others, and requesting or executing commands. The agents start without any prior inventory of names, without categories for visually recognising the body movements of others, and without knowing the relation between visual images of motor behaviours carried out by others and their own motor behaviours.

Emergent mirror systems for body language

This chapter investigates how a vocabulary for talking about body actions can emerge in a population of grounded autonomous agents instantiated as humanoid robots. The agents play a Posture Game
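The population-level dynamic described above can be sketched as a minimal naming game. This is an illustrative assumption, not the authors' actual implementation: the posture set, the word-invention scheme, and the adopt-on-failure alignment strategy are all hypothetical simplifications.

```python
import random

# Hypothetical posture categories standing in for grounded body actions
POSTURES = ["sit", "stand", "lie"]

class Agent:
    def __init__(self):
        # lexicon maps a posture category to this agent's preferred word
        self.lexicon = {}

    def word_for(self, posture):
        # invent a fresh random word if no name is known yet
        if posture not in self.lexicon:
            self.lexicon[posture] = f"w{random.randrange(10_000)}"
        return self.lexicon[posture]

def play_game(speaker, hearer):
    posture = random.choice(POSTURES)
    word = speaker.word_for(posture)
    if hearer.lexicon.get(posture) == word:
        return True                      # communicative success
    hearer.lexicon[posture] = word       # alignment: hearer adopts the word
    return False

random.seed(0)
agents = [Agent() for _ in range(10)]
successes = [play_game(*random.sample(agents, 2)) for _ in range(2000)]

# As lexicons align, the success rate in late games approaches 1.0
late = sum(successes[-200:]) / 200
print(f"success rate in last 200 games: {late:.2f}")
```

Even this stripped-down dynamic shows the qualitative result the experiments report: competing invented words die out and the population converges on a shared vocabulary, with communicative success rising over repeated games.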

Can Body Language Shape Body Image?

If the robot has the capacity to ‘imagine’ the behavior of its own body through self-simulation, it is better able to guess which action corresponds to a visual image produced by another robot, and thus to guess the meaning of an unknown word, leading to a significant speed-up in how individual agents coordinate visual categories, motor behaviors and language.

Learning by seeing - associative learning of visual features through mental simulation of observed action

This article presents how an internal body model serving motor control tasks can be recruited for learning to recognize movements performed by another agent and shows that the mapping can be bootstrapped by the observing agent from the sequence of visual input features.

The Semantics of SIT, STAND, and LIE Embodied in Robots

Michael Spranger (Sony Computer Science Laboratory Paris) and Martin Loetzsch

Emergent Action Language on Real Robots

This chapter describes experiments exploring how these competences can originate, be carried out, and be acquired by real robots, using evolutionary language games and a whole-systems approach.

Robot in the Mirror: Toward an Embodied Computational Model of Mirror Self-Recognition

The core of the technical contribution is learning the appearance representation and visual novelty detection by means of learning the generative model of the face with deep auto-encoders and exploiting the prediction error.

Grounded Internal Body Models for Communication: Integration of Sensory and Motor Spaces for Mediating Conceptualization

The article introduces the biologically inspired neural network approach which can subserve these different functions in different contexts, and how this internal model can be recruited in order to mediate between different sensory domains.

A model for production, perception, and acquisition of actions in face-to-face communication

A model is proposed that elucidates the underlying biological mechanisms of action production, action perception, and action acquisition in all domains of face-to-face communication and can be used as theoretical framework for empirical analysis or simulation with embodied conversational agents, and thus for advanced human–computer interaction technologies.

Modeling the Formation of Language: Embodied Experiments

L. Steels, in Evolution of Communication and Language in Embodied Agents, 2010.

This chapter gives an overview of different experiments that have been performed to demonstrate how a symbolic communication system, including its underlying ontology, can arise in situated embodied

Learning Words by Imitating

This chapter proposes a single imitation-learning algorithm capable of simultaneously learning linguistic as well as nonlinguistic tasks, without demonstrations being labeled. A human demonstrator

References


Can Body Language Shape Body Image?

If the robot has the capacity to ‘imagine’ the behavior of its own body through self-simulation, it is better able to guess which action corresponds to a visual image produced by another robot, and thus to guess the meaning of an unknown word, leading to a significant speed-up in how individual agents coordinate visual categories, motor behaviors and language.

Neural Simulation of Action: A Unifying Mechanism for Motor Cognition

Develops the hypothesis that the motor system is part of a simulation network activated under a variety of conditions in relation to action, whether self-intended or observed in other individuals.

Distributed, predictive perception of actions: a biologically inspired robotics architecture for imitation and learning

Describes a cognitive architecture for action recognition and imitation that uses the imitator's motor systems in a dual role, both for generating actions and for understanding actions performed by others.

Language within our grasp

Premotor cortex and the recognition of motor actions.

Imitation: a means to enhance learning of a synthetic protolanguage in autonomous robots

The sharing of a similar perceptual context between imitator and imitatee creates a meaningful social context within which language, that is, a common means of symbolic communication, can develop.

How the body shapes the way we think - a new view on intelligence

In How the Body Shapes the Way We Think, Rolf Pfeifer and Josh Bongard demonstrate that thought is not independent of the body but is tightly constrained, and at the same time enabled, by it.

Perspective alignment in spatial language

It is shown in a series of robotic experiments which cognitive mechanisms are necessary and sufficient to achieve successful spatial language and why and how perspective alignment can take place, either implicitly or based on explicit marking.

Coordinating perceptually grounded categories through language: a case study for colour

A number of models are proposed to examine through which mechanisms a population of autonomous agents could arrive at a repertoire of perceptually grounded categories that is sufficiently shared to allow successful communication.

Shakey the Robot

From 1960 through 1972, the Artificial Intelligence Center at SRI conducted research on a mobile robot system nicknamed "Shakey." Endowed with a limited ability to perceive and model its