Corpus ID: 12236542

To Care or Not to Care: Analyzing the Caregiver in a Computational Gaze Following Framework

@inproceedings{Teuscher2004ToCO,
  title={To Care or Not to Care: Analyzing the Caregiver in a Computational Gaze Following Framework},
  author={Christof Teuscher and Jochen Triesch},
  year={2004}
}
We first present a computational framework for the emergence of gaze following that is based on a generic basic set of mechanisms. Whereas much attention has so far been focused on the study of the infant’s behavior, we systematically analyze the caregiver and show that the caregiver plays a crucial role in the development of gaze following in our model, especially for virtual infants with “developmental disorders”. We first create two nearly optimal infant parameter sets by means of an evolutionary…
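
The abstract's truncated final sentence points to an evolutionary optimization of infant parameter sets. As a loose illustration only (the paper's actual parameters, operators, and fitness measure are not reproduced here, and every name below is an assumption), a generic truncation-selection evolution strategy over such a parameter vector could look like this sketch:

    import random

    # Illustrative sketch only -- not the authors' code. It evolves a
    # hypothetical "infant" parameter vector against a stand-in fitness;
    # in the actual model, fitness would come from running the
    # infant-caregiver simulation and measuring emergent gaze following.

    N_PARAMS = 8          # length of the infant parameter set (assumed)
    POP_SIZE = 50
    GENERATIONS = 100
    MUTATION_STD = 0.1

    def fitness(params):
        # Placeholder objective; a real run would score the gaze-following
        # performance the simulation produces for these parameters.
        return -sum((p - 0.5) ** 2 for p in params)

    def mutate(params):
        # Gaussian perturbation, clipped to an assumed [0, 1] range.
        return [min(1.0, max(0.0, p + random.gauss(0.0, MUTATION_STD)))
                for p in params]

    population = [[random.random() for _ in range(N_PARAMS)]
                  for _ in range(POP_SIZE)]

    for generation in range(GENERATIONS):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[:POP_SIZE // 5]          # truncation selection
        offspring = [mutate(random.choice(parents))
                     for _ in range(POP_SIZE - len(parents))]
        population = parents + offspring

    best = max(population, key=fitness)
    print("best infant parameter set:", best)

Under these assumptions, running two such searches with different random seeds would yield the "two nearly optimal infant parameter sets" the abstract mentions.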

Citations

Gaze following: why (not) learn it?

TLDR
A computational model of the emergence of gaze following skills in infant-caregiver interactions is proposed and it is demonstrated that a specific Basic Set of structures and mechanisms is sufficient for gaze following to emerge.

The emergence of gaze following: Gaze following: why (not) learn it?

We propose a computational model of the emergence of gaze following skills in infant-caregiver interactions. The model is based on the idea that infants learn that monitoring their caregiver's…

Modeling the emergence of gaze following

TLDR
Recent progress in developing computational models that explain why and how human infants learn gaze following is discussed; these models are useful stepping stones toward models of higher-level social and cognitive skills.

Developmental Model of Joint Attention by Utilizing Contingency in Interaction

TLDR
This study aims at building a robot that acquires various forms of joint attentional behavior in an infant-like manner, in order to provide a new understanding of the developmental process of joint attention and to realize the function of joint attention in a robot.

Learning gaze following in space: a computational model

Following another person’s gaze in order to achieve joint attention is an important skill in human social interactions. This paper analyzes the gaze following problem and proposes a learning-based…

Acquisition of joint attention through natural interaction utilizing motion cues

TLDR
Experimental results show that gaze shifts utilizing motion cues enable a robot to synchronize its own motion with human motion and to learn joint attention efficiently, in about 20 minutes.

Multimodal Joint Attention Based on Mutual Exclusivity Principle

TLDR
This paper shows that the proposed method enables mutually facilitative learning of a gaze-following mapping and a label-to-object mapping, by which the learner performs multimodal joint attention with its caregiver.

A Virtual Reality Platform for Modeling Cognitive Development

TLDR
A virtual reality platform for developing and evaluating embodied models of cognitive development is presented, and its current use for constructing an embodied model of the emergence of gaze following in infant-caregiver interactions is described.

Multimodal joint attention through cross facilitative learning based on μX principle

TLDR
The mutual exclusivity selection principle (μX principle) is proposed for learning multimodal mappings: selecting the more mutually exclusive output yields experiences that make underdeveloped complementary mappings less ambiguous.

Chapter 16: Origins of shared attention in human infants

Homo sapiens possess a unique behavioural system for social action and response, namely, language. Language permits action at a distance by transmitting messages with specific meanings from one…

References

Showing 1-10 of 61 references

Combining embodied models and empirical research for understanding the development of shared attention

The capacity for shared attention is a cornerstone of human social intelligence. We propose that the development of shared attention depends on a proper interaction of motivational biases and…

Infant-like Social Interactions between a Robot and a Human Caregiver

TLDR
This paper presents a mechanism by which an autonomous robot regulates the intensity of its social interactions with a human, enabling the robot to react appropriately to both social and non-social stimuli while maintaining a suitable interaction intensity.

The capacity for joint visual attention in the infant

TLDR
The ability of the infant to respond successfully to such signals allows the mother to isolate and highlight a much wider range of environmental features than if the infant ignores her attention-directing efforts.

Learning gaze following in space: a computational model

Following another person’s gaze in order to achieve joint attention is an important skill in human social interactions. This paper analyzes the gaze following problem and proposes a learning-based…

A constructive model for the development of joint attention

TLDR
The experimental results show that the proposed model makes the robot reproduce the developmental process of infants' joint attention, which could be one of the models to explain how infants develop the ability of joint attention.

Learning of Joint Visual Attention by Reinforcement Learning

In this paper, we propose a neural network model of joint visual attention learning, which plays an important role in infant development, and we discuss previous studies of experimental psychology on…

The eyes have it: the neuroethology, function and evolution of social gaze

N. Emery, Neuroscience & Biobehavioral Reviews, 2000

Joint attention: its origins and role in development

TLDR
This chapter discusses the development of Joint Attention in Premature Low Birth Weight Infants: Effects of Early Medical Complications and Maternal Attention-Directing Behaviors, and the role of affect and culture in this development.

Joint attention and lexical acquisition style

Recent research has documented systematic individual differences in early lexical development. The current study investigated the relationship of these differences to differences in the way mothers…

A multimodal learning interface for grounding spoken language in sensory perceptions

TLDR
A multimodal interface that learns to associate spoken language with perceptual features by being situated in users' everyday environments and sharing user-centric multisensory information.
...