Georg Layher

The analysis of affective or communicational states in human-human and human-computer interaction (HCI) using automatic machine analysis and learning approaches often suffers either from the simplicity of the approaches or from attempting overly ambitious steps all at once. In this paper, we propose a generic framework that overcomes many difficulties …
In this contribution we extend existing methods for head pose estimation and investigate the use of local image phase for gaze detection. Moreover, we describe how a small database of face images with ground truth for head pose and gaze direction was acquired. With this database we compare two different computational approaches for extracting the head …
  • U. Weidenbacher, G. Layher, P.-M. Strauss, H. Neumann
  • 2007
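The abstract above mentions local image phase as a cue for gaze detection. As a hedged illustration of that idea (not the authors' implementation), the Python sketch below computes local phase from a quadrature pair of Gabor filters: the even and odd kernels give a complex-valued response whose angle is the phase. Filter size, wavelength, and orientation values are illustrative assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_quadrature_pair(size=21, wavelength=8.0, sigma=4.0, theta=0.0):
    """Even (cosine) and odd (sine) Gabor kernels at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates into the filter orientation.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2))
    even = envelope * np.cos(2.0 * np.pi * xr / wavelength)
    odd = envelope * np.sin(2.0 * np.pi * xr / wavelength)
    return even, odd

def local_phase(image, **kwargs):
    """Local phase = angle of the complex quadrature response at each pixel."""
    even, odd = gabor_quadrature_pair(**kwargs)
    r_even = convolve2d(image, even, mode="same", boundary="symm")
    r_odd = convolve2d(image, odd, mode="same", boundary="symm")
    return np.arctan2(r_odd, r_even)  # phase in [-pi, pi]

# Example: phase map of a synthetic patch with a vertically oriented filter.
patch = np.random.rand(64, 64)
phase_map = local_phase(patch, wavelength=8.0, theta=np.pi / 2)
```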
Within the past decade, many computational approaches have been developed to estimate gaze directions of persons based on their facial appearance. Most researchers used common face datasets with only a limited representation of different head poses to train and verify their algorithms. Moreover, in most datasets, faces have neither a defined gaze direction, …
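As context for appearance-based gaze estimation, a minimal baseline (assumed here for illustration, not one of the approaches evaluated in the paper) regresses gaze angles directly from normalized face crops. The data in this sketch are synthetic placeholders; a real experiment would use crops and calibrated gaze labels from a dataset such as the one described above.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Hypothetical data: N normalized 32x32 grayscale face crops with
# ground-truth (yaw, pitch) gaze angles in degrees.
rng = np.random.default_rng(0)
N = 500
faces = rng.random((N, 32, 32))
gaze_angles = rng.uniform(-30.0, 30.0, size=(N, 2))  # (yaw, pitch)

X = faces.reshape(N, -1)  # flatten each crop into a feature vector
X_train, X_test, y_train, y_test = train_test_split(
    X, gaze_angles, test_size=0.2, random_state=0)

model = Ridge(alpha=1.0)  # simple linear baseline regressor
model.fit(X_train, y_train)
pred = model.predict(X_test)
print("MAE (yaw, pitch) in degrees:", np.abs(pred - y_test).mean(axis=0))
```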
We investigate the influence of audiovisual features on the perception of speaking style and performance of politicians, utilizing a large publicly available dataset of German parliament recordings. We conduct a human perception experiment involving eye-tracker data to evaluate human ratings as well as behavior in two separate conditions, i.e., audiovisual …
The detection and categorization of animate motions is a crucial task underlying social interaction and perceptual decision-making. Neural representations of perceived animate objects are built in the primate cortical region STS, which is a region of convergent input from intermediate-level form and motion representations. Populations of STS cells exist …
How do we manage to step into another person's shoes and eventually derive the intention behind observed behavior? We propose a connectionist neural network (NN) model that learns, in a self-supervised manner, a prerequisite of this social capability: it adapts its internal perspective in accordance with observed biological motion. The model first learns predictive …
It appears that the mirror neuron system plays a crucial role when learning by imitation. However, it remains unclear how mirror neuron properties develop in the first place. A likely prerequisite for developing mirror neurons may be the capability to transform observed motion into a sufficiently self-centered frame of reference. We propose an artificial …
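The NN model itself is not reproduced here; as a purely geometric illustration of what a transformation into a self-centered frame of reference accomplishes, the sketch below re-expresses observed 2-D joint positions in a body-centered frame anchored at the hip and aligned with the shoulder line. The function name and the choice of anchor joints are assumptions for this example.

```python
import numpy as np

def to_egocentric(joints, hip, left_shoulder, right_shoulder):
    """Re-express observed 2-D joints in a body-centered frame:
    origin at the hip, x-axis along the shoulder line."""
    axis = right_shoulder - left_shoulder
    angle = np.arctan2(axis[1], axis[0])
    c, s = np.cos(-angle), np.sin(-angle)
    rotation = np.array([[c, -s], [s, c]])
    # Translate to the hip, then rotate into the body frame.
    return (joints - hip) @ rotation.T

# Toy check: a posture rotated by 40 degrees in the image plane maps onto
# the same egocentric coordinates as the unrotated posture.
posture = np.array([[0.0, 0.0], [0.0, 1.0], [-0.5, 1.5], [0.5, 1.5]])
theta = np.deg2rad(40.0)
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
rotated = posture @ R.T
ego = to_egocentric(rotated, hip=rotated[0],
                    left_shoulder=rotated[2], right_shoulder=rotated[3])
assert np.allclose(ego, posture - posture[0])
```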