In recent years, data-driven speech animation approaches have achieved significant success in terms of animation quality. However, automatically evaluating the realism of novel synthesized speech animations remains an important yet unsolved research problem. In this paper, we propose a novel statistical model (called SAQP) to automatically predict …
This paper describes a fully automated framework that generates realistic head motion, eye gaze, and eyelid motion simultaneously from live (or recorded) speech input. Its central idea is to learn separate yet interrelated statistical models for each component (head motion, gaze, or eyelid motion) from a prerecorded facial motion data set: 1) Gaussian …
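The abstract above mentions learning per-component statistical models from a prerecorded motion data set. As a minimal, hypothetical sketch of that idea (the feature layout and data here are invented, not taken from the paper), one could fit a multivariate Gaussian to per-frame head-motion features and sample new frames from it:

```python
import numpy as np

# Hypothetical illustration: fit a multivariate Gaussian to per-frame
# head-motion features (e.g., pitch/yaw/roll velocities) and sample
# new frames from the learned model. All names/data are assumptions.
rng = np.random.default_rng(0)
motion_data = rng.normal(size=(500, 3))      # 500 frames x 3 features

mean = motion_data.mean(axis=0)              # maximum-likelihood mean
cov = np.cov(motion_data, rowvar=False)      # sample covariance

# Draw 10 new head-motion frames from the fitted Gaussian.
samples = rng.multivariate_normal(mean, cov, size=10)
print(samples.shape)                         # (10, 3)
```

A real system would condition such models on speech features rather than sampling unconditionally; this only illustrates the "learn a statistical model per motion component" step.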
Most current facial animation editing techniques are frame-based (i.e., one keyframe is manually edited every several frames), which is inefficient, time-consuming, and prone to editing inconsistency. In this paper, we present a novel facial editing style learning framework that is able to learn a constraint-based Gaussian Process model from a …
Most current facial animation approaches largely focus on the accuracy or efficiency of their algorithms, or on how to optimally utilize pre-collected facial motion data. However, human perception, the ultimate measuring stick for the visual fidelity of synthetic facial animations, has not been effectively exploited in these approaches. In this paper, we present …
… is integrated into a system on chip (SoC) consisting of the core processor, the digital signal processor, and many peripheral controllers. This tight chip integration makes it infeasible to physically isolate and measure the SoC's graphics hardware. Moreover, because smartphone graphics hardware is less capable than that of desktop PCs, the smartphone's …
Lifelike interface agents (e.g., talking avatars) have been increasingly used in human-computer interaction applications. In this work, we quantitatively analyze how human perception is affected by the audio-head motion characteristics of talking avatars. Specifically, we quantify the correlation between perceptual user ratings (obtained via a user study) and …
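Quantifying the correlation between user ratings and a motion characteristic typically reduces to a standard correlation coefficient. A minimal sketch, with entirely invented rating and feature values (the paper's actual features and data are not given here):

```python
import numpy as np

# Hypothetical data: mean perceptual ratings per stimulus, paired with
# one audio-head-motion feature value per stimulus. Values are invented.
ratings = np.array([3.2, 4.1, 2.8, 4.5, 3.9, 2.5])
feature = np.array([0.41, 0.55, 0.33, 0.62, 0.50, 0.30])

# Pearson correlation coefficient between ratings and the feature.
r = np.corrcoef(ratings, feature)[0, 1]
print(r > 0.9)  # these toy values are nearly linearly related
```

In a real study one would also report significance (e.g., a p-value) and consider rank correlation if the rating scale is ordinal.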
In this paper, we experimentally study a new type of multiplayer mobile game for casual gamers by introducing the concept of online, team-based strategy forming via visual-only in-game communication. The game was deployed to 12 users, organized into three teams (four players per team), in order to study the team-based cooperation and competition they can develop …