Detecting cooperative partners in situations that have financial stakes is crucial to successful social exchange. The authors tested whether humans are sensitive to subtle facial dynamics of counterparts when deciding whether to trust and cooperate. Participants played a 2-person trust game before which the facial dynamics of the other player were …
In this paper, a technique is presented for learning audiovisual correlations in non-speech-related articulations such as laughs, cries, sneezes, and yawns, such that accurate new visual motions may be created given just audio. Our underlying model is data-driven and provides reliable performance for voices the system is familiar with as well as for new voices.
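A minimal sketch of the general idea, in which per-frame audio features are regressed onto visual motion parameters; the array names, dimensions, and the ridge-regression model are illustrative stand-ins, not the paper's actual data-driven model:

```python
import numpy as np

# Minimal sketch: learn a linear audio-to-visual mapping from paired data,
# then drive new visual motion from audio alone. All data here is synthetic.

rng = np.random.default_rng(0)

# Toy paired training data: per-frame audio features (e.g. spectral
# coefficients) and visual motion parameters (e.g. landmark displacements).
n_frames, n_audio, n_visual = 500, 13, 6
A_train = rng.normal(size=(n_frames, n_audio))          # audio features
W_true = rng.normal(size=(n_audio, n_visual))
V_train = A_train @ W_true + 0.1 * rng.normal(size=(n_frames, n_visual))

# Closed-form ridge regression V ~ A W, with a small penalty for stability.
lam = 1e-2
W = np.linalg.solve(A_train.T @ A_train + lam * np.eye(n_audio),
                    A_train.T @ V_train)

# Given a new audio clip (familiar or unseen voice), predict visual motion
# frame by frame.
A_new = rng.normal(size=(100, n_audio))
V_pred = A_new @ W
print(V_pred.shape)   # (100, 6) predicted motion parameters per frame
```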
We introduce a video-based approach for producing water surface models. Recent advances in this field output high-quality results but require dedicated capturing devices and only work in limited conditions. In contrast, our method achieves a good trade-off between visual quality and production cost: it automatically produces a visually plausible …
Although the human face is commonly used as a physiological biometric, very little work has been done to exploit the idiosyncrasies of facial motions for person identification. In this paper, we investigate the uniqueness and permanence of facial actions to determine whether these can be used as a behavioral biometric. Experiments are carried out using 3-D …
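One way to picture the behavioral-biometric setting is a template-matching sketch like the one below, where each subject is enrolled from several facial-motion feature sequences and a probe is identified by its nearest template; the features, distance measure, and data are all illustrative, not the paper's pipeline:

```python
import numpy as np

# Sketch of facial-motion statistics used as a behavioral biometric:
# enroll each subject with a template built from training sequences,
# then identify a probe sequence by nearest template.

rng = np.random.default_rng(1)
n_subjects, n_sequences, n_features = 5, 10, 20

# Toy data: each subject has a characteristic facial-motion "signature".
signatures = rng.normal(size=(n_subjects, n_features))

def sample_sequence(subject):
    # One recorded sequence: the subject's signature plus session noise.
    return signatures[subject] + 0.3 * rng.normal(size=n_features)

# Enrollment: average several sequences per subject into a template.
templates = np.stack([
    np.mean([sample_sequence(subj) for _ in range(n_sequences)], axis=0)
    for subj in range(n_subjects)
])

# Identification: assign a probe sequence to the nearest enrolled template.
probe_subject = 3
probe = sample_sequence(probe_subject)
distances = np.linalg.norm(templates - probe, axis=1)
print("true:", probe_subject, "identified:", int(np.argmin(distances)))
```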
This paper introduces a method for reconstructing water from real video footage. Using a single input video, the proposed method produces a more informative reconstruction from a wider range of possible scenes than the current state of the art. The key is the combination of vision algorithms and physical laws. Shape from shading is used to capture the change …
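The shape-from-shading ingredient can be illustrated with a toy Lambertian setup: under an overhead light, image intensity constrains the slope magnitude of the surface at each pixel. The sketch below assumes that simplified model and a synthetic height field; the physical water constraints the paper combines it with are not reproduced:

```python
import numpy as np

# Toy illustration of the shape-from-shading constraint relating image
# intensity to surface slope, assuming a Lambertian surface lit from above.

# Synthesize a small wave-like height field z(x, y).
x, y = np.meshgrid(np.linspace(0, 4 * np.pi, 128),
                   np.linspace(0, 4 * np.pi, 128))
z = 0.2 * np.sin(x) * np.cos(0.5 * y)

# Surface gradients p = dz/dx, q = dz/dy and Lambertian shading for an
# overhead light: I = 1 / sqrt(1 + p^2 + q^2).
p, q = np.gradient(z, axis=1), np.gradient(z, axis=0)
I = 1.0 / np.sqrt(1.0 + p**2 + q**2)

# Inverse direction (shape from shading): intensity fixes the slope
# magnitude sqrt(p^2 + q^2) at every pixel; regularizers or physics are
# needed to resolve the remaining ambiguity in slope direction.
slope_from_image = np.sqrt(np.maximum(1.0 / I**2 - 1.0, 0.0))
slope_true = np.sqrt(p**2 + q**2)
print("max recovery error:", float(np.max(np.abs(slope_from_image - slope_true))))
```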
We present a system capable of producing video-realistic video of a speaker given only audio. The audio input signal requires no phonetic labelling and is speaker independent. The system requires only a small training set of video to achieve convincing, realistic facial synthesis. The system learns the natural mouth and face dynamics of a speaker to allow …
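One reading of "learns the natural mouth and face dynamics" is that noisy per-frame predictions are smoothed by a temporal model fitted to training trajectories. The sketch below uses a simple first-order autoregressive blend purely as an illustration; the coefficient, data, and blending weight are assumptions, not the paper's model:

```python
import numpy as np

# Sketch: fit a per-parameter AR(1) model x_t ~ a * x_{t-1} to training
# mouth trajectories, then use it to smooth noisy per-frame predictions.

rng = np.random.default_rng(2)
n_frames, n_params = 200, 4
t = np.arange(n_frames)[:, None]

# Training mouth-parameter trajectories (stand-in for tracked video frames).
train = np.sin(0.1 * t + np.arange(n_params)) \
        + 0.05 * rng.normal(size=(n_frames, n_params))

# Learn one AR(1) coefficient per parameter by least squares.
a = np.sum(train[1:] * train[:-1], axis=0) / np.sum(train[:-1] ** 2, axis=0)

# Noisy per-frame predictions for a new utterance (stand-in for the
# audio-driven model's raw output).
pred = np.sin(0.1 * t + np.arange(n_params)) \
       + 0.3 * rng.normal(size=(n_frames, n_params))

# Blend each frame with the dynamics prediction from the previous frame.
alpha = 0.6  # trust placed in the raw per-frame prediction
smoothed = pred.copy()
for i in range(1, n_frames):
    smoothed[i] = alpha * pred[i] + (1 - alpha) * a * smoothed[i - 1]

print("raw error:", float(np.mean((pred - train) ** 2)),
      "smoothed error:", float(np.mean((smoothed - train) ** 2)))
```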
We present an interactive, robust, and high-quality method for fast shadow removal. To perform detection, we use an on-the-fly learning approach guided by two rough user inputs marking the shadow and the lit pixels. From these we derive a fusion image that magnifies the intensity change across the shadow boundary caused by the illumination variation. After detection, we …
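A minimal sketch of on-the-fly detection from the two rough user inputs: pixels under the shadow scribble and pixels under the lit scribble fit a tiny per-image color model, and every pixel is then labelled by likelihood. The diagonal-Gaussian model and the synthetic image below are illustrative choices, not the paper's learner:

```python
import numpy as np

# Sketch: train a per-image color model on the fly from two user scribbles
# (shadow vs. lit) and classify every pixel by likelihood ratio.

rng = np.random.default_rng(3)

# Toy image: a dark (shadowed) left half and a bright (lit) right half.
img = np.concatenate([0.3 + 0.05 * rng.normal(size=(64, 32, 3)),
                      0.8 + 0.05 * rng.normal(size=(64, 32, 3))], axis=1)

# Rough user scribbles: a few pixels from each region.
shadow_samples = img[10:20, 5:10].reshape(-1, 3)
lit_samples = img[10:20, 50:55].reshape(-1, 3)

def log_gauss(x, samples):
    """Log-likelihood under a diagonal Gaussian fitted to the scribble."""
    mu, var = samples.mean(axis=0), samples.var(axis=0) + 1e-6
    return -0.5 * np.sum((x - mu) ** 2 / var + np.log(var), axis=-1)

pixels = img.reshape(-1, 3)
shadow_mask = (log_gauss(pixels, shadow_samples) >
               log_gauss(pixels, lit_samples)).reshape(img.shape[:2])
print("shadow fraction:", float(shadow_mask.mean()))  # ~0.5 for this toy image
```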
In this paper, we investigate the problem of integrating the complementary audio and visual modalities for speech separation. Rather than using the independence criteria adopted in most blind source separation (BSS) systems, we use visual features from the video signal as additional information to optimize the unmixing matrix. We achieve this by using a …
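A minimal sketch of using a visual cue in place of an independence criterion: a visual feature that tracks the target speaker's energy is used to pick the unmixing matrix for a two-channel instantaneous mixture. The grid search over a whitened rotation angle and all signals below are illustrative, not the paper's optimization:

```python
import numpy as np

# Sketch: whiten two mixed signals, then choose the remaining unmixing
# rotation so that the first output's envelope best matches a visual
# feature (e.g. mouth opening) correlated with the target speaker.

rng = np.random.default_rng(4)
n = 8000
t = np.arange(n)

# Two toy sources: an amplitude-modulated "speech" signal and interference.
envelope = 0.5 + 0.5 * np.abs(np.sin(2 * np.pi * t / 800))
s1 = envelope * rng.normal(size=n)           # target, modulated like speech
s2 = rng.normal(size=n)                      # interfering source
S = np.stack([s1, s2])

# Instantaneous mixing, then whitening of the observed mixtures.
A = np.array([[1.0, 0.6], [0.5, 1.0]])
X = A @ S
Xw = np.linalg.inv(np.linalg.cholesky(np.cov(X))) @ X

# Visual feature: a noisy version of the target's envelope from the video.
visual = envelope + 0.1 * rng.normal(size=n)

def envelope_of(sig, win=200):
    return np.convolve(np.abs(sig), np.ones(win) / win, mode="same")

# Optimize the rotation of the unmixing matrix so that the first output's
# envelope correlates best with the visual feature.
best = max(
    (np.corrcoef(envelope_of((np.array([[np.cos(a), -np.sin(a)],
                                         [np.sin(a),  np.cos(a)]]) @ Xw)[0]),
                 visual)[0, 1], a)
    for a in np.linspace(0, np.pi, 180)
)
print("best correlation %.2f at angle %.2f rad" % best)
```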