This paper presents an occupancy-based generative model of stereo and multi-view stereo images. In this model, space is divided into empty and occupied regions. The depth of a pixel is naturally determined from the occupancy as the depth of the first occupied point along its viewing ray. The color of the pixel corresponds to the color of this 3D point. This …
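The depth rule stated in this abstract (depth of a pixel = distance to the first occupied point along its viewing ray) can be sketched with a simple ray march. This is a hypothetical discretization for illustration, not the paper's actual inference procedure; `occupancy`, `ray_depth`, and the step size are assumptions.

```python
import numpy as np

def ray_depth(occupancy, origin, direction, t_max=10.0, step=0.01):
    """Depth of a pixel as the first occupied point along its viewing ray.

    occupancy: callable mapping a 3D point to True (occupied) / False (empty).
    Marches the ray in small steps from the camera origin.
    """
    direction = direction / np.linalg.norm(direction)
    for t in np.arange(step, t_max, step):
        if occupancy(origin + t * direction):
            return t          # depth = distance to first occupied point
    return np.inf             # ray never hits occupied space: background pixel

# Toy occupancy: a unit sphere centered at z = 3 on the optical axis.
occ = lambda p: np.linalg.norm(p - np.array([0.0, 0.0, 3.0])) < 1.0
d = ray_depth(occ, np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]))
# d is close to 2.0: the sphere's near surface lies at z = 2
```

The color of the pixel would then be read off at the 3D point `origin + d * direction`.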
In this paper, we address the problem of synthesizing novel views from a set of input images. State-of-the-art methods, such as the Unstructured Lumigraph, use heuristics to combine information from the original views, often relying on an explicit or implicit approximation of the scene geometry. While the proposed heuristics have been largely …
Live-action stereoscopic content production requires a stereo rig with two cameras precisely matched and aligned. While most deviations from this perfect setup can be corrected either live or in post-production, a difference in the focus distance or focus range between the two cameras will lead to unrecoverable degradations of the stereoscopic footage. In …
Designing and simulating realistic clothing is challenging. Previous methods addressing the capture of clothing from 3D scans have been limited to single garments and simple motions, lack detail, or require specialized texture patterns. Here we address the problem of capturing regular clothing on fully dressed people in motion. People typically wear …
We address the topic of novel view synthesis from a stereoscopic pair of images. Such techniques typically have three stages: the reconstruction of correspondences between the views, the estimation of each view's blending factor in the final view, and the rendering. The state of the art has mainly focused on the correspondence problem, but little work addresses …
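The second and third stages described in this abstract can be sketched as a per-pixel convex combination of the two views, once they have been warped into the target viewpoint via the stage-one correspondences. This is a minimal illustration under that assumption; `blend_views` and the inputs are hypothetical names, not the paper's method.

```python
import numpy as np

def blend_views(warped_left, warped_right, alpha):
    """Render the novel view by blending the two warped input views.

    warped_left / warped_right: H x W x 3 images already warped into the
    target viewpoint (stage 1 is assumed done).
    alpha: H x W per-pixel blending factor of the left view (stage 2).
    """
    alpha = alpha[..., None]  # broadcast the factor over color channels
    return alpha * warped_left + (1.0 - alpha) * warped_right

# Toy 2x2 RGB example: alpha = 1 keeps the left view, alpha = 0 the right.
left = np.zeros((2, 2, 3))
right = np.ones((2, 2, 3))
alpha = np.array([[1.0, 0.0],
                  [0.5, 0.5]])
out = blend_views(left, right, alpha)
```

In practice the blending factor would depend on occlusions and on the target view's position between the input cameras, which is exactly the under-studied stage the abstract points to.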
The production of stereoscopic images requires a stereoscopic rig with two cameras perfectly synchronized and aligned. Most inaccuracies in this setup can be corrected live or in post-production. However, a difference in focus distance or depth of field between the cameras will produce degradations …
Figure 1: Given static 3D scans or 3D scan sequences (in pink), we estimate the naked shape under clothing (beige). Our method obtains accurate results by minimizing an objective function that captures the visible details of the skin, while being robust to clothing. We show several pairs of clothed scan sequences and the estimated body shape underneath. …