Vincent Sitzmann

Figure 1: The performance of deep networks trained for high-level computer vision tasks such as classification degrades under noise, blur, and other imperfections present in raw sensor data. (Left) An image of jelly beans corrupted by noise characteristic of low-light conditions is misclassified as a library by the Inception-v4 classification network. …
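The low-light corruption described in this caption can be approximated with a common Poisson-Gaussian sensor-noise model. The sketch below is an illustrative assumption of such a model, not the exact corruption pipeline used in the figure; the function name and parameters are hypothetical.

```python
import numpy as np

def simulate_low_light(image, photons_per_unit=20.0, read_noise_std=0.02):
    """Corrupt a clean image with a Poisson-Gaussian sensor-noise model.

    image: float array in [0, 1], shape (H, W, C).
    photons_per_unit: full-scale photon count; lower values model darker
        scenes and therefore stronger shot noise.
    read_noise_std: std. dev. of additive Gaussian read noise.
    """
    rng = np.random.default_rng(0)
    # Shot noise: photon arrivals at each pixel are Poisson-distributed.
    photons = rng.poisson(image * photons_per_unit)
    noisy = photons / photons_per_unit
    # Read noise: additive Gaussian noise from the sensor electronics.
    noisy += rng.normal(0.0, read_noise_std, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)
```

Feeding an image corrupted this way to a classifier typically shifts or lowers the confidence of the correct class, which is the failure mode the figure illustrates.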
This article describes the sampling design, survey methodology and findings of a natural resources survey conducted in the Rathbun Lake Watershed in Southern Iowa in 1999–2000. The goal of the survey was to quantify the erosion from all sources on agricultural lands and the ecological health of streams for each of 61 subwatersheds in the area. A total of …
A broad class of problems at the core of computational imaging, sensing, and low-level computer vision reduces to the inverse problem of extracting latent images that follow a prior distribution from measurements taken under a known physical image-formation model. Traditionally, hand-crafted priors along with iterative optimization methods have been used …
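As a concrete instance of the classical approach this abstract refers to, the sketch below solves a linear inverse problem y = Ax + noise with a hand-crafted sparsity prior via ISTA (proximal gradient descent). The linear operator A and the l1 prior are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def ista(A, y, lam=0.1, step=None, n_iters=200):
    """Solve min_x 0.5 * ||A x - y||^2 + lam * ||x||_1 via ISTA.

    A: (m, n) measurement matrix (the known image-formation model).
    y: (m,) observed measurements.
    lam: weight of the hand-crafted sparsity prior.
    """
    if step is None:
        # Step size 1/L, where L = ||A||_2^2 is the Lipschitz constant
        # of the data-fidelity gradient.
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        # Gradient step on the data-fidelity term 0.5 * ||A x - y||^2.
        x = x - step * (A.T @ (A @ x - y))
        # Proximal step: soft-thresholding enforces the l1 prior.
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)
    return x
```

Replacing the hand-crafted prior or the proximal step with a learned component is the direction such work typically explores beyond this classical baseline.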
Traditional cinematography has relied for over a century on a well-established set of editing rules, called continuity editing, to create a sense of situational continuity. Despite massive changes in visual content across cuts, viewers in general experience no trouble perceiving the discontinuous flow of information as a coherent set of events. However, …
Understanding how humans explore virtual environments is crucial for many applications, such as developing compression algorithms or designing effective cinematic virtual reality (VR) content, as well as for developing predictive computational models. We have recorded 780 head and gaze trajectories from 86 users exploring omni-directional stereo panoramas …