We present a novel single image deblurring method to estimate spatially non-uniform blur that results from camera shake. We use existing spatially invariant deconvolution methods in a local and robust way to compute initial estimates of the latent image. The camera motion is represented as a Motion Density Function (MDF) which records the fraction of time…
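As a rough illustration of the idea this abstract sketches, the snippet below shows how an MDF over a discretized set of camera poses could be combined into a spatially varying blur by time-weighting homography-warped copies of the latent image. This is a minimal sketch under assumed inputs (the pose list, `mdf` weights, and intrinsics `K` are hypothetical placeholders), not the paper's implementation.

```python
# Minimal sketch: spatially varying blur from a Motion Density Function (MDF).
# Assumes a small, discretized set of camera rotations and an mdf array giving
# the fraction of exposure time spent at each pose (hypothetical inputs).
import numpy as np
import cv2

def blur_from_mdf(latent, poses, mdf, K):
    """Blend homography-warped copies of the latent image, weighted by the MDF.

    latent : HxW(x3) float image
    poses  : list of 3x3 rotation matrices (discretized camera poses)
    mdf    : array of the same length, non-negative, summing to 1
    K      : 3x3 camera intrinsics matrix
    """
    h, w = latent.shape[:2]
    out = np.zeros_like(latent, dtype=np.float64)
    for R, wgt in zip(poses, mdf):
        H = K @ R @ np.linalg.inv(K)                 # homography induced by rotation R
        warped = cv2.warpPerspective(latent, H.astype(np.float64), (w, h))
        out += wgt * warped                          # time-weighted accumulation
    return out
```

Because the blur at each pixel depends on where that pixel moves under each pose, the resulting blur varies across the image, which is the spatially non-uniform behavior the abstract refers to.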
We present a system for producing 3D animations using physical objects (i.e., puppets) as input. Puppeteers can load 3D models of familiar rigid objects, including toys, into our system and use them as puppets for an animation. During a performance, the puppeteer physically manipulates these puppets in front of a Kinect depth sensor. Our system uses a…
We address the problem of super-resolved generation of novel views of a 3D scene with the reference images obtained from cameras in general positions; a problem which has not been tackled before in the context of super resolution and is also of importance to the field of image-based rendering. We formulate the problem as one of estimation of the color at…
We demonstrate a real-time system which infers and tracks the assembly process of a snap-together block model using a Kinect sensor. The inference enables us to build a virtual replica of the model at every step. Tracking enables us to provide context-specific visual feedback on a screen by augmenting the rendered virtual model aligned with the…
We present solutions for enhancing the spatial and/or temporal resolution of videos. Our algorithm targets the emerging consumer-level hybrid cameras that can simultaneously capture video and high-resolution stills. Our technique produces a high spacetime resolution video using the high-resolution stills for rendering and the low-resolution video to guide…
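To make the "stills for rendering, video for guidance" idea concrete, here is a hedged sketch in which motion estimated on the low-resolution video is upsampled and used to warp a nearby high-resolution still to the current frame time. The function and variable names are illustrative, not the paper's API, and the flow method used here (Farnebäck) is only a stand-in.

```python
# Hedged sketch: warp a high-res still toward the current low-res frame using
# flow estimated on the low-resolution video. lr_prev / lr_cur are assumed to
# be 8-bit grayscale frames; still_hr is assumed aligned with lr_prev in time.
import numpy as np
import cv2

def warp_still_with_lowres_flow(still_hr, lr_prev, lr_cur, scale):
    # Flow from the current frame back to the previous one (backward warping).
    flow_lr = cv2.calcOpticalFlowFarneback(
        lr_cur, lr_prev, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = still_hr.shape[:2]
    # Upsample the flow field to the still's resolution and rescale its vectors.
    flow_hr = cv2.resize(flow_lr, (w, h), interpolation=cv2.INTER_LINEAR) * scale
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow_hr[..., 0]).astype(np.float32)
    map_y = (grid_y + flow_hr[..., 1]).astype(np.float32)
    return cv2.remap(still_hr, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```

The design point the sketch tries to convey is that the expensive detail comes from the stills, while the cheap low-resolution video only needs to supply motion.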
We present MotionMontage, a system for recording multiple motion takes of a rigid virtual object and compositing them together into a montage. Our system incorporates a Kinect-based performance capture setup that allows animators to create 3D animations by tracking the motion of a rigid physical object and mapping it in real time onto a virtual object. The…
Interactive Playspaces for Object Assembly and Digital Storytelling. Ankit Gupta. Co-Chairs of the Supervisory Committee: Professor Brian Curless, Computer Science and Engineering, and Dr. Michael Cohen, Microsoft Research. Today we observe a consistent shift toward doing our tasks virtually through machines. This mode of work ensures that users are not tied by…
We propose a new developmental approach to goal-based imitation learning that allows a robot to: (1) learn probabilistic models of actions through self-discovery and experience, (2) utilize these learned models for inferring the goals of human demonstrations, and (3) perform goal-based imitation for human-robot collaboration. Our approach is based on…
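As a rough illustration of step (2), inferring goals from learned action models amounts to Bayesian inference: a posterior over goals given observed behavior. The sketch below assumes a small discrete goal set and per-goal observation likelihoods; the goal labels, probabilities, and function names are made-up placeholders, not the paper's model.

```python
# Hedged sketch: posterior over goals given observed (state, action) symbols,
# assuming per-goal likelihood tables learned beforehand (hypothetical here).
import numpy as np

def infer_goal(observations, goals, likelihood, prior):
    """Return P(goal | observations), treating observations as conditionally independent."""
    log_post = np.array([
        np.log(prior[g]) + sum(np.log(likelihood[g].get(o, 1e-9)) for o in observations)
        for g in goals
    ])
    post = np.exp(log_post - log_post.max())   # normalize in log space for stability
    return dict(zip(goals, post / post.sum()))

# Toy usage with made-up numbers: two candidate goals, three observed steps.
goals = ["stack_blocks", "sort_blocks"]
likelihood = {
    "stack_blocks": {"pick": 0.5, "place_on_top": 0.4, "place_in_bin": 0.1},
    "sort_blocks":  {"pick": 0.5, "place_on_top": 0.1, "place_in_bin": 0.4},
}
prior = {"stack_blocks": 0.5, "sort_blocks": 0.5}
print(infer_goal(["pick", "place_on_top", "place_on_top"], goals, likelihood, prior))
```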