Viewpoints and Keypoints
TLDR
Characterizes the problem of pose estimation for rigid objects in terms of viewpoint estimation to capture coarse pose and keypoint prediction to capture finer details, and demonstrates that leveraging viewpoint estimates can substantially improve local appearance-based keypoint predictions.
View Synthesis by Appearance Flow
TLDR
Addresses the problem of novel view synthesis: given an input image, synthesizing new images of the same object or scene observed from arbitrary viewpoints. For both objects and scenes, this approach synthesizes novel views of higher perceptual quality than previous CNN-based techniques.
Learning Category-Specific Mesh Reconstruction from Image Collections
TLDR
A learning framework for recovering the 3D shape, camera, and texture of an object from a single image, incorporating texture inference as prediction of an image in a canonical appearance space; semantic keypoints can be easily associated with the predicted shapes.
Learning Shape Abstractions by Assembling Volumetric Primitives
TLDR
A learning framework for abstracting complex shapes by learning to assemble objects from 3D volumetric primitives. The predicted shape representations can be leveraged to obtain a consistent parsing across the instances of a shape collection and to construct an interpretable shape similarity measure.
Multi-view Supervision for Single-view Reconstruction via Differentiable Ray Consistency
TLDR
A differentiable formulation that allows computing gradients of a 3D shape given an observation from an arbitrary view, obtained by reformulating view consistency as a differentiable ray consistency (DRC) term. This formulation can be incorporated in a learning framework to leverage different types of multi-view observations.
Hierarchical Surface Prediction for 3D Object Reconstruction
TLDR
A general framework, hierarchical surface prediction (HSP), that facilitates prediction of high-resolution voxel grids; the high-resolution predictions are shown to be more accurate than low-resolution ones.
Category-specific object reconstruction from a single image
TLDR
An automated pipeline with pixels as inputs and 3D surfaces of various rigid categories as outputs in images of realistic scenes. It can be driven by noisy automatic object segmentations and is complemented with a bottom-up module for recovering high-frequency shape details.
Multi-view Consistency as Supervisory Signal for Learning Shape and Pose Prediction
TLDR
A framework for learning single-view shape and pose prediction without direct supervision for either, demonstrated in a realistic setting beyond the scope of existing techniques.
Canonical Surface Mapping via Geometric Cycle Consistency
TLDR
Explores the task of Canonical Surface Mapping (CSM) and shows that the CSM task (pixel to 3D), when combined with 3D projection (3D to pixel), completes a cycle, thereby allowing one to forgo dense manual supervision.