Publications
A scalable active framework for region annotation in 3D shape collections
TLDR: We propose a novel active learning method capable of enriching massive geometric datasets with accurate semantic region annotations.
BodyNet: Volumetric Inference of 3D Human Body Shapes
TLDR: BodyNet is an end-to-end trainable network that benefits from (i) a volumetric 3D loss, (ii) a multi-view re-projection loss, and (iii) intermediate supervision.
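The three supervision signals named in the summary are the core of the method. As a rough illustration only, the sketch below shows one way such terms could be combined into a single training loss; the loss weights, tensor shapes, the orthographic max-projection, and the use of MSE for the intermediate heads are assumptions for illustration, not BodyNet's released implementation.

import torch
import torch.nn.functional as F

def project_silhouettes(occupancy_probs):
    # Orthographic "projection" of a voxel occupancy grid (B, D, H, W) by
    # max-pooling along one axis per view; an illustrative stand-in, not the
    # paper's exact operator.
    front = occupancy_probs.max(dim=1).values   # (B, H, W)
    side = occupancy_probs.max(dim=3).values    # (B, D, H)
    return [front, side]

def bodynet_style_loss(pred_logits, gt_voxels, gt_silhouettes,
                       intermediate_preds, intermediate_gts,
                       w_vol=1.0, w_reproj=0.1, w_inter=0.5):
    # (i) Volumetric 3D loss: per-voxel binary cross-entropy on occupancy.
    vol = F.binary_cross_entropy_with_logits(pred_logits, gt_voxels)

    # (ii) Multi-view re-projection loss on silhouettes of the predicted volume.
    probs = torch.sigmoid(pred_logits)
    views = project_silhouettes(probs)
    reproj = sum(F.binary_cross_entropy(v, g)
                 for v, g in zip(views, gt_silhouettes)) / len(views)

    # (iii) Intermediate supervision on auxiliary heads (e.g. 2D pose or
    # segmentation predictions produced partway through the network).
    inter = sum(F.mse_loss(p, g)
                for p, g in zip(intermediate_preds, intermediate_gts)) / len(intermediate_preds)

    return w_vol * vol + w_reproj * reproj + w_inter * inter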
Transformation-Grounded Image Generation Network for Novel 3D View Synthesis
TLDR: We present a transformation-grounded image generation network for novel 3D view synthesis from a single image.
FreiHAND: A Dataset for Markerless Capture of Hand Pose and Shape From Single RGB Images
TLDR: We introduce the first large-scale, multi-view hand dataset that is accompanied by both 3D hand pose and shape annotations.
DISN: Deep Implicit Surface Network for High-quality Single-view 3D Reconstruction
TLDR: We present DISN, a Deep Implicit Surface Network that generates a high-quality, detail-rich 3D mesh from a 2D image by predicting the underlying signed distance field.
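As a rough illustration of the recipe the summary describes (predict a signed distance field conditioned on image features, then extract a mesh at its zero level set), here is a minimal PyTorch sketch. The tiny MLP decoder, grid resolution, and use of scikit-image's marching cubes are assumptions, not DISN's architecture or released code.

import torch
import torch.nn as nn
from skimage import measure

class SDFDecoder(nn.Module):
    # Toy implicit decoder: (3D query point, global image feature) -> signed distance.
    def __init__(self, feat_dim=256, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, points, image_feat):
        # points: (N, 3); image_feat: (feat_dim,), broadcast to every query point.
        feat = image_feat.expand(points.shape[0], -1)
        return self.net(torch.cat([points, feat], dim=-1)).squeeze(-1)

def extract_mesh(decoder, image_feat, resolution=64, bound=1.0):
    # Evaluate the predicted SDF on a dense grid and mesh its zero level set.
    lin = torch.linspace(-bound, bound, resolution)
    grid = torch.stack(torch.meshgrid(lin, lin, lin, indexing="ij"), dim=-1).reshape(-1, 3)
    with torch.no_grad():
        sdf = decoder(grid, image_feat).reshape(resolution, resolution, resolution)
    verts, faces, _, _ = measure.marching_cubes(sdf.numpy(), level=0.0)
    return verts, faces

# With a trained decoder (whose SDF actually changes sign inside the grid):
#   verts, faces = extract_mesh(decoder, image_feat)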
PlaneNet: Piece-Wise Planar Reconstruction from a Single RGB Image
TLDR: This paper proposes a deep neural network (DNN) for piece-wise planar depthmap reconstruction from a single RGB image.
Dense Human Body Correspondences Using Convolutional Networks
TLDR: We use a deep convolutional neural network to train a feature descriptor on depth map pixels, but crucially, rather than training the network to solve the shape correspondence problem directly, we train it to solve a body region classification problem.
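The summary states the key idea: supervise a per-pixel descriptor through a proxy body-region classification task rather than through correspondence directly. The sketch below illustrates that setup under assumed details (a toy backbone, 500 regions, cosine nearest-neighbor matching at test time); it is not the paper's network.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelDescriptorNet(nn.Module):
    # Per-pixel descriptors for depth maps, trained via region classification;
    # the classifier head is discarded at test time.
    def __init__(self, feat_dim=16, num_regions=500):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, padding=1),
        )
        self.classifier = nn.Conv2d(feat_dim, num_regions, 1)

    def forward(self, depth):
        feat = self.backbone(depth)      # (B, feat_dim, H, W) descriptors
        logits = self.classifier(feat)   # (B, num_regions, H, W) region scores
        return feat, logits

def training_step(net, depth, region_labels):
    # Train only on the proxy task: classify each pixel into a body region.
    _, logits = net(depth)
    return F.cross_entropy(logits, region_labels)

def correspondences(net, depth_a, depth_b):
    # At test time, match pixels of two depth maps by nearest-neighbor descriptor.
    with torch.no_grad():
        fa, _ = net(depth_a)             # (1, C, H, W)
        fb, _ = net(depth_b)
    fa = F.normalize(fa.flatten(2).squeeze(0).t(), dim=1)   # (H*W, C)
    fb = F.normalize(fb.flatten(2).squeeze(0).t(), dim=1)
    sim = fa @ fb.t()                    # cosine similarity between all pixel pairs
    return sim.argmax(dim=1)             # best match in image B for each pixel of A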
Learning Local Shape Descriptors from Part Correspondences with Multiview Convolutional Networks
TLDR: We present a new local descriptor for 3D shapes, directly applicable to a wide range of shape analysis problems such as point correspondence, semantic segmentation, affordance prediction, and shape-to-scan matching.
3D-PRNN: Generating Shape Primitives with Recurrent Neural Networks
TLDR: The success of various applications, including robotics, digital content creation, and visualization, demands a structured and abstract representation of the 3D world from limited sensor data.
Material Editing Using a Physically Based Rendering Network
TLDR: We present an end-to-end network architecture for image-based material editing that replicates the image formation process in a physically based rendering layer.
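A "physically based rendering layer" here means a differentiable image-formation step inside the network, so an image-space loss can back-propagate into the predicted material, shape, and lighting estimates. The sketch below is a generic Blinn-Phong-style shading layer standing in for such a component; the shading model, default viewing direction, and tensor shapes are assumptions, not the paper's renderer.

import torch
import torch.nn.functional as F

def shading_layer(albedo, normals, light_dir, light_color,
                  specular=0.5, shininess=32.0, view_dir=None):
    # Differentiable shading: maps material/shape/light estimates back to an image.
    # albedo:    (B, 3, H, W)   normals:     (B, 3, H, W)
    # light_dir: (B, 3)         light_color: (B, 3)
    n = F.normalize(normals, dim=1)
    l = F.normalize(light_dir, dim=1)[:, :, None, None]       # (B, 3, 1, 1)
    c = light_color[:, :, None, None]

    # Diffuse (Lambertian) term.
    ndotl = (n * l).sum(dim=1, keepdim=True).clamp(min=0.0)   # (B, 1, H, W)
    diffuse = albedo * c * ndotl

    # Specular term, with a fixed viewing direction unless one is given.
    if view_dir is None:
        view_dir = torch.tensor([0.0, 0.0, 1.0]).expand_as(light_dir)
    v = F.normalize(view_dir, dim=1)[:, :, None, None]
    h = F.normalize(l + v, dim=1)                              # half vector
    spec = specular * c * (n * h).sum(dim=1, keepdim=True).clamp(min=0.0) ** shininess

    return (diffuse + spec).clamp(0.0, 1.0)

# Because every operation above is differentiable, gradients flow from an
# image-space reconstruction loss back into the networks that predicted
# albedo, normals, and lighting.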