Shape Inpainting Using 3D Generative Adversarial Network and Recurrent Convolutional Networks

@inproceedings{Wang2017ShapeIU,
  title={Shape Inpainting Using 3D Generative Adversarial Network and Recurrent Convolutional Networks},
  author={Weiyue Wang and Qiangui Huang and Suya You and Chao Yang and Ulrich Neumann},
  booktitle={2017 IEEE International Conference on Computer Vision (ICCV)},
  year={2017},
  pages={2317-2325}
}
Recent advances in convolutional neural networks have shown promising results in 3D shape completion. [...] Key Method: The 3D-ED-GAN is a 3D convolutional neural network trained with a generative adversarial paradigm to fill in missing 3D data at low resolution. The LRCN adopts a recurrent neural network architecture to minimize GPU memory usage and incorporates an Encoder-Decoder pair into a Long Short-Term Memory network.
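To make the LRCN idea concrete, here is a minimal NumPy sketch (not the authors' code) of treating a 3D volume as a sequence of 2D slices and carrying a recurrent hidden state across slices, so only one slice's activations need to be resident at a time. All layer sizes, function names, and the use of a plain RNN cell in place of a full LSTM are illustrative assumptions.

```python
import numpy as np

def encode_slice(slice2d, w_enc):
    """Toy 'encoder': flatten a 2D slice and project it to a latent vector."""
    return np.tanh(slice2d.reshape(-1) @ w_enc)

def decode_slice(latent, w_dec, shape):
    """Toy 'decoder': project the latent back to a (higher-resolution) 2D slice."""
    return (latent @ w_dec).reshape(shape)

def lrcn_sketch(volume, hidden_dim=16, out_res=8, seed=0):
    """Process a low-res volume slice by slice with a simple recurrence,
    emitting one higher-resolution output slice per input slice."""
    rng = np.random.default_rng(seed)
    depth, h, w = volume.shape
    w_enc = rng.standard_normal((h * w, hidden_dim)) * 0.1
    w_rec = rng.standard_normal((hidden_dim, hidden_dim)) * 0.1
    w_dec = rng.standard_normal((hidden_dim, out_res * out_res)) * 0.1
    hidden = np.zeros(hidden_dim)
    out_slices = []
    for d in range(depth):  # recurrence over the slice axis keeps memory per step small
        z = encode_slice(volume[d], w_enc)
        hidden = np.tanh(z + hidden @ w_rec)  # simple RNN cell, standing in for an LSTM
        out_slices.append(decode_slice(hidden, w_dec, (out_res, out_res)))
    return np.stack(out_slices)  # shape: (depth, out_res, out_res)

low_res = np.random.default_rng(1).random((4, 4, 4))  # toy 4x4x4 occupancy grid
high_res = lrcn_sketch(low_res)
print(high_res.shape)  # (4, 8, 8)
```

The point of the slice-sequence formulation is that memory scales with one 2D slice rather than the full 3D volume, which is what allows the paper's pipeline to produce higher-resolution output than the GAN stage alone.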

Figures and Tables from this paper

3D Model Inpainting Based on 3D Deep Convolutional Generative Adversarial Network
TLDR
The experimental results show that the proposed 3D mesh model repair method, based on the 3D Deep Convolutional Generative Adversarial Network (3D-DCGAN), can effectively generate the repaired region while retaining its details and blending it with the model being repaired.
High-Quality Textured 3D Shape Reconstruction with Cascaded Fully Convolutional Networks
TLDR
Qualitative and quantitative experimental results on both synthetic and real-world datasets demonstrate that the presented approach outperforms existing state-of-the-art work regarding visual quality and accuracy of reconstructed models.
A Spatial Relationship Preserving Adversarial Network for 3D Reconstruction from a Single Depth View
TLDR
Experimental results show that SRPAN not only outperforms several state-of-the-art methods by a large margin on both synthetic and real-world datasets, but also reconstructs unseen object categories with higher accuracy.
Visual Object Networks: Image Generation with Disentangled 3D Representations
TLDR
A new generative model, Visual Object Networks (VONs), synthesizing natural images of objects with a disentangled 3D representation that enables many 3D operations such as changing the viewpoint of a generated image, shape and texture editing, linear interpolation in texture and shape space, and transferring appearance across different objects and viewpoints.
3DFaceGAN: Adversarial Nets for 3D Face Representation, Generation, and Translation
TLDR
3DFaceGAN is presented, the first GAN tailored towards modeling the distribution of 3D facial surfaces, while retaining the high-frequency details of 3D face shapes.
Learning to Reconstruct High-Quality 3D Shapes with Cascaded Fully Convolutional Networks
TLDR
A novel cascaded 3D convolutional network architecture is introduced, which learns to reconstruct implicit surface representations from noisy and incomplete depth maps in a progressive, coarse-to-fine manner.
Trilateral convolutional neural network for 3D shape reconstruction of objects from a single depth view
TLDR
The proposed Tri-CNN combines three dilated convolutions in 3D to expand the convolutional receptive field more efficiently for learning shape reconstructions, and produces superior reconstruction results in terms of intersection-over-union values and Brier scores with significantly fewer model parameters and less memory.
3D Object Dense Reconstruction from a Single Depth View
TLDR
The key idea is to combine the generative capabilities of autoencoders and the conditional Generative Adversarial Networks framework, to infer accurate and fine-grained 3D structures of objects in high-dimensional voxel space.
SlabGAN: a method for generating efficient 3D anisotropic medical volumes using generative adversarial networks
TLDR
The SlabGAN uses the progressive GAN architecture extended to 3D, but removes the requirement of the three dimensions being equal sizes, which permits the generation of anisotropic 3D volumes with large x and y dimensions.
...

References

SHOWING 1-10 OF 31 REFERENCES
Shape Completion Using 3D-Encoder-Predictor CNNs and Shape Synthesis
TLDR
A data-driven approach to completing partial 3D shapes through a combination of volumetric deep neural networks and 3D shape synthesis, centered on a 3D-Encoder-Predictor Network (3D-EPN) composed of 3D convolutional layers.
Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling
TLDR
A novel framework, namely 3D Generative Adversarial Network (3D-GAN), which generates 3D objects from a probabilistic space by leveraging recent advances in volumetric convolutional networks and generative adversarial nets, and a powerful 3D shape descriptor which has wide applications in 3D object recognition.
Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks
TLDR
A generative parametric model capable of producing high quality samples of natural images using a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion.
Context Encoders: Feature Learning by Inpainting
TLDR
It is found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures, and can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.
High-Resolution Image Inpainting Using Multi-scale Neural Patch Synthesis
TLDR
This work proposes a multi-scale neural patch synthesis approach based on joint optimization of image content and texture constraints, which not only preserves contextual structures but also produces high-frequency details by matching and adapting patches with the most similar mid-layer feature correlations of a deep classification network.
Perspective Transformer Nets: Learning Single-View 3D Object Reconstruction without 3D Supervision
TLDR
An encoder-decoder network with a novel projection loss, defined by the projective transformation, enables unsupervised learning from 2D observations without explicit 3D supervision, and shows superior performance and better generalization for 3D object reconstruction when the projection loss is used.
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
TLDR
This work introduces a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrates that they are a strong candidate for unsupervised learning.
3D ShapeNets: A deep representation for volumetric shapes
TLDR
This work proposes to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network, and shows that this 3D deep representation enables significant performance improvement over the state of the art in a variety of tasks.
3D-R2N2: A Unified Approach for Single and Multi-view 3D Object Reconstruction
TLDR
The 3D-R2N2 reconstruction framework outperforms the state-of-the-art methods for single view reconstruction, and enables the 3D reconstruction of objects in situations when traditional SFM/SLAM methods fail (because of lack of texture and/or wide baseline).
Multi-view Convolutional Neural Networks for 3D Shape Recognition
TLDR
This work presents a standard CNN architecture trained to recognize the shapes' rendered views independently of each other, and shows that a 3D shape can be recognized even from a single view at an accuracy far higher than using state-of-the-art 3D shape descriptors.
...