Corpus ID: 247158374

Pix2NeRF: Unsupervised Conditional π-GAN for Single Image to Neural Radiance Fields Translation

@article{Cai2022Pix2NeRFUC,
  title={Pix2NeRF: Unsupervised Conditional $\pi$-GAN for Single Image to Neural Radiance Fields Translation},
  author={Shengqu Cai and Anton Obukhov and Dengxin Dai and Luc Van Gool},
  journal={ArXiv},
  year={2022},
  volume={abs/2202.13162}
}
We propose a pipeline to generate Neural Radiance Fields (NeRF) of an object or a scene of a specific class, conditioned on a single input image. This is a challenging task, as training NeRF requires multiple views of the same scene, coupled with corresponding poses, which are hard to obtain. Our method is based on π-GAN, a generative model for unconditional 3D-aware image synthesis, which maps random latent codes to radiance fields of a class of objects. We jointly optimize (1) the π-GAN…
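
As a minimal sketch of how such a pipeline can be wired up: an encoder inverts a single image into a latent code and a pose, the conditional generator re-renders it, and a latent cycle term keeps the GAN sampling path consistent with the encoder. All module names, sizes, and the loss mix below are illustrative placeholders, not the authors' implementation; the real method uses the π-GAN SIREN generator with volume rendering and adversarial training, omitted here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    # Placeholder: maps one RGB image to a latent code and a pose guess.
    def __init__(self, z_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.to_z = nn.Linear(64, z_dim)
        self.to_pose = nn.Linear(64, 2)   # e.g. yaw/pitch on a viewing sphere

    def forward(self, img):
        h = self.backbone(img)
        return self.to_z(h), self.to_pose(h)

class Generator(nn.Module):
    # Stand-in for the pi-GAN SIREN generator + volume renderer.
    def __init__(self, z_dim=256, img_size=32):
        super().__init__()
        self.img_size = img_size
        self.net = nn.Sequential(
            nn.Linear(z_dim + 2, 512), nn.ReLU(),
            nn.Linear(512, 3 * img_size * img_size), nn.Tanh(),
        )

    def forward(self, z, pose):
        x = self.net(torch.cat([z, pose], dim=1))
        return x.view(-1, 3, self.img_size, self.img_size)

E, G = Encoder(), Generator()
opt = torch.optim.Adam(list(E.parameters()) + list(G.parameters()), lr=2e-4)

real = torch.rand(4, 3, 32, 32)            # a batch of single-view images
z, pose = E(real)                          # invert: image -> (latent, pose)
recon = G(z, pose)                         # re-render from the predicted code
z_prior = torch.randn_like(z)              # GAN path: latent sampled from prior
z_cycle, _ = E(G(z_prior, pose.detach()))  # latent consistency (cycle) path

loss = F.mse_loss(recon, real) + F.mse_loss(z_cycle, z_prior)
# (+ the adversarial terms from a pi-GAN discriminator, not shown)
opt.zero_grad(); loss.backward(); opt.step()
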
Citations

Generative Adversarial Networks for Image Super-Resolution: A Survey
TLDR
This paper surveys popular GAN architectures for image applications with large and small sample sizes, and analyzes the motivations, implementations, and differences of GAN-based optimization methods and discriminative learning for image super-resolution in supervised, semi-supervised, and unsupervised settings.
NeRF, meet differential geometry!
TLDR
This work shows how a direct mathematical formalism of previously proposed NeRF variants, aimed at improving performance in challenging conditions, can natively encourage the regularity of surfaces (by means of Gaussian and mean curvatures), making it possible, for example, to learn surfaces from a very limited number of views.
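
For reference, if the surface is written as a level set $f(x) = 0$, both curvatures have standard closed forms in terms of the gradient $\nabla f$ and Hessian $H_f$ (Goldman's implicit-surface formulas; the paper's exact regularizer may differ):

$$
K_G = \frac{\nabla f \,\operatorname{adj}(H_f)\, \nabla f^{\top}}{\lVert \nabla f \rVert^{4}},
\qquad
K_M = \frac{\nabla f \, H_f \, \nabla f^{\top} - \lVert \nabla f \rVert^{2} \operatorname{tr}(H_f)}{2\,\lVert \nabla f \rVert^{3}},
$$

so a curvature regularizer can take a form such as $\mathcal{L}_{\mathrm{curv}} = \mathbb{E}_{x}\big[\,\lvert K_G(x)\rvert + \lvert K_M(x)\rvert\,\big]$ over points sampled near the surface.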

References

Showing 1–10 of 50 references
pixelNeRF: Neural Radiance Fields from One or Few Images
We propose pixelNeRF, a learning framework that predicts a continuous neural scene representation conditioned on one or few input images. The existing approach for constructing neural radiance fields…
GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis
TLDR
This paper proposes a generative model for radiance fields, which have recently proven successful for novel view synthesis of a single scene, and introduces a multi-scale patch-based discriminator that enables synthesis of high-resolution images while training the model from unposed 2D images alone.
HoloGAN: Unsupervised Learning of 3D Representations From Natural Images
TLDR
HoloGAN is the first generative model that learns 3D representations from natural images in an entirely unsupervised manner, and it is shown to generate images with visual quality similar to or higher than that of other generative models.
Unsupervised Novel View Synthesis from a Single Image
TLDR
This work pre-trains a purely generative decoder model using a GAN formulation while simultaneously training an encoder network to invert the mapping from latent codes to images, and shows that the framework achieves results comparable to the state of the art on ShapeNet.
CIPS-3D: A 3D-Aware Generator of GANs Based on Conditionally-Independent Pixel Synthesis
TLDR
CIPS-3D, a style-based, 3D-aware generator composed of a shallow NeRF network and a deep implicit neural representation (INR) network, is presented; it synthesizes each pixel value independently, without any spatial convolution or upsampling operation.
NeRF--: Neural Radiance Fields Without Known Camera Parameters
TLDR
It is shown that camera parameters can be jointly optimised as learnable parameters during NeRF training through a photometric reconstruction loss, and that this joint optimisation pipeline recovers accurate camera parameters and achieves novel view synthesis quality comparable to models trained with COLMAP pre-computed camera parameters.
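
The core idea, treating per-image camera poses as learnable parameters that receive gradients from the same photometric loss as the field, can be sketched as below. Everything here (the tiny field, toy rays, small-angle rotation) is a hedged placeholder, not the authors' code:

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyField(nn.Module):
    # Stand-in for the NeRF MLP + volume renderer: rays in, colors out.
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(6, 64), nn.ReLU(), nn.Linear(64, 3))

    def forward(self, rays_o, rays_d):
        return self.mlp(torch.cat([rays_o, rays_d], dim=-1))

n_images, n_rays = 10, 128
field = TinyField()
poses = nn.Parameter(torch.zeros(n_images, 6))  # per-image 6-DoF pose, zero-initialized
opt = torch.optim.Adam([*field.parameters(), poses], lr=1e-3)
pixels = torch.rand(n_images, n_rays, 3)        # toy ground-truth pixel colors

for step in range(200):
    i = torch.randint(n_images, (1,)).item()
    t, w = poses[i, :3], poses[i, 3:]                # translation + rotation vector
    d = F.normalize(torch.randn(n_rays, 3), dim=-1)  # toy canonical ray directions
    d = d + torch.cross(w.expand_as(d), d, dim=-1)   # small-angle rotation: R d ≈ d + w × d
    o = t.expand(n_rays, 3)
    loss = F.mse_loss(field(o, d), pixels[i])        # photometric reconstruction
    opt.zero_grad(); loss.backward(); opt.step()     # gradients flow into poses too
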
Stereo Radiance Fields (SRF): Learning View Synthesis for Sparse Views of Novel Scenes
TLDR
Stereo Radiance Fields (SRF), a neural view synthesis approach that is trained end-to-end, generalizes to new scenes, and requires only sparse views at test time, is introduced. Experiments show that SRF learns structure instead of over-fitting to a scene, achieving significantly sharper, more detailed results than scene-specific models.
GNeRF: GAN-based Neural Radiance Field without Posed Camera
TLDR
GNeRF, a framework that marries Generative Adversarial Networks (GANs) with Neural Radiance Field reconstruction for complex scenarios with unknown or even randomly initialized camera poses, is introduced; it outperforms the baselines in scenes with repeated patterns or low texture, previously regarded as extremely challenging.
Efficient Geometry-aware 3D Generative Adversarial Networks
TLDR
This work introduces an expressive hybrid explicit-implicit network architecture that not only synthesizes high-resolution, multi-view-consistent images in real time but also produces high-quality 3D geometry.
pi-GAN: Periodic Implicit Generative Adversarial Networks for 3D-Aware Image Synthesis
TLDR
This work proposes a novel generative model, named Periodic Implicit Generative Adversarial Networks ($\pi$-GAN or pi-GAN), for high-quality 3D-aware image synthesis that leverages neural representations with periodic activation functions and volumetric rendering to represent scenes as view-consistent 3D representations with fine detail.
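
The "periodic activation functions" here are SIREN-style sine layers. A minimal sketch of such a layer with the standard SIREN initialization follows; the actual π-GAN generator additionally conditions these layers via FiLM modulation from a mapping network, which is omitted:

import math
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    # SIREN-style layer: a linear map followed by a sine activation.
    def __init__(self, in_f, out_f, w0=30.0, first=False):
        super().__init__()
        self.w0 = w0
        self.linear = nn.Linear(in_f, out_f)
        # Standard SIREN weight initialization (Sitzmann et al.).
        bound = 1.0 / in_f if first else math.sqrt(6.0 / in_f) / w0
        nn.init.uniform_(self.linear.weight, -bound, bound)

    def forward(self, x):
        return torch.sin(self.w0 * self.linear(x))

# 3D coordinates in, (density + color features) out, queried per sample point:
net = nn.Sequential(SineLayer(3, 256, first=True), SineLayer(256, 256), nn.Linear(256, 4))
out = net(torch.rand(1024, 3) * 2 - 1)  # (1024, 4) for points in [-1, 1]^3
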
...