Corpus ID: 247158374

Pix2NeRF: Unsupervised Conditional π-GAN for Single Image to Neural Radiance Fields Translation

@inproceedings{cai2022pix2nerf,
  title={Pix2NeRF: Unsupervised Conditional $\pi$-GAN for Single Image to Neural Radiance Fields Translation},
  author={Shengqu Cai and Anton Obukhov and Dengxin Dai and Luc Van Gool},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022}
}
We propose a pipeline to generate Neural Radiance Fields (NeRF) of an object or a scene of a specific class, conditioned on a single input image. This is a challenging task, as training NeRF requires multiple views of the same scene, coupled with corresponding poses, which are hard to obtain. Our method is based on π-GAN, a generative model for unconditional 3D-aware image synthesis, which maps random latent codes to radiance fields of a class of objects. We jointly optimize (1) the π-GAN…
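As a rough illustration of the joint objective the abstract describes, here is a minimal, heavily simplified PyTorch sketch: an encoder predicts a latent code and pose from the input image, a generator (a toy MLP standing in for the π-GAN radiance-field renderer) produces an image from that code, and the two are trained with an adversarial loss plus a reconstruction loss that ties the encoder to the generator. All module names, network sizes, and loss weights below are illustrative assumptions, not the authors' implementation.

# Minimal sketch of the conditional pi-GAN training idea (not the Pix2NeRF code).
import torch
import torch.nn as nn
import torch.nn.functional as F

IMG_DIM, LATENT_DIM = 64 * 64 * 3, 128          # hypothetical sizes, for illustration only

class Encoder(nn.Module):                        # image -> latent code + camera pose (toy MLP)
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(IMG_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, LATENT_DIM + 3))
    def forward(self, img):
        out = self.net(img.flatten(1))
        return out[:, :LATENT_DIM], out[:, LATENT_DIM:]     # (latent code, 3-DoF pose)

class Generator(nn.Module):                      # latent code + pose -> image (stand-in for the pi-GAN renderer)
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT_DIM + 3, 256), nn.ReLU(),
                                 nn.Linear(256, IMG_DIM), nn.Tanh())
    def forward(self, z, pose):
        return self.net(torch.cat([z, pose], dim=1)).view(-1, 3, 64, 64)

class Discriminator(nn.Module):                  # real/fake score on images
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(IMG_DIM, 256), nn.ReLU(), nn.Linear(256, 1))
    def forward(self, img):
        return self.net(img.flatten(1))

E, G, D = Encoder(), Generator(), Discriminator()
opt_eg = torch.optim.Adam(list(E.parameters()) + list(G.parameters()), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

real = torch.rand(4, 3, 64, 64) * 2 - 1          # placeholder batch of single-view "real" images

# Discriminator step: real images vs. images generated from random latent codes and poses.
z, pose = torch.randn(4, LATENT_DIM), torch.randn(4, 3) * 0.1
fake = G(z, pose).detach()
d_loss = F.softplus(-D(real)).mean() + F.softplus(D(fake)).mean()
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Encoder/generator step: adversarial loss on generated images plus a reconstruction loss
# that forces the generator to reproduce the input image from the encoded code, which is
# what makes the otherwise unconditional GAN conditional on a single image.
z_enc, pose_enc = E(real)
recon = G(z_enc, pose_enc)
g_adv = F.softplus(-D(G(torch.randn(4, LATENT_DIM), pose))).mean()
g_rec = F.mse_loss(recon, real)
opt_eg.zero_grad(); (g_adv + g_rec).backward(); opt_eg.step()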
Generative Adversarial Networks for Image Super-Resolution: A Survey
This survey presents popular GAN architectures for image applications with both large and small sample sizes, and analyzes the motivations, implementations, and differences of GAN-based optimization methods and discriminative learning for image super-resolution in supervised, semi-supervised, and unsupervised settings.
NeRF, meet differential geometry!
This work shows how a direct mathematical formalism of previously proposed NeRF variants, aimed at improving performance in challenging conditions, can be used to natively encourage surface regularity (by means of Gaussian and mean curvatures), making it possible, for example, to learn surfaces from a very limited number of views.
pixelNeRF: Neural Radiance Fields from One or Few Images
We propose pixelNeRF, a learning framework that predicts a continuous neural scene representation conditioned on one or few input images, in contrast to the existing approach of optimizing a separate neural radiance field for every scene…
GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis
This paper proposes a generative model for radiance fields which have recently proven successful for novel view synthesis of a single scene, and introduces a multi-scale patch-based discriminator to demonstrate synthesis of high-resolution images while training the model from unposed 2D images alone.
Unsupervised Novel View Synthesis from a Single Image
This work pre-trains a purely generative decoder model using a GAN formulation while simultaneously training an encoder network to invert the mapping from latent codes to images, and shows that the framework achieves results comparable to the state of the art on ShapeNet.
HoloGAN: Unsupervised Learning of 3D Representations From Natural Images
HoloGAN is the first generative model that learns 3D representations from natural images in an entirely unsupervised manner, and it is shown to generate images with similar or higher visual quality than other generative models.
CIPS-3D: A 3D-Aware Generator of GANs Based on Conditionally-Independent Pixel Synthesis
CIPS-3D is presented, a style-based, 3D-aware generator that is composed of a shallow NeRF network and a deep implicit neural representation (INR) network that synthesizes each pixel value independently without any spatial convolution or upsampling operation.
NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
This work describes how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrates results that outperform prior work on neural rendering and view synthesis.
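For reference, the discretized volume-rendering step that NeRF optimizes can be written as follows (standard notation assumed, not quoted from this page):

\hat{C}(\mathbf{r}) = \sum_{i=1}^{N} T_i \,\big(1 - \exp(-\sigma_i \delta_i)\big)\, \mathbf{c}_i,
\qquad T_i = \exp\Big(-\sum_{j<i} \sigma_j \delta_j\Big),

where \sigma_i and \mathbf{c}_i are the density and color predicted by the MLP at the i-th sample along camera ray \mathbf{r}, and \delta_i is the distance between adjacent samples; the field is trained by minimizing the squared error between \hat{C}(\mathbf{r}) and the observed pixel color.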
NeRF--: Neural Radiance Fields Without Known Camera Parameters
It is shown that camera parameters can be jointly optimised as learnable parameters during NeRF training through photometric reconstruction, and that this joint optimisation pipeline recovers accurate camera parameters and achieves novel view synthesis quality comparable to models trained with COLMAP pre-computed camera parameters.
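A minimal sketch of this idea, assuming a toy stand-in for the radiance field and renderer (the names field, poses, and focal are illustrative and not taken from NeRF--): the camera parameters are ordinary learnable tensors optimized by the same photometric loss as the network weights.

# Camera parameters as learnable tensors, optimized jointly with a toy "radiance field".
import torch
import torch.nn as nn

num_images = 10
field = nn.Sequential(nn.Linear(6, 64), nn.ReLU(), nn.Linear(64, 3))   # stand-in for the radiance field
poses = nn.Parameter(torch.zeros(num_images, 6))    # per-image pose (axis-angle + translation), learnable
focal = nn.Parameter(torch.tensor(100.0))           # shared focal length, also learnable

opt = torch.optim.Adam(list(field.parameters()) + [poses, focal], lr=1e-3)

pixels = torch.rand(num_images, 3)                   # one target pixel per image, for brevity

for step in range(200):
    idx = torch.randint(0, num_images, (4,))
    # A real implementation would cast rays from poses[idx] and focal and volume-render
    # the field along them; here the pose simply parameterizes the toy field's input.
    pred = field(poses[idx] * (focal / 100.0))
    loss = ((pred - pixels[idx]) ** 2).mean()        # photometric reconstruction error
    opt.zero_grad(); loss.backward(); opt.step()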
Stereo Radiance Fields (SRF): Learning View Synthesis for Sparse Views of Novel Scenes
Stereo Radiance Fields (SRF) is introduced, a neural view synthesis approach that is trained end-to-end, generalizes to new scenes, and requires only sparse views at test time. Experiments show that SRF learns structure instead of overfitting to a scene, achieving significantly sharper, more detailed results than scene-specific models.
Efficient Geometry-aware 3D Generative Adversarial Networks
This work introduces an expressive hybrid explicit-implicit network architecture that synthesizes not only high-resolution multi-view-consistent images in real time but also produces high-quality 3D geometry.
GNeRF: GAN-based Neural Radiance Field without Posed Camera
This work introduces GNeRF, a framework that marries Generative Adversarial Networks (GANs) with Neural Radiance Field reconstruction for complex scenarios with unknown and even randomly initialized camera poses; it outperforms the baselines in scenes with repeated patterns or low textures that were previously regarded as extremely challenging.