• Corpus ID: 247158374

# Pix2NeRF: Unsupervised Conditional π-GAN for Single Image to Neural Radiance Fields Translation

@article{Cai2022Pix2NeRFUC,
title={Pix2NeRF: Unsupervised Conditional $\pi$-GAN for Single Image to Neural Radiance Fields Translation},
author={Shengqu Cai and Anton Obukhov and Dengxin Dai and Luc Van Gool},
journal={ArXiv},
year={2022},
volume={abs/2202.13162}
}
• Published 26 February 2022
• Computer Science
• ArXiv
We propose a pipeline to generate Neural Radiance Fields (NeRF) of an object or a scene of a specific class, conditioned on a single input image. This is a challenging task, as training NeRF requires multiple views of the same scene, coupled with corresponding poses, which are hard to obtain. Our method is based on π-GAN, a generative model for unconditional 3D-aware image synthesis, which maps random latent codes to radiance fields of a class of objects. We jointly optimize (1) the π-GAN…
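The pipeline described in the abstract pairs π-GAN's generator with an encoder that inverts input images into the generator's latent space, so that a single image yields a conditioned radiance field. A minimal numpy sketch of that conditioning loop, with toy linear maps standing in for the real CNN encoder and SIREN-based π-GAN generator (all names and dimensions here are hypothetical, and the adversarial term is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions; the real model uses a CNN encoder and a
# SIREN-based pi-GAN generator rendering full radiance fields.
IMG_DIM, LATENT_DIM = 12, 4

# Toy linear "encoder" E: image -> latent code z,
# and "generator" G: (z, pose) -> rendered image.
W_enc = rng.normal(size=(LATENT_DIM, IMG_DIM)) * 0.1
W_gen = rng.normal(size=(IMG_DIM, LATENT_DIM + 1)) * 0.1

def encode(img):
    return W_enc @ img

def generate(z, pose):
    return W_gen @ np.concatenate([z, [pose]])

# Reconstruction objective: re-render the input from its inferred latent at an
# estimated pose; Pix2NeRF trains this jointly with pi-GAN's adversarial loss.
img = rng.normal(size=IMG_DIM)
pose = 0.3
recon = generate(encode(img), pose)
recon_loss = float(np.mean((recon - img) ** 2))
```

The sketch only shows the encoder–generator round trip that makes the model conditional; the unconditional GAN training of the generator proceeds as in π-GAN.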
2 Citations

## Citations

Generative Adversarial Networks for Image Super-Resolution: A Survey
• Computer Science
• 2022
This paper presents popular GAN architectures for large- and small-sample image applications, and analyzes motivations, implementations, and differences of GAN-based optimization methods and discriminative learning for image super-resolution in supervised, semi-supervised, and unsupervised settings.
NeRF, meet differential geometry!
• Computer Science
• 2022
This work shows how a direct mathematical formalism of previously proposed NeRF variants aimed at improving the performance in challenging conditions can be used to natively encourage the regularity of surfaces (by means of Gaussian and Mean Curvatures) making it possible, for example, to learn surfaces from a very limited number of views.

## References

SHOWING 1-10 OF 50 REFERENCES
pixelNeRF: Neural Radiance Fields from One or Few Images
• Computer Science
2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
• 2021
We propose pixelNeRF, a learning framework that predicts a continuous neural scene representation conditioned on one or few input images. The existing approach for constructing neural radiance fields…
GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis
• Computer Science
NeurIPS
• 2020
This paper proposes a generative model for radiance fields which have recently proven successful for novel view synthesis of a single scene, and introduces a multi-scale patch-based discriminator to demonstrate synthesis of high-resolution images while training the model from unposed 2D images alone.
Unsupervised Novel View Synthesis from a Single Image
• Computer Science
SSRN Electronic Journal
• 2021
This work pre-trains a purely generative decoder model using a GAN formulation while simultaneously training an encoder network to invert the mapping from latent code to images, and shows that the framework achieves results comparable to the state of the art on ShapeNet.
HoloGAN: Unsupervised Learning of 3D Representations From Natural Images
• Computer Science
2019 IEEE/CVF International Conference on Computer Vision (ICCV)
• 2019
HoloGAN is the first generative model that learns 3D representations from natural images in an entirely unsupervised manner and is shown to be able to generate images with similar or higher visual quality than other generative models.
CIPS-3D: A 3D-Aware Generator of GANs Based on Conditionally-Independent Pixel Synthesis
• Computer Science
ArXiv
• 2021
CIPS-3D is presented, a style-based, 3D-aware generator that is composed of a shallow NeRF network and a deep implicit neural representation (INR) network that synthesizes each pixel value independently without any spatial convolution or upsampling operation.
NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
• Computer Science
ECCV
• 2020
This work describes how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrates results that outperform prior work on neural rendering and view synthesis.
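The optimization described in this reference renders each pixel with the discrete volume-rendering quadrature C = Σᵢ Tᵢ (1 − exp(−σᵢ δᵢ)) cᵢ, with transmittance Tᵢ = exp(−Σ_{j<i} σⱼ δⱼ). A small numpy sketch of that compositing step (the sample values below are illustrative, not from the paper):

```python
import numpy as np

def composite(sigmas, colors, deltas):
    """Discrete volume-rendering quadrature used by NeRF:
    C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
    with transmittance T_i = exp(-sum_{j<i} sigma_j * delta_j)."""
    alpha = 1.0 - np.exp(-sigmas * deltas)          # per-sample opacity
    # Exclusive cumulative product of (1 - alpha) gives transmittance T_i.
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alpha)[:-1]])
    weights = trans * alpha                          # per-sample contribution
    return weights @ colors, weights

# Two samples along a ray: a semi-transparent red sample in front of a
# nearly opaque blue one.
sigmas = np.array([0.5, 10.0])
colors = np.array([[1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0]])
deltas = np.array([1.0, 1.0])
rgb, w = composite(sigmas, colors, deltas)
```

Note that the weights sum to 1 − exp(−Σ σᵢδᵢ): any density the ray does not absorb is left for the background.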
NeRF--: Neural Radiance Fields Without Known Camera Parameters
• Computer Science
ArXiv
• 2021
It is shown that the camera parameters can be jointly optimised as learnable parameters during NeRF training through photometric reconstruction, and that the joint optimisation pipeline can recover accurate camera parameters and achieve novel view synthesis quality comparable to models trained with COLMAP pre-computed camera parameters.
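The joint optimisation idea above, treating camera parameters as learnable alongside the scene and descending on photometric error, can be illustrated on a toy 1-D problem (this is a hypothetical stand-in model, not the paper's camera parameterisation):

```python
import numpy as np

# Toy "scene": intensity field f(x) = a * sin(x); the "camera" applies a
# shift t. Ground truth: a = 2.0, t = 0.5. We observe a*sin(x + t) at fixed x.
xs = np.linspace(0.0, 3.0, 32)
target = 2.0 * np.sin(xs + 0.5)

a, t = 1.0, 0.0          # scene and camera parameters, both learnable
lr = 0.05
for _ in range(2000):
    pred = a * np.sin(xs + t)
    err = pred - target
    # Analytic gradients of the photometric (MSE) loss wrt a and t.
    grad_a = 2.0 * np.mean(err * np.sin(xs + t))
    grad_t = 2.0 * np.mean(err * a * np.cos(xs + t))
    a -= lr * grad_a
    t -= lr * grad_t

final_loss = float(np.mean((a * np.sin(xs + t) - target) ** 2))
```

Both the scene amplitude and the camera shift are recovered from pixels alone, which is the same coupling NeRF-- exploits at scale with real camera intrinsics and extrinsics.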
Stereo Radiance Fields (SRF): Learning View Synthesis for Sparse Views of Novel Scenes
• Computer Science
2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
• 2021
Stereo Radiance Fields is introduced, a neural view synthesis approach that is trained end-to-end, generalizes to new scenes, and requires only sparse views at test time; experiments show that SRF learns structure instead of over-fitting to a scene, achieving significantly sharper, more detailed results than scene-specific models.
Efficient Geometry-aware 3D Generative Adversarial Networks
• Computer Science
ArXiv
• 2021
This work introduces an expressive hybrid explicit-implicit network architecture that synthesizes not only high-resolution multi-view-consistent images in real time but also produces high-quality 3D geometry.
GNeRF: GAN-based Neural Radiance Field without Posed Camera
• Computer Science
2021 IEEE/CVF International Conference on Computer Vision (ICCV)
• 2021
GNeRF, a framework marrying Generative Adversarial Networks (GANs) with Neural Radiance Field reconstruction for complex scenarios with unknown or even randomly initialized camera poses, is introduced; it outperforms the baselines in scenes with repeated patterns or low textures that were previously regarded as extremely challenging.