GRAM: Generative Radiance Manifolds for 3D-Aware Image Generation
@article{Deng2021GRAMGR,
  title   = {GRAM: Generative Radiance Manifolds for 3D-Aware Image Generation},
  author  = {Yu Deng and Jiaolong Yang and Jianfeng Xiang and Xin Tong},
  journal = {ArXiv},
  year    = {2021},
  volume  = {abs/2112.08867}
}
3D-aware image generative modeling aims to generate 3D-consistent images with explicitly controllable camera poses. Recent works have shown promising results by training neural radiance field (NeRF) generators on unstructured 2D images, but they still cannot generate highly realistic images with fine details. A critical reason is that the high memory and computation cost of volumetric representation learning greatly restricts the number of point samples for radiance integration during training…
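The cost argument in the abstract comes from NeRF-style volume rendering: each pixel's colour is an integral along a camera ray, approximated in practice by a weighted sum over point samples, so memory and compute grow with the number of samples per ray. Below is a minimal sketch of that quadrature, assuming generic NeRF compositing rather than GRAM's manifold-based sampling; the function and variable names are illustrative only.

```python
import numpy as np

def volume_render(sigmas, colors, deltas):
    """Composite per-sample densities and colors along one ray (NeRF-style quadrature).

    sigmas: (N,) non-negative densities at the N point samples
    colors: (N, 3) RGB values at the samples
    deltas: (N,) distances between consecutive samples
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)                          # opacity of each segment
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))   # transmittance T_i up to sample i
    weights = trans * alphas                                         # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)                   # accumulated ray color

# Toy usage: 64 samples along a single ray with random densities and colors.
rng = np.random.default_rng(0)
N = 64
rgb = volume_render(rng.uniform(0, 2, N), rng.uniform(0, 1, (N, 3)), np.full(N, 1.0 / N))
print(rgb)
```

In full NeRF-generator training this sum runs over dozens of samples per ray for every pixel of every rendered image, which is where the memory and computation pressure described above comes from.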
8 Citations
FENeRF: Face Editing in Neural Radiance Fields
- Computer Science · ArXiv
- 2021
This work proposes FENeRF, a 3D-aware generator that can produce view-consistent and locally-editable portrait images, and reveals that jointly learning semantics and texture helps to generate finer geometry.
VoLux-GAN: A Generative Model for 3D Face Synthesis with HDRI Relighting
- Computer Science
- 2022
VoLux-GAN is proposed, a generative framework to synthesize 3D-aware faces with convincing relighting; it uses a volumetric HDRI relighting method that can efficiently accumulate albedo, diffuse, and specular lighting contributions along each 3D ray for any desired HDR environment map.
StyleSDF: High-Resolution 3D-Consistent Image and Geometry Generation
- Computer Science · ArXiv
- 2021
This work introduces a high-resolution, 3D-consistent image and shape generation technique that is trained on single-view RGB data only; it stands on the shoulders of StyleGAN2 for image generation while solving two main challenges in 3D-aware GANs.
3D GAN Inversion for Controllable Portrait Image Animation
- Computer Science · ArXiv
- 2022
This work proposes a supervision strategy to flexibly manipulate expressions with 3D morphable models, and shows that the proposed method also supports editing appearance attributes, such as age or hairstyle, by interpolating within the latent space of the GAN.
AE-NeRF: Auto-Encoding Neural Radiance Fields for 3D-Aware Object Manipulation
- Computer Science
- 2022
We propose a novel framework for 3D-aware object manipulation, called Auto-Encoding Neural Radiance Fields (AE-NeRF). Our model, which is formulated in an auto-encoder architecture, extracts…
Advances in Neural Rendering
- Computer Science · Computer Graphics Forum
- 2022
Synthesizing photo-realistic images and videos is at the heart of computer graphics and has been the focus of decades of research. Traditionally, synthetic images of a scene are generated using…
Advances in neural rendering
- Biology, Psychology · SIGGRAPH Courses
- 2021
A SIGGRAPH course on recent advances in neural rendering, including a segment by Jun-Yan Zhu on loss functions for neural rendering.
BACON: Band-limited Coordinate Networks for Multiscale Scene Representation
- Computer Science · ArXiv
- 2021
This work introduces band-limited coordinate networks (BACON), a network architecture with an analytical Fourier spectrum that has constrained behavior at unsupervised points, can be designed based on the spectral characteristics of the represented signal, and can represent signals at multiple scales without per-scale supervision.
References
Showing 1-10 of 74 references
Towards Unsupervised Learning of Generative Models for 3D Controllable Image Synthesis
- Computer Science · 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2020
This work defines the new task of 3D controllable image synthesis and proposes an approach for solving it by reasoning both in 3D space and in the 2D image domain, and demonstrates that the model is able to disentangle latent 3D factors of simple multi-object scenes in an unsupervised fashion from raw images.
HoloGAN: Unsupervised Learning of 3D Representations From Natural Images
- Computer Science · 2019 IEEE/CVF International Conference on Computer Vision (ICCV)
- 2019
HoloGAN is the first generative model that learns 3D representations from natural images in an entirely unsupervised manner and is shown to be able to generate images with similar or higher visual quality than other generative models.
StyleNeRF: A Style-based 3D-Aware Generator for High-resolution Image Synthesis
- Computer Science · ArXiv
- 2021
StyleNeRF is a 3D-aware generative model for photo-realistic high-resolution image synthesis with high multi-view consistency and enables control of camera poses and different levels of styles, which can generalize to unseen views and supports challenging tasks, including style mixing and semantic editing.
Visual Object Networks: Image Generation with Disentangled 3D Representations
- Computer Science, Art · NeurIPS
- 2018
A new generative model, Visual Object Networks (VONs), is presented, which synthesizes natural images of objects with a disentangled 3D representation that enables many 3D operations such as changing the viewpoint of a generated image, shape and texture editing, linear interpolation in texture and shape space, and transferring appearance across different objects and viewpoints.
Unconstrained Scene Generation with Locally Conditioned Radiance Fields
- Computer Science · 2021 IEEE/CVF International Conference on Computer Vision (ICCV)
- 2021
Generative Scene Networks is introduced, which learns to decompose scenes into a collection of many local radiance fields that can be rendered from a freely moving camera, and produces quantitatively higher-quality scene renderings across several different scene datasets.
UNISURF: Unifying Neural Implicit Surfaces and Radiance Fields for Multi-View Reconstruction
- Computer Science · 2021 IEEE/CVF International Conference on Computer Vision (ICCV)
- 2021
This work shows that implicit surface models and radiance fields can be formulated in a unified way, enabling both surface and volume rendering using the same model, and outperforms NeRF in terms of reconstruction quality while performing on par with IDR without requiring masks.
GANcraft: Unsupervised 3D Neural Rendering of Minecraft Worlds
- Computer Science · 2021 IEEE/CVF International Conference on Computer Vision (ICCV)
- 2021
GANcraft is presented, an unsupervised neural rendering framework for generating photorealistic images of large 3D block worlds such as those created in Minecraft, and allows user control over both scene semantics and output style.
RenderNet: A deep convolutional network for differentiable rendering from 3D shapes
- Computer Science · NeurIPS
- 2018
RenderNet is presented, a differentiable rendering convolutional network with a novel projection unit that can render 2D images from 3D shapes with high performance and can be used in inverse rendering tasks to estimate shape, pose, lighting and texture from a single image.
Differentiable Volumetric Rendering: Learning Implicit 3D Representations Without 3D Supervision
- Computer Science · 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2020
This work proposes a differentiable rendering formulation for implicit shape and texture representations, shows that depth gradients can be derived analytically using implicit differentiation, and finds that the method can be used for multi-view 3D reconstruction, directly resulting in watertight meshes.
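The "analytic depth gradients" claim is worth unpacking: the surface point along a ray is defined only implicitly by the network, yet its gradient with respect to the network parameters has a closed form. The following is a sketch of the implicit-differentiation argument in generic notation; the ray, field, and threshold symbols are ours and not necessarily the paper's.

```latex
% Ray r(d) = o + d w hits the surface at depth \hat d, where the implicit
% field f_\theta equals the level-set threshold \tau:
%   f_\theta\big(o + \hat d\, w\big) = \tau .
% Differentiating this identity w.r.t. the parameters \theta (chain rule):
%   \frac{\partial f_\theta}{\partial \theta}
%     + \nabla_p f_\theta \cdot w \,\frac{\partial \hat d}{\partial \theta} = 0 ,
% so the depth gradient is available in closed form:
\frac{\partial \hat d}{\partial \theta}
  = -\Big(\nabla_p f_\theta(\hat p) \cdot w\Big)^{-1}
    \frac{\partial f_\theta(\hat p)}{\partial \theta},
\qquad \hat p = o + \hat d\, w .
```

Only the field value and its spatial gradient at the surface point are needed, so the backward pass does not have to store intermediate volume samples along the ray.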
High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs
- Computer Science · 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- 2018
A new method for synthesizing high-resolution photo-realistic images from semantic label maps using conditional generative adversarial networks (conditional GANs) is presented, which significantly outperforms existing methods, advancing both the quality and the resolution of deep image synthesis and editing.