Single-image Full-body Human Relighting

Manuel Lagunas, Xin Sun, Jimei Yang, Ruben Villegas, Jianming Zhang, Zhixin Shu, Belén Masiá, Diego Gutierrez. Eurographics Symposium on Rendering.
We present a single-image data-driven method to automatically relight images with full-body humans in them. Our framework is based on a realistic scene decomposition leveraging precomputed radiance transfer (PRT) and spherical harmonics (SH) lighting. In contrast to previous work, we lift the assumptions on Lambertian materials and explicitly model diffuse and specular reflectance in our data. Moreover, we introduce an additional light-dependent residual term that accounts for errors in the PRT… 
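
The SH lighting component this line of work builds on can be sketched concretely. Below is a minimal, illustrative second-order (9-coefficient) spherical-harmonics diffuse shading routine in NumPy; the function names are ours, not the paper's, and the lighting coefficients are assumed to be already cosine-convolved:

```python
import numpy as np

def sh_basis_order2(n):
    """Evaluate the 9 real spherical-harmonic basis functions
    (bands 0-2) at unit direction(s) n, shape (..., 3)."""
    x, y, z = n[..., 0], n[..., 1], n[..., 2]
    return np.stack([
        0.282095 * np.ones_like(x),   # Y_0^0
        0.488603 * y,                 # Y_1^-1
        0.488603 * z,                 # Y_1^0
        0.488603 * x,                 # Y_1^1
        1.092548 * x * y,             # Y_2^-2
        1.092548 * y * z,             # Y_2^-1
        0.315392 * (3 * z**2 - 1),    # Y_2^0
        1.092548 * x * z,             # Y_2^1
        0.546274 * (x**2 - y**2),     # Y_2^2
    ], axis=-1)

def diffuse_sh_shading(normals, light_coeffs):
    """Per-pixel shading: dot the SH basis evaluated at each normal
    with 9 lighting coefficients (assumed cosine-convolved).
    normals: (H, W, 3) unit normals; light_coeffs: (9,)."""
    return np.clip(sh_basis_order2(normals) @ light_coeffs, 0.0, None)
```

Swapping the environment's SH coefficients then relights the diffuse component without re-estimating geometry.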

Geometry-aware Single-image Full-body Human Relighting

A geometry-aware single-image human relighting framework that leverages single-image geometry reconstruction to jointly deploy traditional graphics rendering and neural rendering techniques, and introduces a ray-tracing-based per-pixel lighting representation that explicitly models high-frequency shadows.
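
A ray-traced per-pixel shadow term of this kind can be illustrated with a toy visibility test; this sketch uses analytic spheres as stand-in occluders rather than the reconstructed human geometry the paper works with:

```python
import numpy as np

def sphere_hit(origin, direction, center, radius, eps=1e-4):
    """Return True if the ray origin + t*direction (t > eps, unit
    direction) intersects the sphere (center, radius)."""
    oc = origin - center
    b = np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - c
    if disc < 0:
        return False
    t = -b - np.sqrt(disc)   # nearest intersection distance
    return t > eps

def light_visibility(point, light_dir, spheres):
    """Hard-shadow term for one surface point under a distant light:
    0 if any occluder blocks the light direction, 1 otherwise.
    spheres: list of (center, radius) occluders."""
    d = np.asarray(light_dir, float)
    d = d / np.linalg.norm(d)
    for center, radius in spheres:
        if sphere_hit(np.asarray(point, float), d,
                      np.asarray(center, float), radius):
            return 0.0
    return 1.0
```

Evaluating this per pixel against the reconstructed mesh is what yields the high-frequency (hard) shadows that low-order SH lighting cannot represent.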

Relighting Humans in the Wild: Monocular Full-Body Human Relighting with Domain Adaptation

A two-stage method for single-image human relighting with domain adaptation is proposed, which generalizes better across varied clothing textures while reducing the domain gap.

Relighting4D: Neural Relightable Human from Videos

A principled framework that enables free-viewpoint relighting from only human videos under unknown illuminations, showing that the space-time varying geometry and reflectance of the human body can be decomposed into a set of neural fields of normal, occlusion, diffuse, and specular maps.

RelightableHands: Efficient Neural Relighting of Articulated Hand Models

This work presents the first neural relighting approach for rendering high-fidelity personalized hands that can be animated in real time under novel illumination; it adopts a teacher-student framework in which the teacher can synthesize hands under arbitrary illuminations, but at heavy compute cost.

Learning Visibility Field for Detailed 3D Human Reconstruction and Relighting

A novel sparse-view 3D human reconstruction framework is proposed that closely couples the occupancy field and albedo field with an additional visibility field; it surpasses the state of the art in reconstruction accuracy while achieving relighting comparably accurate to ray-traced ground truth.

Learning to Relight Portrait Images via a Virtual Light Stage and Synthetic-to-Real Adaptation

This work proposes a new approach that can perform on par with the state-of-the-art (SOTA) relighting methods without requiring a light stage, and develops a novel synthetic-to-real approach to bring photorealism to the relighting network output.

Structured 3D Features for Reconstructing Relightable and Animatable Avatars

This work presents a complete 3D transformer-based attention framework which, given a single image of a person in an unconstrained pose, generates an animatable 3D reconstruction with albedo and illumination decomposition, as a result of a single end-to-end model, trained semi-supervised, and with no additional postprocessing.

OutCast: Outdoor Single-image Relighting with Cast Shadows

This work proposes a learned image-space ray-marching layer that converts an approximate depth map into a deep 3D representation, which is fused into occlusion queries via a learned traversal, achieving state-of-the-art relighting results.

RANA: Relightable Articulated Neural Avatars

This work proposes RANA, a relightable and articulated neural avatar for the photorealistic synthesis of humans under arbitrary viewpoints, body poses, and lighting, together with a novel framework that models humans from monocular RGB videos while disentangling their geometry, texture, and lighting environment.

Learning Physics-Guided Face Relighting Under Directional Light

This work investigates end-to-end deep learning architectures that both de-light and relight an image of a human face, and decomposes the input image into intrinsic components according to a diffuse physics-based image formation model.

Single image portrait relighting via explicit multiple reflectance channel modeling

A novel framework is proposed that explicitly models multiple reflectance channels for single-image portrait relighting, including facial albedo and geometry as well as two lighting effects, i.e., specular highlights and shadows.

Deep Single-Image Portrait Relighting

This work applies a physically-based portrait relighting method to generate a large-scale, high-quality "in the wild" portrait relighting dataset (DPR); a deep convolutional neural network trained on this dataset then generates a relit portrait image from a source image and a target lighting.

Reflectance and Natural Illumination from Single-Material Specular Objects Using Deep Learning

A data-driven, learning-based approach trained on a very large dataset that estimates reflectance and illumination information from a single image depicting a single-material specular object from a given class under natural illumination is presented.

IBRNet: Learning Multi-View Image-Based Rendering

A method that synthesizes novel views of complex scenes by interpolating a sparse set of nearby views using a network architecture that includes a multilayer perceptron and a ray transformer that estimates radiance and volume density at continuous 5D locations.
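
The radiance and volume density estimated along each ray are composited with the standard volume-rendering quadrature. A minimal NumPy sketch of that compositing step (naming and discretization are illustrative, not IBRNet's actual implementation):

```python
import numpy as np

def volume_render(densities, colors, deltas):
    """Composite per-sample (density, color) pairs along one ray:
    alpha_i = 1 - exp(-sigma_i * delta_i), weight_i = T_i * alpha_i,
    where T_i is the transmittance accumulated before sample i.
    densities: (N,), colors: (N, 3), deltas: (N,) segment lengths."""
    alpha = 1.0 - np.exp(-densities * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha
    return weights @ colors  # (3,) composited pixel color
```

The same quadrature underlies NeRF-style methods generally; IBRNet's contribution is in how the per-sample radiance and density are predicted from nearby source views.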

SfSNet: Learning Shape, Reflectance and Illuminance of Faces 'in the Wild'

SfSNet produces significantly better quantitative and qualitative results than state-of-the-art methods for inverse rendering and for independent normal and illumination estimation, and is designed to reflect a physical Lambertian rendering model.
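
The Lambertian image-formation model such decompositions rest on can be written in a few lines; this sketch uses a single directional light for illustration (SfSNet itself parameterizes illumination with spherical harmonics):

```python
import numpy as np

def lambertian_render(albedo, normals, light_dir, light_color=1.0):
    """Lambertian image formation: I = albedo * max(0, n . l).
    albedo: (H, W, 3), normals: (H, W, 3) unit normals,
    light_dir: (3,) direction toward the light."""
    l = np.asarray(light_dir, float)
    l = l / np.linalg.norm(l)
    shading = np.clip(normals @ l, 0.0, None)[..., None]  # (H, W, 1)
    return albedo * shading * light_color
```

Inverse rendering inverts this map: given the image I, recover albedo, normals, and lighting, which is what makes the decomposition ambiguous without learned priors.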

Intrinsic Light Field Images

A new decomposition algorithm is proposed that jointly optimizes the whole light field for proper angular coherence, providing 4D intrinsic decompositions that are difficult to achieve with previous state-of-the-art algorithms.

NeRD: Neural Reflectance Decomposition from Image Collections

A neural reflectance decomposition (NeRD) technique that uses physically-based rendering to decompose the scene into spatially varying BRDF material properties, enabling real-time rendering under novel illuminations.

Neural Light Transport for Relighting and View Synthesis

Qualitative and quantitative experiments demonstrate that Neural Light Transport (NLT) outperforms state-of-the-art solutions for relighting and view synthesis, without the separate treatment of the two problems that prior work requires.

Single image portrait relighting

A neural network is presented that takes as input a single RGB image of a portrait taken with a standard cellphone camera in an unconstrained environment, and from that image produces a relit image of that subject as though it were illuminated according to any provided environment map.
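
Relighting under an arbitrary environment map for a diffuse surface amounts to a cosine-weighted integral of the map. A discrete NumPy sketch, assuming the environment is given as sampled directions with per-sample solid angles (a simplification of what such networks learn end-to-end):

```python
import numpy as np

def relight_diffuse(albedo, normals, env_dirs, env_radiance, solid_angles):
    """Diffuse relighting under a discretized environment map.
    albedo: (H, W, 3); normals: (H, W, 3) unit normals;
    env_dirs: (K, 3) unit sample directions; env_radiance: (K, 3);
    solid_angles: (K,) solid angle of each sample."""
    # Clamped cosine between every normal and every light direction.
    cos = np.clip(normals @ env_dirs.T, 0.0, None)             # (H, W, K)
    # Numerical integral of radiance * cosine over the sphere.
    irradiance = cos @ (env_radiance * solid_angles[:, None])  # (H, W, 3)
    return albedo * irradiance / np.pi                         # Lambertian BRDF = albedo/pi
```

Learned relighting networks effectively absorb this integral, plus occlusion and non-diffuse effects, into the network itself, which is why they can run from a single unconstrained photo.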