# PIE: Portrait Image Embedding for Semantic Control

```bibtex
@article{Tewari2020PIEPI,
  title={PIE: Portrait Image Embedding for Semantic Control},
  author={Ayush Tewari and Mohamed A. Elgharib and Mallikarjun B R. and Florian Bernard and Hans-Peter Seidel and Patrick P{\'e}rez and Michael Zollh{\"o}fer and Christian Theobalt},
  journal={ArXiv},
  year={2020},
  volume={abs/2009.09485}
}
```
• Published 20 September 2020
• Computer Science
• ArXiv
Editing of portrait images is a very popular and important research topic with a large variety of applications. For ease of use, control should be provided via a semantically meaningful parameterization that is akin to computer animation controls. The vast majority of existing techniques do not provide such intuitive and fine-grained control, or only enable coarse editing of a single isolated control parameter. Very recently, high-quality semantically controlled editing has been demonstrated…
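The embedding step the title refers to is, at its core, optimization-based GAN inversion: search the generator's latent space for a code whose output reconstructs the target portrait. As a minimal, hypothetical sketch of that recipe (a toy linear "generator" in NumPy stands in for StyleGAN; all names and parameters here are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a pretrained generator: a fixed linear map from an
# 8-D latent code to a 64-D "image". StyleGAN is nonlinear, but the
# inversion loop below has the same shape.
A = rng.standard_normal((64, 8))

def generate(w):
    return A @ w

# Target image produced by an (unknown) ground-truth latent code.
w_true = rng.standard_normal(8)
target = generate(w_true)

# Projection / embedding: gradient descent on the reconstruction loss
# L(w) = ||G(w) - target||^2, starting from a random latent code.
w = rng.standard_normal(8)
lr = 0.005
for _ in range(500):
    residual = generate(w) - target   # G(w) - target
    grad = 2.0 * A.T @ residual       # dL/dw for the linear toy G
    w -= lr * grad

error = np.linalg.norm(generate(w) - target)
print(error)  # reconstruction error, driven toward zero
```

Real systems replace the pixel loss with perceptual and regularization terms and back-propagate through the actual network, but the structure — pick a latent, render, compare, update — is the same.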

## Citations

PIRenderer: Controllable Portrait Image Generation via Semantic Neural Rendering
• Computer Science
2021 IEEE/CVF International Conference on Computer Vision (ICCV)
• 2021
The proposed Portrait Image Neural Renderer can generate photo-realistic portrait images with accurate movements according to intuitive modifications and is extended to tackle the audio-driven facial reenactment task by extracting sequential motions from audio inputs.
3D GAN Inversion for Controllable Portrait Image Animation
• Computer Science
ArXiv
• 2022
This work proposes a supervision strategy to flexibly manipulate expressions with 3D morphable models, and shows that the proposed method also supports editing appearance attributes, such as age or hairstyle, by interpolating within the latent space of the GAN.
PIE: Portrait Image Embedding for Semantic Control – Supplemental Document
Fig. 1. We present an approach for embedding portrait images in the latent space of StyleGAN [Karras et al. 2019] (visualized as "Projection"), which allows for intuitive photo-real semantic editing.
PhotoApp
• Computer Science
ACM Transactions on Graphics
• 2021
An approach for high-quality intuitive editing of the camera viewpoint and scene illumination (parameterised with an environment map) in a portrait image and it is shown that the StyleGAN prior allows for generalisation to different expressions, hairstyles and backgrounds.
Pivotal Tuning for Latent-based editing of Real Images
• Computer Science
ArXiv
• 2021
Pivotal Tuning Inversion enables employing off-the-shelf latent-based semantic editing techniques on real images using StyleGAN, and demonstrates resilience to harder cases, including heavy make-up, elaborate hairstyles and/or headwear, which otherwise could not have been successfully inverted and edited by state-of-the-art methods.
FreeStyleGAN: Free-view Editable Portrait Rendering with the Camera Manifold
• Computer Science
ArXiv
• 2021
Fig. 1. We introduce a new approach that generates an image with StyleGAN defined by a precise 3D camera. This enables faces synthesized with StyleGAN to be used in 3D free-viewpoint rendering.
Pose with Style
• Computer Science
ACM Trans. Graph.
• 2021
The StyleGAN generator is extended so that it takes pose as input and introduces a spatially varying modulation for the latent space using the warped local features (for controlling appearances) and compares favorably against the state-of-the-art algorithms.
RigNeRF: Fully Controllable Neural 3D Portraits
This work proposes RigNeRF, a system that goes beyond novel view synthesis and enables full control of head pose and facial expressions learned from a single portrait video, and demonstrates the effectiveness of the method on free-view synthesis of a portrait scene with explicit head pose and expression controls.
Neural Relighting and Expression Transfer On Video Portraits
• Computer Science
• 2021
A neural relighting and expression transfer technique to transfer the head pose and facial expressions from a source performer to a portrait video of a target performer while enabling dynamic relighting.
HyperStyle: StyleGAN Inversion with HyperNetworks for Real Image Editing
• Computer Science
ArXiv
• 2021
HyperStyle is proposed, a hypernetwork that learns to modulate StyleGAN’s weights to faithfully express a given image in editable regions of the latent space, and yields reconstructions comparable to those of optimization techniques with the near real-time inference capabilities of encoders.

## References

Showing 1–10 of 54 references
Style Transfer for Headshot Portraits
• Computer Science
ACM Trans. Graph.
• 2014
A technique to transfer the style of an example headshot photo onto a new one, which can allow one to easily reproduce the look of renowned artists, and can successfully handle styles by a variety of different artists.
Painting style transfer for head portraits using convolutional neural networks
• Art, Computer Science
ACM Trans. Graph.
• 2016
This work presents a new technique for transferring the painting style from one head portrait onto another, and imposes novel spatial constraints by locally transferring the color distributions of the example painting, which better captures the painting texture and maintains the integrity of facial structures.
Warp-guided GANs for single-photo facial animation
• Computer Science
ACM Trans. Graph.
• 2018
This paper introduces a novel method for realtime portrait animation in a single photo that factorizes out the nonlinear geometric transformations exhibited in facial expressions by lightweight 2D warps and leaves the appearance detail synthesis to conditional generative neural networks for high-fidelity facial animation generation.
StyleRig: Rigging StyleGAN for 3D Control Over Portrait Images
• Computer Science
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
• 2020
This work presents the first method to provide face rig-like control over a pretrained and fixed StyleGAN via a 3DMM; a new rigging network, RigNet, is trained between the 3DMM's semantic parameters and StyleGAN's input.
Image2StyleGAN++: How to Edit the Embedded Images?
• Computer Science
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
• 2020
A framework that combines embedding with activation tensor manipulation to perform high-quality local edits along with global semantic edits on images, which can restore high-frequency features in images and thus significantly improves the quality of reconstructed images.
Portrait lighting transfer using a mass transport approach
• Computer Science
ACM Trans. Graph.
• 2017
This work simplifies this task using a relighting technique that transfers the desired illumination of one portrait onto another using a 3D morphable face model and solves a mass-transport problem in this augmented space to generate a color remapping that achieves localized, geometry-aware relighting.
Single image portrait relighting
• Computer Science
ACM Trans. Graph.
• 2019
A neural network is presented that takes as input a single RGB image of a portrait taken with a standard cellphone camera in an unconstrained environment, and from that image produces a relit image of that subject as though it were illuminated according to any provided environment map.
Image Style Transfer Using Convolutional Neural Networks
• Computer Science, Art
2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
• 2016
A Neural Algorithm of Artistic Style is introduced that can separate and recombine the image content and style of natural images and provide new insights into the deep image representations learned by Convolutional Neural Networks and demonstrate their potential for high level image synthesis and manipulation.
Deep video portraits
• Computer Science
ACM Trans. Graph.
• 2018
The first method to transfer the full 3D head position, head rotation, face expression, eye gaze, and eye blinking from a source actor to a portrait video of a target actor using only an input video is presented.
StyleFlow: Attribute-conditioned Exploration of StyleGAN-Generated Images using Conditional Continuous Normalizing Flows
• Computer Science
ACM Trans. Graph.
• 2021
This article presents StyleFlow as a simple, effective, and robust solution to both the sub-problems of attribute-conditioned sampling and attribute-controlled editing by formulating conditional exploration as an instance of conditional continuous normalizing flows in the GAN latent space conditioned by attribute features.