LADN: Local Adversarial Disentangling Network for Facial Makeup and De-Makeup

@article{Gu2019LADNLA,
  title={LADN: Local Adversarial Disentangling Network for Facial Makeup and De-Makeup},
  author={Qiao Gu and Guanzhi Wang and Mang Tik Chiu and Yu-Wing Tai and Chi-Keung Tang},
  journal={2019 IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2019},
  pages={10480-10489}
}
We propose a local adversarial disentangling network (LADN) for facial makeup and de-makeup. Existing techniques either do not demonstrate, or fail to transfer, high-frequency details in a global adversarial setting, or train only a single local discriminator to ensure image structure consistency and thus work only for relatively simple styles. Central to LADN are multiple and overlapping local adversarial discriminators in a content-style disentangling network, which achieve local detail transfer between facial images even for dramatic makeup styles with high-frequency details.
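
As a rough illustration of the local-discriminator idea (a hypothetical sketch, not the authors' released architecture), the PyTorch snippet below crops assumed eye and lip regions from generated and reference faces and scores each crop with its own small discriminator; the region boxes, network shapes, and hinge losses are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class LocalDiscriminator(nn.Module):
    """Small PatchGAN-style discriminator scoring one cropped facial region."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, stride=1, padding=1),  # per-patch real/fake logits
        )

    def forward(self, x):
        return self.net(x)

# Hypothetical (y, x, h, w) crops around eyes and lips on a 256x256 aligned face.
REGIONS = {"left_eye": (80, 56, 48, 48),
           "right_eye": (80, 152, 48, 48),
           "lips": (168, 96, 48, 64)}

local_discs = nn.ModuleDict({name: LocalDiscriminator() for name in REGIONS})

def local_d_loss(fake, real):
    """Hinge loss for the local discriminators, summed over all region crops."""
    loss = 0.0
    for name, (y, x, h, w) in REGIONS.items():
        f = local_discs[name](fake[:, :, y:y + h, x:x + w].detach())
        r = local_discs[name](real[:, :, y:y + h, x:x + w])
        loss = loss + torch.relu(1.0 + f).mean() + torch.relu(1.0 - r).mean()
    return loss

def local_g_loss(fake):
    """Generator side: make every local crop look real to its own discriminator."""
    return sum(-local_discs[n](fake[:, :, y:y + h, x:x + w]).mean()
               for n, (y, x, h, w) in REGIONS.items())
```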

Citations

SLGAN: Style- and Latent-guided Generative Adversarial Network for Desirable Makeup Transfer and Removal

This paper provides a novel perceptual makeup loss and a style-invariant decoder that transfer makeup styles based on histogram matching while avoiding the identity-shift problem, and shows that SLGAN is better than or comparable to state-of-the-art methods.
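
Histogram matching, which the makeup loss above builds on, can be illustrated with a generic channel-wise implementation (a sketch, not SLGAN's actual loss or code); in makeup transfer such matching is typically applied per facial region (lips, eyes, skin) between the source and reference faces.

```python
import numpy as np

def match_histograms(source, reference):
    """Remap `source` colors channel by channel so that each channel's empirical
    CDF matches that of `reference`. Inputs are uint8 H x W x 3 region crops."""
    matched = np.empty_like(source)
    for c in range(source.shape[2]):
        src = source[..., c].ravel()
        ref = reference[..., c].ravel()
        # empirical CDFs of the source and reference channels
        _, src_idx, src_counts = np.unique(src, return_inverse=True, return_counts=True)
        ref_vals, ref_counts = np.unique(ref, return_counts=True)
        src_cdf = np.cumsum(src_counts) / src.size
        ref_cdf = np.cumsum(ref_counts) / ref.size
        # map each source quantile to the reference value at the same quantile
        mapped = np.interp(src_cdf, ref_cdf, ref_vals)
        matched[..., c] = mapped[src_idx].reshape(source.shape[:2]).astype(source.dtype)
    return matched
```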

Local Facial Makeup Transfer via Disentangled Representation

This paper proposes a novel unified adversarial disentangling network to further decompose face images into four independent components, i.e., personal identity, lips makeup style, eyes makeup style and face makeup style.

Protecting Facial Privacy: Generating Adversarial Identity Masks via Style-robust Makeup Transfer

A novel face protection method that constructs adversarial face images preserving stronger black-box transferability and better visual quality simultaneously, introducing a new regularization module along with a joint training strategy to reconcile the conflict between the adversarial noise and the cycle-consistency loss in makeup transfer.

Adv-Makeup: A New Imperceptible and Transferable Attack on Face Recognition

A unified adversarial face generation method, Adv-Makeup, which can realize imperceptible and transferable attacks under the black-box setting and implements a fine-grained meta-learning-based adversarial attack strategy to learn more vulnerable or sensitive features from various models.

FaceController: Controllable Attribute Editing for Face in the Wild

This work proposes a simple feed-forward network to generate high-fidelity faces in which one or multiple desired attributes are manipulated while other details are preserved, simply by employing existing and easily obtainable prior information.

Few-Shot Model Adaptation for Customized Facial Landmark Detection, Segmentation, Stylization and Shadow Removal

The FSMA framework is prominent in its versatility across a wide range of facial image applications; it achieves state-of-the-art few-shot landmark detection performance and, for the first time, offers satisfying solutions for few-shot face segmentation, stylization, and facial shadow removal.

EleGANt: Exquisite and Locally Editable GAN for Makeup Transfer

An exquisite and locally editable GAN for makeup transfer (EleGANt) is proposed, which encodes facial attributes into pyramidal feature maps to preserve high-frequency information and introduces a novel Sow-Attention module that applies attention within shifted overlapped windows to reduce the computational cost.
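
As a generic illustration of attention inside overlapping windows (this is not the paper's Sow-Attention module, and the shifting schedule is omitted), a feature map can be unfolded into overlapping patches, plain self-attention run inside each patch, and the results folded back with the overlaps averaged:

```python
import torch
import torch.nn.functional as F

def overlapped_window_attention(x, win=8, stride=4):
    """Self-attention restricted to overlapping win x win windows of a feature map.
    Assumes (H - win) and (W - win) are divisible by `stride`."""
    B, C, H, W = x.shape
    patches = F.unfold(x, kernel_size=win, stride=stride)           # (B, C*win*win, L)
    L = patches.shape[-1]
    patches = patches.view(B, C, win * win, L).permute(0, 3, 2, 1)  # (B, L, win*win, C)
    attn = torch.softmax(patches @ patches.transpose(-1, -2) / C ** 0.5, dim=-1)
    out = (attn @ patches).permute(0, 3, 2, 1).reshape(B, C * win * win, L)
    # fold sums overlapping contributions, so divide by the per-pixel coverage count
    cover = F.fold(torch.ones_like(out), (H, W), kernel_size=win, stride=stride)
    return F.fold(out, (H, W), kernel_size=win, stride=stride) / cover
```

Because each window only attends to win*win positions, the cost grows with the number of windows rather than quadratically with the full spatial size.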

FM2u-Net: Face Morphological Multi-Branch Network for Makeup-Invariant Face Verification

A unified Face Morphological Multi-branch Network (FM2u-Net) is proposed for makeup-invariant face verification, which can simultaneously synthesize many diverse makeup faces through a face morphology network (FM-Net) and effectively learn cosmetics-robust face representations using an attention-based multi-branch learning network (AttM-Net).

A comprehensive survey on semantic facial attribute editing using generative adversarial networks

This paper surveys the recent works and advances in semantic facial attribute editing and covers all related aspects of these models including the related definitions and concepts, architectures, loss functions, datasets, evaluation metrics, and applications.

Detailed Region-Adaptive Normalization for Heavy Makeup Transfer

A novel GAN model is proposed to handle heavy makeup transfer while remaining robust to different poses and expressions, achieving state-of-the-art results on both light and heavy makeup styles.

References

Showing 1-10 of 30 references

BeautyGAN: Instance-level Facial Makeup Transfer with Deep Generative Adversarial Network

A dual-input/output generative adversarial network that learns instance-level translation through unsupervised adversarial learning and can generate visually pleasant makeup faces and accurate transfer results.

PairedCycleGAN: Asymmetric Style Transfer for Applying and Removing Makeup

This paper introduces an automatic method for editing a portrait photo so that the subject appears to be wearing makeup in the style of another person in a reference photo using a new framework of cycle-consistent generative adversarial networks.

Generative Face Completion

This paper demonstrates qualitatively and quantitatively that the proposed effective face completion algorithm is able to deal with a large area of missing pixels in arbitrary shapes and generate realistic face completion results.

Image-to-Image Translation with Conditional Adversarial Networks

Conditional adversarial networks are investigated as a general-purpose solution to image-to-image translation problems and it is demonstrated that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.
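
A minimal sketch of such a conditional adversarial objective (the generator and discriminator definitions are assumed, and the L1 weight is illustrative): the discriminator scores the input image concatenated with either the real or the generated output, and the generator is additionally pulled toward the ground truth with an L1 term.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def d_loss(D, x, y_real, y_fake):
    """Conditional discriminator loss: classify (input, output) pairs as real or fake."""
    real_logits = D(torch.cat([x, y_real], dim=1))
    fake_logits = D(torch.cat([x, y_fake.detach()], dim=1))
    return (bce(real_logits, torch.ones_like(real_logits)) +
            bce(fake_logits, torch.zeros_like(fake_logits)))

def g_loss(D, x, y_real, y_fake, lam=100.0):
    """Generator loss: fool the conditional discriminator and stay close to the target."""
    fake_logits = D(torch.cat([x, y_fake], dim=1))
    return bce(fake_logits, torch.ones_like(fake_logits)) + lam * (y_fake - y_real).abs().mean()
```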

Makeup Like a Superstar: Deep Localized Makeup Transfer Network

A novel Deep Localized Makeup Transfer Network to automatically recommend the most suitable makeup for a female subject and synthesize the makeup on her face, which performs much better than the method of Guo and Sim (2009) and two variants of NeuralStyle.

Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks

This work presents an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples, and introduces a cycle consistency loss to push F(G(X)) ≈ X (and vice versa).
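
The cycle-consistency constraint F(G(X)) ≈ X amounts to a simple reconstruction penalty added to the adversarial losses; the generator interfaces and weight below are assumptions for illustration.

```python
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_consistency_loss(G, F, x, y, lam=10.0):
    """Penalize round-trip reconstruction error: F(G(x)) should recover x,
    and G(F(y)) should recover y, so unpaired translations preserve content."""
    return lam * (l1(F(G(x)), x) + l1(G(F(y)), y))
```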

Precomputed Real-Time Texture Synthesis with Markovian Generative Adversarial Networks

Markovian Generative Adversarial Networks (MGANs) are proposed, a method for training generative networks for efficient texture synthesis that surpasses previous neural texture synthesizers by a significant margin and applies to texture synthesis, style transfer, and video stylization.

Face Behind Makeup

A locality-constrained coupled dictionary learning (LC-CDL) framework is proposed to synthesize the non-makeup face so that the makeup can be erased according to its style, showing very promising makeup-removal performance with regard to structural similarity.

Diverse Image-to-Image Translation via Disentangled Representations

This work presents an approach based on disentangled representation for producing diverse outputs without paired training images, and proposes to embed images onto two spaces: a domain-invariant content space capturing shared information across domains and a domain-specific attribute space.
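
A toy sketch of this content/attribute split (the encoder and decoder modules are placeholders, not the paper's architecture): translation re-renders one image's content with another image's domain-specific attribute code.

```python
import torch.nn as nn

class DisentangledTranslator(nn.Module):
    """Minimal disentangled translation: shared content code + domain-specific attribute code."""
    def __init__(self, content_enc, attr_enc, decoder):
        super().__init__()
        self.E_c = content_enc  # domain-invariant content encoder
        self.E_a = attr_enc     # domain-specific attribute encoder
        self.G = decoder        # decoder conditioned on (content, attribute)

    def translate(self, x_src, x_ref):
        """Render the content of x_src with the attribute (e.g. a makeup style) of x_ref."""
        return self.G(self.E_c(x_src), self.E_a(x_ref))
```

Sampling different attribute codes for the same content code is what yields diverse outputs from a single input.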

Globally and locally consistent image completion

We present a novel approach for image completion that results in images that are both locally and globally consistent. With a fully-convolutional neural network, we can complete images of arbitrary resolutions by filling in missing regions of any shape.