FaceShop: Deep Sketch-based Face Image Editing

@article{Portenier2018FaceShopDS,
  title={FaceShop: Deep Sketch-based Face Image Editing},
  author={Tiziano Portenier and Qiyang Hu and Attila Szab{\'o} and Siavash Arjomand Bigdeli and Paolo Favaro and Matthias Zwicker},
  journal={ArXiv},
  year={2018},
  volume={abs/1804.08972}
}
We present a novel system for sketch-based face image editing, enabling users to edit images intuitively by sketching a few strokes on a region of interest. Our system is based on a novel sketch domain and a convolutional neural network trained end-to-end to automatically learn to render image regions corresponding to the input strokes. To achieve high-quality and semantically consistent results, we train our neural network on two simultaneous tasks, namely image completion and image translation…
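The two-task training described in the abstract lends itself to a compact illustration. Below is a minimal PyTorch sketch, assuming a toy encoder-decoder generator, a fixed square hole, and plain L1 losses; the authors' actual architecture, mask sampling, and adversarial terms are not reproduced here.

```python
# Minimal sketch of joint image-completion + image-translation training.
# All module shapes and the masking scheme are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Toy encoder-decoder; stands in for the paper's completion network."""
    def __init__(self):
        super().__init__()
        # input: RGB image with hole (3) + binary mask (1) + sketch (1)
        self.enc = nn.Sequential(
            nn.Conv2d(5, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh())

    def forward(self, image, mask, sketch):
        x = torch.cat([image * (1 - mask), mask, sketch], dim=1)
        return self.dec(self.enc(x))

def training_step(gen, photo, sketch, optimizer):
    # Task 1: completion -- reconstruct a masked region guided by the sketch.
    mask = torch.zeros_like(photo[:, :1])
    mask[:, :, 16:48, 16:48] = 1.0       # fixed square hole, for illustration
    completed = gen(photo, mask, sketch)
    loss_completion = F.l1_loss(completed * mask, photo * mask)

    # Task 2: translation -- the whole image is "missing", so the network
    # must render the entire output from the sketch alone.
    full_mask = torch.ones_like(mask)
    translated = gen(photo, full_mask, sketch)
    loss_translation = F.l1_loss(translated, photo)

    loss = loss_completion + loss_translation  # adversarial terms omitted
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```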

Deep Generation of Face Images from Sketches

This work uses input sketches as soft constraints and is thus able to produce high-quality face images even from rough and/or incomplete sketches, and is easy to use even for non-artists, while still supporting fine-grained control of shape details.

SketchEdit: Mask-Free Local Image Manipulation with Partial Sketches

This work investigates a new paradigm of sketch-based image manipulation: mask-free local image manipulation, which only requires sketch inputs from users and utilizes the entire original image.

DeepFacePencil: Creating Face Images from Freehand Sketches

This work proposes DeepFacePencil, an effective tool for generating photo-realistic face images from hand-drawn sketches. It is based on a novel dual-generator image-translation network that, during training, learns to adaptively handle spatially varying stroke distortions, supporting various stroke styles and different levels of detail.

S2FGAN: Semantically Aware Interactive Sketch-to-Face Translation

This paper proposes S2FGAN, a sketch-to-image generation framework that aims to give users more interpretable and flexible control over face attribute editing from a simple sketch. It builds on a theoretical analysis of attribute editing to construct attribute mapping networks with a latent semantic loss that modifies the latent-space semantics of Generative Adversarial Networks (GANs).
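As a rough illustration of attribute editing in a GAN latent space, the sketch below moves a latent code with a small residual mapping network conditioned on target attributes. The mapper, dimensions, and the stand-in for the latent semantic loss are all illustrative assumptions, not S2FGAN's exact formulation.

```python
# Hedged sketch of latent attribute editing: a mapping network produces a
# residual edit of the latent code for a requested attribute change.
import torch
import torch.nn as nn

latent_dim, n_attrs = 128, 8
mapper = nn.Sequential(nn.Linear(latent_dim + n_attrs, 256), nn.ReLU(),
                       nn.Linear(256, latent_dim))

z = torch.randn(4, latent_dim)              # codes from the GAN's latent space
target = torch.zeros(4, n_attrs)
target[:, 2] = 1.0                          # e.g. request attribute #2

z_edit = z + mapper(torch.cat([z, target], dim=1))  # residual latent edit
# z_edit would be decoded by the frozen GAN generator; training would penalize
# attribute error on the decoded image plus a semantic loss keeping z_edit
# close to z so unrelated attributes stay fixed.
```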

An Unpaired Sketch-to-Photo Translation Model

This work shows that the key to this task lies in decomposing the translation into two subtasks, shape translation and colorization, and proposes a model consisting of two sub-networks, with each one tackling one sub-task.
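The two-subtask decomposition can be shown as a simple pipeline: one network handles geometry, a second adds color. Both networks below are placeholder assumptions standing in for the paper's sub-networks.

```python
# Sketch of the decomposition: shape translation first, then colorization.
import torch
import torch.nn as nn

def conv_net(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, out_ch, 3, padding=1), nn.Tanh())

shape_net = conv_net(1, 1)   # sketch -> grayscale photo (shape translation)
color_net = conv_net(1, 3)   # grayscale -> RGB (colorization)

sketch = torch.rand(1, 1, 128, 128)
gray = shape_net(sketch)     # stage 1: fix the geometry
photo = color_net(gray)      # stage 2: fill in the color
```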

Interactive Sketch & Fill: Class-conditioned Outline-to-Image Translation

An interactive GAN-based sketch-to-image translation method that helps novice users easily create images of simple objects. It introduces a gating-based approach to class conditioning, which supports distinct classes, without feature mixing, from a single generator network.
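One plausible reading of gating-based class conditioning is a per-channel soft gate derived from a class embedding, so one shared generator activates different feature subsets per class. The block below is a hedged sketch with illustrative layer sizes, not the paper's exact design.

```python
# Sketch of class-conditioned gating: the class label yields sigmoid gates
# that select which generator channels are active for that class.
import torch
import torch.nn as nn

class GatedBlock(nn.Module):
    def __init__(self, channels, num_classes):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.gate = nn.Sequential(nn.Embedding(num_classes, channels),
                                  nn.Sigmoid())

    def forward(self, x, class_id):
        g = self.gate(class_id).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        return torch.relu(self.conv(x)) * g                  # gate the features

x = torch.randn(2, 64, 32, 32)
labels = torch.tensor([0, 3])            # two different object classes
block = GatedBlock(channels=64, num_classes=10)
out = block(x, labels)                   # (2, 64, 32, 32)
```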

DeFLOCNet: Deep Image Editing via Flexible Low-level Controls

  • Hongyu Liu, Ziyu Wan, Wei Liu
  • Computer Science
    2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2021
This paper proposes DeFLOCNet, which relies on a deep encoder-decoder CNN to retain the guidance of low-level controls in deep feature representations, effectively transforming different user intentions into visually pleasing content.

DeepFaceEditing: Deep Face Generation and Editing with Disentangled Geometry and Appearance Control

DeepFaceEditing is a structured disentanglement framework specifically designed for face images, supporting face generation and editing with disentangled control of geometry and appearance; it adopts a local-to-global approach to incorporate face domain knowledge.
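The disentangled-control idea can be sketched with two encoders and one decoder: a geometry code from a sketch, an appearance code from a reference photo, and appearance swapping at inference. All module shapes below are assumptions, not DeepFaceEditing's architecture.

```python
# Sketch of disentangled geometry/appearance control: swapping the appearance
# code re-renders the same geometry in a different style.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 4, stride=2, padding=1), nn.ReLU())
    def forward(self, x): return self.net(x)

class Decoder(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(in_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh())
    def forward(self, x): return self.net(x)

geo_enc, app_enc = Encoder(1, 64), Encoder(3, 64)
decoder = Decoder(128)

sketch = torch.randn(1, 1, 64, 64)      # geometry source
photo_a = torch.randn(1, 3, 64, 64)     # appearance source A
photo_b = torch.randn(1, 3, 64, 64)     # appearance source B

geo = geo_enc(sketch)
out_a = decoder(torch.cat([geo, app_enc(photo_a)], dim=1))
out_b = decoder(torch.cat([geo, app_enc(photo_b)], dim=1))  # same shape, new look
```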

Fashion Editing with Multi-scale Attention Normalization

A novel Fashion Editing Generative Adversarial Network (FE-GAN) that manipulates fashion images using free-form sketches and sparse color strokes, significantly outperforming state-of-the-art methods on image manipulation.

SC-FEGAN: Face Editing Generative Adversarial Network With User’s Sketch and Color

This work trains the network with an additional style loss, which makes it possible to generate realistic results even when large portions of the image are removed; the system is well suited to producing high-quality synthetic images from intuitive user inputs.
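A style loss of this kind is conventionally computed as a Gram-matrix match over pretrained VGG features. The sketch below shows that standard formulation; the specific layers and weights SC-FEGAN uses are not reproduced here, and the layer choice is an assumption.

```python
# Sketch of a Gram-matrix style loss over frozen VGG-16 features.
# Inputs are assumed to be ImageNet-normalized RGB batches (B, 3, H, W).
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

features = vgg16(weights="IMAGENET1K_V1").features.eval()
for p in features.parameters():
    p.requires_grad_(False)
LAYERS = {3, 8, 15}   # relu1_2, relu2_2, relu3_3 (illustrative choice)

def gram(x):
    b, c, h, w = x.shape
    f = x.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)   # (B, C, C), normalized

def style_loss(output, target):
    loss, xo, xt = 0.0, output, target
    for i, layer in enumerate(features):
        xo, xt = layer(xo), layer(xt)
        if i in LAYERS:
            loss = loss + F.mse_loss(gram(xo), gram(xt))
        if i == max(LAYERS):
            break
    return loss
```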
...

References

Showing 1–10 of 39 references

Scribbler: Controlling Deep Image Synthesis with Sketch and Color

This work proposes a deep adversarial image synthesis architecture conditioned on sketched boundaries and sparse color strokes to generate realistic cars, bedrooms, or faces, and demonstrates a sketch-based image synthesis system that lets users scribble over the sketch to indicate preferred colors for objects.

Visual attribute transfer through deep image analogy

The technique finds semantically meaningful dense correspondences between two input images by adapting the notion of "image analogy", with features extracted from a Deep Convolutional Neural Network used for matching, and is called deep image analogy.

Real-time user-guided image colorization with learned deep priors

We propose a deep learning approach for user-guided image colorization. The system directly maps a grayscale image, along with sparse, local user "hints", to an output colorization with a Convolutional Neural Network (CNN).
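The input construction for such hint-based colorization is simple to illustrate: the grayscale channel is stacked with a sparse hint image and a mask marking where hints exist. The helper below is a hedged sketch; the tensor layout and the downstream network are assumptions.

```python
# Sketch of building the network input for hint-guided colorization.
import torch

def make_input(gray, hint_points):
    """gray: (1, H, W) luminance; hint_points: list of (y, x, a, b) user hints."""
    _, h, w = gray.shape
    hints = torch.zeros(2, h, w)    # sparse ab color values
    mask = torch.zeros(1, h, w)     # 1 where the user gave a hint
    for y, x, a, b in hint_points:
        hints[:, y, x] = torch.tensor([a, b])
        mask[0, y, x] = 1.0
    return torch.cat([gray, hints, mask], dim=0)  # (4, H, W) network input

x = make_input(torch.rand(1, 64, 64),
               [(10, 12, 0.3, -0.2), (40, 50, -0.1, 0.4)])
# x would then be fed to a CNN that predicts the full ab channels.
```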

Automatic Photo Adjustment Using Deep Neural Networks

This article formulates automatic photo adjustment in a manner suitable for deep neural networks and introduces an image descriptor that accounts for the local semantics of an image, enabling the model to learn local adjustments that depend on image semantics.

Painting style transfer for head portraits using convolutional neural networks

This work presents a new technique for transferring a painting style from one head portrait onto another; it imposes novel spatial constraints by locally transferring the color distributions of the example painting, which better captures the painting texture and maintains the integrity of facial structures.

A Neural Algorithm of Artistic Style

This work introduces an artificial system based on a Deep Neural Network that creates artistic images of high perceptual quality and offers a path forward to an algorithmic understanding of how humans create and perceive artistic imagery.

Fragment-based image completion

A new method for completing missing parts caused by the removal of foreground or background elements from an image; it iteratively approximates the unknown regions and composites adaptive image fragments into the image to synthesize a complete, visually plausible, and coherent result.

Deep bilateral learning for real-time image enhancement

This work introduces a new neural network architecture, inspired by bilateral grid processing and local affine color transforms, that processes high-resolution images on a smartphone in milliseconds, provides a real-time viewfinder at 1080p resolution, and matches the quality of state-of-the-art approximation techniques on a large class of image operators.
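The local affine color transform idea admits a compact sketch: a coarse grid of per-region 3x4 affine matrices is upsampled and applied per pixel to the full-resolution input. The real method also slices the grid along a learned guidance axis; that dimension is omitted in this simplified, assumed version.

```python
# Simplified sketch of applying a coarse grid of local affine color transforms.
import torch
import torch.nn.functional as F

def apply_local_affine(image, grid):
    """image: (B, 3, H, W); grid: (B, 12, gh, gw) coarse affine coefficients."""
    b, _, h, w = image.shape
    coeffs = F.interpolate(grid, size=(h, w), mode="bilinear",
                           align_corners=False)
    A = coeffs.reshape(b, 3, 4, h, w)             # per-pixel 3x4 matrix
    ones = torch.ones(b, 1, h, w)
    x = torch.cat([image, ones], dim=1)           # homogeneous color (B, 4, H, W)
    return torch.einsum("bijhw,bjhw->bihw", A, x) # affine transform per pixel

out = apply_local_affine(torch.rand(1, 3, 256, 256),
                         torch.randn(1, 12, 16, 16))
```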

Let there be Color!: Joint End-to-end Learning of Global and Local Image Priors for Automatic Image Colorization with Simultaneous Classification

A novel technique to automatically colorize grayscale images that combines both global priors and local image features and can process images of any resolution, unlike most existing CNN-based approaches.

Photographic Image Synthesis with Cascaded Refinement Networks

  • Qifeng Chen, V. Koltun
  • Computer Science
    2017 IEEE International Conference on Computer Vision (ICCV)
  • 2017
It is shown that photographic images can be synthesized from semantic layouts by a single feedforward network with appropriate structure, trained end-to-end with a direct regression objective.
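The cascaded refinement idea can be sketched as a chain of modules at increasing resolutions, each receiving the downsampled semantic layout plus the upsampled features of the previous module, trained with a direct regression loss. Channel counts and resolutions below are illustrative assumptions.

```python
# Sketch of a cascaded refinement network for layout-to-image synthesis.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RefineModule(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.LeakyReLU(0.2))
    def forward(self, x): return self.net(x)

class CRN(nn.Module):
    def __init__(self, label_ch, widths=(64, 32, 16), base=8):
        super().__init__()
        self.base = base
        prev = [0] + list(widths[:-1])          # previous-stage channel counts
        self.mods = nn.ModuleList(
            RefineModule(label_ch + c, w) for c, w in zip(prev, widths))
        self.to_rgb = nn.Conv2d(widths[-1], 3, 1)

    def forward(self, labels):
        feat = None
        for i, mod in enumerate(self.mods):
            size = self.base * 2 ** i
            lab = F.interpolate(labels, size=(size, size), mode="nearest")
            if feat is not None:                # upsample and pass features on
                feat = F.interpolate(feat, scale_factor=2, mode="bilinear",
                                     align_corners=False)
                lab = torch.cat([lab, feat], dim=1)
            feat = mod(lab)
        return torch.tanh(self.to_rgb(feat))

img = CRN(label_ch=20)(torch.rand(1, 20, 32, 32))   # (1, 3, 32, 32)
```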