Adversarial Open Domain Adaptation for Sketch-to-Photo Synthesis

@article{Xiang2022AdversarialOD,
  title={Adversarial Open Domain Adaptation for Sketch-to-Photo Synthesis},
  author={Xiaoyu Xiang and Ding Liu and Xiao Yang and Yiheng Zhu and Xiaohui Shen and Jan P. Allebach},
  journal={2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  year={2022},
  pages={944-954}
}
  • Xiaoyu Xiang, Ding Liu, J. Allebach
  • Published 12 April 2021
  • Computer Science
  • 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
In this paper, we explore open-domain sketch-to-photo translation, which aims to synthesize a realistic photo from a freehand sketch with its class label, even if the sketches of that class are missing in the training data. It is challenging due to the lack of training supervision and the large geometric distortion between the freehand sketch and photo domains. To synthesize the absent freehand sketches from photos, we propose a framework that jointly learns sketch-to-photo and photo-to-sketch… 
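The joint learning idea can be made concrete with a short sketch. The following Python (PyTorch-style) pseudocode is only an illustrative reading of one plausible training step under stated assumptions; the module names (photo2sketch, sketch2photo, disc_photo) and the loss weighting are ours, not the paper's implementation.

    import torch.nn.functional as F

    # Illustrative joint photo->sketch / sketch->photo training step.
    # photo2sketch, sketch2photo, disc_photo are assumed callable nn.Module-like
    # networks; their names and the loss weight `lam` are placeholders.
    def joint_step(photo, label, real_sketch, photo2sketch, sketch2photo, disc_photo, lam=10.0):
        # Synthesize a pseudo-sketch, covering classes whose freehand sketches are missing.
        fake_sketch = photo2sketch(photo)

        # Translate a real sketch if one exists for this class, else the pseudo-sketch.
        src_sketch = real_sketch if real_sketch is not None else fake_sketch
        fake_photo = sketch2photo(src_sketch, label)

        # Adversarial term: the synthesized photo should fool the photo discriminator.
        adv_loss = -disc_photo(fake_photo, label).mean()

        # Reconstruction term: photo -> sketch -> photo should return the input photo.
        rec_loss = F.l1_loss(sketch2photo(fake_sketch, label), photo)

        return adv_loss + lam * rec_loss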

Style-Content Disentanglement in Language-Image Pretraining Representations for Zero-Shot Sketch-to-Image Synthesis

In this work, we propose and validate a framework to leverage language-image pretraining representations for training-free zero-shot sketch-to-image synthesis. We show that disentangled content and…

Adversarial Open Domain Adaption Framework (AODA): Sketch-to-Photo Synthesis

  • Amey Thakur, M. Satish
  • Computer Science
    International Journal of Engineering Applied Sciences and Technology
  • 2021
This paper demonstrates the efficiency of the Adversarial Open Domain Adaption (AODA) framework for sketch-to-photo synthesis and offers a simple but effective open-domain sampling and optimization method that “tricks” the generator into treating fake drawings as genuine ones.
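In other words, the "trick" is a sampling rule: with some probability the generator receives synthesized sketches as if they were genuine training sketches, so classes with no real sketches are never starved of inputs. A minimal sketch of such a rule, assuming a fixed mixing probability (the value and function name are ours):

    import random

    def sample_sketch(real_sketch, generated_sketch, p_fake=0.5):
        # Randomly mix real and generated sketches so the sketch-to-photo
        # generator treats synthesized sketches as genuine inputs.
        # p_fake and this function's name are illustrative assumptions.
        if real_sketch is None or random.random() < p_fake:
            return generated_sketch
        return real_sketch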

Unsupervised Scene Sketch to Photo Synthesis

Without the need for sketch and photo pairs, the proposed framework directly learns from readily available large-scale photo datasets in an unsupervised manner and facilitates a controllable manipulation of photo synthesis by editing strokes of corresponding sketches, delivering more fine-grained details than previous approaches that rely on region-level editing.

Paint2Pix: Interactive Painting based Progressive Image Synthesis and Editing

This paper proposes a novel approach paint2pix, which learns to predict (and adapt) “what a user wants to draw” from rudimentary brushstroke inputs, by learning a mapping from the manifold of incomplete human paintings to their realistic renderings.

Intuitively Searching for the Rare Colors from Digital Artwork Collections by Text Description: A Case Demonstration of Japanese Ukiyo-e Print Retrieval

A cross-modal multi-task fine-tuning method based on CLIP (Contrastive Language-Image Pre-Training) is presented, which uses the human sensory characteristics of colors contained in the language space and the geometric characteristics of a given artwork's sketches to obtain better representations of that artwork.

Generative Adversarial Networks

  • Amey Thakur
  • Computer Science
    International Journal for Research in Applied Science and Engineering Technology
  • 2021
The purpose of this research is to familiarize the reader with the GAN framework and to provide background on Generative Adversarial Networks, including the structure of both the generator and the discriminator, as well as the various GAN variants along with their respective architectures.
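For reference, the framework's underlying objective is the standard two-player minimax game between generator G and discriminator D:

    \min_G \max_D V(D, G)
      = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
      + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]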

References

SHOWING 1-10 OF 81 REFERENCES

SketchyCOCO: Image Generation From Freehand Scene Sketches

This work introduces the first method for automatic image generation from scene-level freehand sketches, which allows for controllable image generation by specifying the synthesis goal via freehand sketches, and builds a large-scale composite dataset called SketchyCOCO to support and evaluate the solution.

GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium

This work proposes a two time-scale update rule (TTUR) for training GANs with stochastic gradient descent on arbitrary GAN loss functions and introduces the "Frechet Inception Distance" (FID) which captures the similarity of generated images to real ones better than the Inception Score.
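For reference, FID is the closed-form Fréchet distance between two Gaussians fitted to Inception features of real and generated images. A minimal NumPy/SciPy sketch (the function name is ours):

    import numpy as np
    from scipy import linalg

    def frechet_inception_distance(mu_r, sigma_r, mu_g, sigma_g):
        # mu_*, sigma_*: mean vectors and covariance matrices of Inception
        # activations for real (r) and generated (g) images.
        diff = mu_r - mu_g
        covmean, _ = linalg.sqrtm(sigma_r @ sigma_g, disp=False)
        if np.iscomplexobj(covmean):
            covmean = covmean.real  # drop tiny imaginary parts from numerical error
        return float(diff @ diff + np.trace(sigma_r + sigma_g - 2.0 * covmean))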

Unsupervised Sketch-to-Photo Synthesis

  • arXiv preprint arXiv:1909.08313
  • 2019

Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks

The architecture introduced in this paper learns a mapping function G : X → Y using an adversarial loss such that G(X) cannot be distinguished from Y, where X and Y are images belonging to two different domains.
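Concretely, the adversarial terms are paired with a cycle-consistency loss that forces each translation to be approximately invertible:

    \mathcal{L}_{\mathrm{cyc}}(G, F)
      = \mathbb{E}_{x}\big[\lVert F(G(x)) - x \rVert_1\big]
      + \mathbb{E}_{y}\big[\lVert G(F(y)) - y \rVert_1\big],
    \qquad
    \mathcal{L}(G, F, D_X, D_Y)
      = \mathcal{L}_{\mathrm{GAN}}(G, D_Y) + \mathcal{L}_{\mathrm{GAN}}(F, D_X)
      + \lambda\,\mathcal{L}_{\mathrm{cyc}}(G, F).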

Interactive Sketch & Fill: Multiclass Sketch-to-Image Translation

An interactive GAN-based sketch-to-image translation method that helps novice users easily create images of simple objects and introduces a gating-based approach for class conditioning, which allows for distinct classes without feature mixing, from a single generator network.
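One plausible reading of gating-based class conditioning is a per-class soft mask over generator feature channels, so each class effectively uses its own subset of features. The PyTorch sketch below is an illustrative assumption, not the paper's architecture; the module name and sizes are ours.

    import torch
    import torch.nn as nn

    class ClassGate(nn.Module):
        # Illustrative class-conditional channel gating (assumed design):
        # each class label selects a soft subset of feature channels,
        # which limits feature mixing across classes.
        def __init__(self, num_classes, channels):
            super().__init__()
            self.gate = nn.Embedding(num_classes, channels)

        def forward(self, features, label):          # features: (B, C, H, W), label: (B,)
            g = torch.sigmoid(self.gate(label))      # (B, C) per-class soft gate
            return features * g.unsqueeze(-1).unsqueeze(-1)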

Deep Spatial-Semantic Attention for Fine-Grained Sketch-Based Image Retrieval

A novel deep FG-SBIR model is proposed which differs significantly from the existing models in that it is spatially aware, achieved by introducing an attention module that is sensitive to the spatial position of visual details and combines coarse and fine semantic information via a shortcut connection fusion block.
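A spatially aware attention block of this kind can be sketched as a softmax over spatial positions followed by a shortcut that fuses the coarse features back in; the module below is only an illustrative approximation under our own naming, not the paper's exact design.

    import torch.nn as nn
    import torch.nn.functional as F

    class SpatialAttentionFusion(nn.Module):
        # Illustrative spatial attention with a shortcut fusion (assumed design).
        def __init__(self, channels):
            super().__init__()
            self.score = nn.Conv2d(channels, 1, kernel_size=1)

        def forward(self, feat):                      # feat: (B, C, H, W)
            b, c, h, w = feat.shape
            attn = F.softmax(self.score(feat).view(b, 1, h * w), dim=-1).view(b, 1, h, w)
            attended = feat * attn                    # weight informative spatial positions
            return feat + attended                    # shortcut keeps the coarse features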

A Neural Representation of Sketch Drawings

We present sketch-rnn, a recurrent neural network (RNN) able to construct stroke-based drawings of common objects. The model is trained on thousands of crude human-drawn images representing hundreds of classes.
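sketch-rnn consumes drawings as stroke sequences rather than pixels: in the stroke-5 format each step is (Δx, Δy, p1, p2, p3), where the three binary flags mark pen-down, pen-up, and end-of-drawing. The snippet below only illustrates feeding such a sequence to a recurrent encoder; the real model is a sequence-to-sequence VAE with a mixture-density output layer, and the sizes here are assumptions.

    import torch
    import torch.nn as nn

    # Each drawing step is a stroke-5 vector: (dx, dy, pen_down, pen_up, end_of_sketch).
    encoder = nn.LSTM(input_size=5, hidden_size=256, batch_first=True)

    strokes = torch.zeros(1, 120, 5)        # one dummy drawing of 120 stroke-5 steps
    outputs, (h, c) = encoder(strokes)      # h summarizes the whole drawing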

Sketch Me That Shoe

A deep triplet-ranking model for instance-level SBIR is developed with a novel data augmentation and staged pre-training strategy to alleviate the issue of insufficient fine-grained training data.
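At its core, such a triplet-ranking model optimizes the standard margin loss over embeddings of a sketch anchor s, a matching photo p+, and a non-matching photo p-:

    \mathcal{L}(s, p^{+}, p^{-})
      = \max\!\big(0,\; \Delta + d\big(f(s), f(p^{+})\big) - d\big(f(s), f(p^{-})\big)\big)

where f is the shared embedding network, d a distance in the embedding space, and Δ the margin.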

Vectorization and Rasterization: Self-Supervised Learning for Sketch and Handwriting

This paper proposes two novel cross-modal translation pretext tasks for self-supervised feature learning, Vectorization and Rasterization, and shows that the learned encoder modules benefit both raster-based and vector-based downstream approaches to analysing hand-drawn data.

SceneSketcher: Fine-Grained Image Retrieval with Scene Sketches

This paper proposes a graph embedding based method to learn the similarity measurement between images and scene sketches, which models the multi-modal information, including the size and appearance of objects as well as their layout information, in an effective manner.