Deep Factorised Inverse-Sketching

@article{Pang2018DeepFI,
  title={Deep Factorised Inverse-Sketching},
  author={Kaiyue Pang and Da Li and Jifei Song and Yi-Zhe Song and Tao Xiang and Timothy M. Hospedales},
  journal={ArXiv},
  year={2018},
  volume={abs/1808.02313}
}
Modelling human free-hand sketches has become topical recently, driven by practical applications such as fine-grained sketch-based image retrieval (FG-SBIR). Sketches are clearly related to photo edge-maps, but a human free-hand sketch of a photo is not simply a clean rendering of that photo's edge map. Instead there is a fundamental process of abstraction and iconic rendering, where overall geometry is warped and salient details are selectively included. In this paper we study this sketching…
Unsupervised Sketch to Photo Synthesis
TLDR
An unsupervised sketch-to-photo synthesis model is presented that is sketch-faithful and photo-realistic enough to enable sketch-based image retrieval in practice; it includes a self-supervised denoising objective and an attention module to handle the abstraction and style variations that are inherent and specific to sketches.
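As a rough illustration only, the self-supervised denoising idea mentioned above can be sketched as follows; the corruption model, the denoiser network and all tensor shapes are toy assumptions, not the paper's architecture.

# Hedged sketch of a self-supervised denoising objective on sketch images (assumes PyTorch).
# The corruption model and the denoiser are toy placeholders, not the paper's components.
import torch
import torch.nn as nn

denoiser = nn.Conv2d(1, 1, kernel_size=3, padding=1)   # placeholder denoising network

clean_sketch = torch.randn(4, 1, 64, 64)                             # toy sketch tensors
noisy_sketch = clean_sketch + 0.3 * torch.randn_like(clean_sketch)   # synthetic corruption

# The network learns to strip the synthetic noise, which at test time helps it
# suppress spurious strokes in real free-hand sketches.
loss = nn.functional.mse_loss(denoiser(noisy_sketch), clean_sketch)
loss.backward()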
SketchMan: Learning to Create Professional Sketches
TLDR
This work proposes a new and challenging task, sketch enhancement (SE), defined in an ill-posed space, i.e. enhancing a non-professional sketch (NPS) into a professional sketch (PS); this is a creative generation task distinct from sketch abstraction, sketch completion and sketch variation.
An Unpaired Sketch-to-Photo Translation Model
TLDR
This work shows that the key to this task lies in decomposing the translation into two sub-tasks, shape translation and colorization, and proposes a model consisting of two sub-networks, each tackling one sub-task.
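A schematic of the two-stage decomposition described above; both networks are stand-in stubs for the paper's two sub-networks, and all shapes are toy values.

# Schematic two-stage pipeline: sketch -> grayscale shape -> colorized photo (assumes PyTorch).
import torch
import torch.nn as nn

shape_net = nn.Conv2d(1, 1, kernel_size=3, padding=1)   # stage 1 stub: shape translation
color_net = nn.Conv2d(1, 3, kernel_size=3, padding=1)   # stage 2 stub: colorization

sketch = torch.randn(1, 1, 64, 64)   # toy line-drawing tensor
gray_shape = shape_net(sketch)       # stage 1: translate sketch geometry to object shape
photo = color_net(gray_shape)        # stage 2: add colour and texture
print(photo.shape)                   # torch.Size([1, 3, 64, 64])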
Deep Learning for Free-Hand Sketch: A Survey
  • Peng Xu
  • Computer Science
    IEEE Transactions on Pattern Analysis and Machine Intelligence
  • 2022
TLDR
A comprehensive survey of deep learning techniques oriented toward free-hand sketch data, and of the applications that they enable.
AutoLink: Self-supervised Learning of Human Skeletons and Object Outlines by Linking Keypoints
TLDR
Although simpler, AutoLink outperforms existing self-supervised methods on the established keypoint and pose estimation benchmarks and paves the way for structure-conditioned generative models on more diverse datasets.
Disentangled and controllable sketch creation based on disentangling the structure and color enhancement

References

SHOWING 1-10 OF 50 REFERENCES
Sketch Me That Shoe
TLDR
A deep triplet-ranking model for instance-level SBIR is developed, with a novel data augmentation and staged pre-training strategy to alleviate the issue of insufficient fine-grained training data.
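A minimal sketch of a triplet-ranking objective for instance-level SBIR, assuming PyTorch; the embed network, margin and tensor sizes are placeholder assumptions, not the paper's architecture.

# Triplet-ranking sketch for instance-level SBIR (assumes PyTorch).
import torch
import torch.nn as nn

embed = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))   # placeholder embedding net
triplet_loss = nn.TripletMarginLoss(margin=0.2)

sketch    = torch.randn(8, 3, 64, 64)   # anchor: query sketches
photo_pos = torch.randn(8, 3, 64, 64)   # positive: matching photos
photo_neg = torch.randn(8, 3, 64, 64)   # negative: non-matching photos

# Pull each matching photo closer to its sketch than any non-matching photo.
loss = triplet_loss(embed(sketch), embed(photo_pos), embed(photo_neg))
loss.backward()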
Learning Deep Sketch Abstraction
TLDR
This work proposes the first stroke-level sketch abstraction model, based on the insight that sketch abstraction is a process of trading off between the recognizability of a sketch and the number of strokes used to draw it, and shows that the model can be used for various sketch analysis tasks.
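As a toy illustration of the trade-off described above (not the paper's reinforcement-learning formulation), recognizability can be balanced against stroke count with a simple reward; recognizability stands in for a pretrained sketch classifier's confidence and lam is a made-up weight.

# Toy illustration: keep a sketch recognizable while using as few strokes as possible.
def abstraction_reward(strokes, recognizability, lam=0.05):
    """Higher reward = still recognizable, but drawn with fewer strokes."""
    return recognizability(strokes) - lam * len(strokes)

def abstract_sketch(strokes, recognizability, lam=0.05):
    """Greedy stroke removal: drop strokes while the reward keeps improving."""
    current = list(strokes)
    improved = True
    while improved and len(current) > 1:
        improved = False
        for i in range(len(current)):
            candidate = current[:i] + current[i + 1:]
            if abstraction_reward(candidate, recognizability, lam) > \
                    abstraction_reward(current, recognizability, lam):
                current, improved = candidate, True
                break
    return current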
Fine-Grained Sketch-Based Image Retrieval by Matching Deformable Part Models
  • Li
  • Computer Science
  • 2014
TLDR
This paper learns a deformable part-based model (DPM) as a mid-level representation to discover and encode the various poses in the sketch and image domains independently, after which graph matching is performed on the DPMs to establish pose correspondences across the two domains.
Learning to Sketch with Shortcut Cycle Consistency
TLDR
A novel approach for translating an object photo into a sketch that mimics the human sketching process; the synthetic sketches can be used to train a better fine-grained sketch-based image retrieval model, effectively alleviating the problem of sketch data scarcity.
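A generic cycle-consistency term for photo-to-sketch translation, assuming PyTorch; this is the plain cycle, not the paper's specific shortcut variant, and both generators are placeholder layers.

# Plain cycle-consistency sketch for photo <-> sketch translation (assumes PyTorch).
import torch
import torch.nn as nn

photo2sketch = nn.Linear(256, 256)   # placeholder generator: photo features -> sketch features
sketch2photo = nn.Linear(256, 256)   # placeholder generator: sketch features -> photo features

photo = torch.randn(4, 256)          # toy photo features
fake_sketch = photo2sketch(photo)    # synthesized sketch
photo_back = sketch2photo(fake_sketch)

# The round trip photo -> sketch -> photo should return the input.
cycle_loss = nn.functional.l1_loss(photo_back, photo)
cycle_loss.backward()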
Cross-domain Generative Learning for Fine-Grained Sketch-Based Image Retrieval
TLDR
A novel discriminative-generative hybrid model is proposed by introducing a generative task of cross-domain image synthesis, which enforces the learned embedding space to preserve all the domain-invariant information useful for cross-domain reconstruction, thus explicitly reducing the domain gap as opposed to existing models.
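A hedged sketch of how such a hybrid objective can combine a discriminative retrieval term with a generative cross-domain reconstruction term, assuming PyTorch; the encoder, decoder and feature sizes are placeholders, not the paper's networks.

# Discriminative (triplet) + generative (cross-domain reconstruction) objective sketch.
import torch
import torch.nn as nn

encoder = nn.Linear(512, 64)     # shared embedding for sketch and photo features
decoder = nn.Linear(64, 512)     # decodes a sketch embedding back into the photo domain
triplet = nn.TripletMarginLoss(margin=0.3)

sketch_f  = torch.randn(8, 512)  # toy sketch features
photo_pos = torch.randn(8, 512)  # matching photo features
photo_neg = torch.randn(8, 512)  # non-matching photo features

z_s, z_p, z_n = encoder(sketch_f), encoder(photo_pos), encoder(photo_neg)
loss = triplet(z_s, z_p, z_n) + nn.functional.mse_loss(decoder(z_s), photo_pos)
loss.backward()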
Sketch-a-Net: A Deep Neural Network that Beats Humans
TLDR
It is shown that state-of-the-art deep networks specifically engineered for photos of natural objects fail to perform well on sketch recognition, regardless of whether they are trained using photos or sketches.
Deep Sketch Hashing: Fast Free-Hand Sketch-Based Image Retrieval
TLDR
This paper introduces a novel binary coding method, named Deep Sketch Hashing (DSH), where a semi-heterogeneous deep architecture is proposed and incorporated into an end-to-end binary coding framework; it is the first hashing work specifically designed for category-level SBIR with an end-to-end deep architecture.
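Independent of DSH's specific architecture, the retrieval side of any binary-coding SBIR scheme reduces to Hamming-distance ranking; a small NumPy sketch with made-up codes:

# Hamming-distance retrieval over binary codes (NumPy, toy data).
import numpy as np

rng = np.random.default_rng(0)
photo_codes = rng.integers(0, 2, size=(1000, 64), dtype=np.uint8)   # gallery binary codes
sketch_code = rng.integers(0, 2, size=(64,), dtype=np.uint8)        # query binary code

# Hamming distance = number of differing bits.
distances = np.count_nonzero(photo_codes != sketch_code, axis=1)
top10 = np.argsort(distances)[:10]   # indices of the 10 closest photos
print(top10, distances[top10])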
The Sketchy Database
TLDR
The Sketchy database, the first large-scale collection of sketch-photo pairs, is presented, and it is shown that the learned representation significantly outperforms both hand-crafted features and deep features trained for sketch or photo classification.
Free-Hand Sketch Synthesis with Deformable Stroke Models
TLDR
A generative model is presented which can automatically summarize the stroke composition of free-hand sketches of a given category, representing both the consistent and the diverse aspects of each sketch category.
Sketch-a-Classifier: Sketch-Based Photo Classifier Generation
TLDR
This paper investigates an alternative approach to synthesizing image classifiers: almost directly from a user's imagination, via free-hand sketch. The approach can also be used to enhance the granularity of an existing photo classifier, or as a complement to name-based zero-shot learning.