LiveSketch: Query Perturbations for Guided Sketch-Based Visual Search

@article{Collomosse2019LiveSketchQP,
  title={LiveSketch: Query Perturbations for Guided Sketch-Based Visual Search},
  author={John P. Collomosse and Tu Bui and Hailin Jin},
  journal={2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2019},
  pages={2874-2882}
}
  • J. Collomosse, Tu Bui, Hailin Jin
  • Published 14 April 2019
  • Computer Science
  • 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
LiveSketch is a novel algorithm for searching large image collections using hand-sketched queries. LiveSketch tackles the inherent ambiguity of sketch search by creating visual suggestions that augment the query as it is drawn, making query specification an iterative rather than one-shot process that helps disambiguate users' search intent. Our technical contributions are: a triplet convnet architecture that incorporates an RNN based variational autoencoder to search for images using vector… 
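The abstract's core idea, iteratively perturbing the query towards likely search intents in a joint sketch/image embedding, can be sketched as follows. This is a minimal, hypothetical illustration in PyTorch, not the authors' implementation: perturb_query, the plain L2 pull towards a cluster centre, and the stand-in decoder are all assumptions.

import torch

def perturb_query(z_query, z_target, sketch_decoder, steps=10, lr=0.1):
    # Nudge a latent sketch code towards a target embedding (e.g. the centroid
    # of a cluster of likely search intents); the decoded result can be shown
    # to the user as a visual suggestion that augments the drawn query.
    z = z_query.clone().detach().requires_grad_(True)
    opt = torch.optim.SGD([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.norm(z - z_target)   # pull the query latent towards the intent
        loss.backward()
        opt.step()
    return sketch_decoder(z.detach())     # decoded strokes = suggested perturbation

# Stand-in usage: random latents and an identity "decoder".
z_q, z_t = torch.randn(1, 256), torch.randn(1, 256)
suggestion = perturb_query(z_q, z_t, sketch_decoder=lambda z: z)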

Sketch Less for More: On-the-Fly Fine-Grained Sketch-Based Image Retrieval

TLDR
A reinforcement learning based cross-modal retrieval framework directly optimizes the rank of the ground-truth photo over a complete sketch drawing episode, with a novel reward scheme that circumvents problems caused by irrelevant sketch strokes and thus yields a more consistent rank list during retrieval.
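A hypothetical illustration (not the authors' code) of a rank-based reward over a drawing episode: after each new stroke the partial sketch is embedded, the gallery is ranked by distance, and the reward is the reciprocal rank of the ground-truth photo, so retrieving it early is encouraged. PyTorch and the function name episode_reward are assumptions.

import torch

def episode_reward(partial_sketch_embs, photo_embs, gt_index):
    # partial_sketch_embs: (T, D) embeddings of the sketch after strokes 1..T
    # photo_embs: (N, D) gallery embeddings; gt_index: index of the true photo
    rewards = []
    for z in partial_sketch_embs:
        dists = torch.cdist(z.unsqueeze(0), photo_embs).squeeze(0)  # (N,)
        rank = (dists < dists[gt_index]).sum().item() + 1           # 1-based rank
        rewards.append(1.0 / rank)                                  # reciprocal rank
    return sum(rewards)

# Example with random embeddings: 5 stroke steps, 100 gallery photos, dim 64.
r = episode_reward(torch.randn(5, 64), torch.randn(100, 64), gt_index=3)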

Sketching without Worrying: Noise-Tolerant Sketch-Based Image Retrieval

TLDR
A stroke-subset selector detects noisy strokes, leaving only those that make a positive contribution towards successful retrieval, and can be used in a plug-and-play manner to empower various sketch applications in ways that were not previously possible.

Deep Reinforced Attention Regression for Partial Sketch Based Image Retrieval

TLDR
This work proposes a framework built around a deep reinforcement learning model that performs dual-level exploration to handle partial-sketch training and attention-region selection, achieving state-of-the-art performance on partial sketch-based image retrieval.

Cross-Modal Hierarchical Modelling for Fine-Grained Sketch Based Image Retrieval

TLDR
A novel network cultivates sketch-specific hierarchies and exploits them to match sketches with photos at corresponding hierarchical levels, enriched by cross-modal co-attention and hierarchical node fusion at every level to form a better embedding space for retrieval.

A Sketch Is Worth a Thousand Words: Image Retrieval with Text and Sketch

TLDR
It is empirically demonstrated that using an input sketch (even a poorly drawn one) in addition to text considerably increases retrieval recall compared to traditional text-based image retrieval.

Sketch3T: Test-Time Training for Zero-Shot SBIR

TLDR
This paper extends ZS-SBIR with a test-time training paradigm that can adapt using just one sketch, and designs a novel meta-learning based training scheme that separates model updates incurred by this auxiliary task from those of the primary objective of discriminative learning.

Adaptive Fine-Grained Sketch-Based Image Retrieval

TLDR
A novel model-agnostic meta-learning (MAML) based framework with several key modifications is introduced to adapt a trained FG-SBIR model to both new categories and different human sketchers, i.e., different drawing styles.

Sketchformer: Transformer-Based Representation for Sketched Structure

TLDR
Sketch reconstruction and interpolation are shown to improve significantly with the Sketchformer embedding for complex sketches with longer stroke sequences, compared against baseline representations driven by LSTM sequence-to-sequence architectures (SketchRNN and derivatives).
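A toy illustration of the representational choice described here: encoding a stroke sequence with a Transformer encoder rather than an LSTM sequence-to-sequence model. PyTorch is assumed; the dimensions, the stroke-5 projection, and mean pooling are invented for the example, not Sketchformer's actual configuration.

import torch
import torch.nn as nn

d_model = 128
proj = nn.Linear(5, d_model)  # stroke-5 tokens: (dx, dy, pen-down, pen-up, end)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True),
    num_layers=2)

strokes = torch.randn(1, 200, 5)                 # one sketch, 200 stroke tokens
embedding = encoder(proj(strokes)).mean(dim=1)   # (1, d_model) sketch embedding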

References

Showing 1–10 of 41 references

WhittleSearch: Image search with relative attribute feedback

TLDR
A novel mode of feedback for image search in which a user describes which properties of exemplar images should be adjusted to more closely match their mental model of the image(s) sought; it outperforms traditional binary relevance feedback in terms of search speed and accuracy.
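A minimal stand-in for the relative-attribute feedback loop described above, assuming precomputed attribute scores; the attribute name and indexing are hypothetical. Each comparative statement ("more formal than exemplar i") becomes a filter on the gallery, and intersecting such filters whittles down the candidate set.

import torch

attr_scores = torch.rand(100)   # predicted "formality" for 100 gallery images
ref = attr_scores[17]           # user feedback: "more formal than exemplar 17"
candidates = (attr_scores > ref).nonzero().squeeze(1)   # images satisfying the constraint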

Attribute Pivots for Guiding Relevance Feedback in Image Search

TLDR
This work proposes to actively select "pivot" exemplars for which feedback in the form of a visual comparison will most reduce the system's uncertainty, and relies on a series of binary search trees in relative attribute space together with a selection function that predicts the information gain.

Sketch Me That Shoe

TLDR
A deep triplet ranking model for instance-level SBIR is developed with a novel data augmentation and staged pre-training strategy to alleviate the issue of insufficient fine-grained training data.
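The triplet ranking objective referenced here can be illustrated with a minimal PyTorch step; the toy encoders, input sizes, and margin are assumptions, not the paper's architecture. A sketch anchor is pulled towards its matching photo and pushed away from a non-matching one by a margin.

import torch
import torch.nn as nn

sketch_net = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128))   # toy sketch branch
photo_net = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128))    # toy photo branch
criterion = nn.TripletMarginLoss(margin=0.2)

sketch = torch.randn(8, 1, 64, 64)      # anchors: sketches
photo_pos = torch.randn(8, 1, 64, 64)   # positives: matching photos
photo_neg = torch.randn(8, 1, 64, 64)   # negatives: other photos
loss = criterion(sketch_net(sketch), photo_net(photo_pos), photo_net(photo_neg))
loss.backward()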

Sketching with Style: Visual Search with Sketches and Aesthetic Context

TLDR
A triplet network is used to learn a feature embedding capable of measuring style similarity independent of structure, delivering significant gains over previous networks for style discrimination.

Interactive video asset retrieval using sketched queries

TLDR
A new algorithm for searching video repositories using free-hand sketches is presented, creating an efficiently searchable index via a novel space-time descriptor that encapsulates all appearance and motion attributes and semantic properties.

Query Adaptive Instance Search using Object Sketches

TLDR
The proposed query-adaptive sketch-based object search is formulated as a sub-graph selection problem that can be solved by a maximum-flow algorithm, and it can accurately locate small target objects in cluttered backgrounds or densely drawn, deformation-intensive cartoon images.
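As a tiny stand-in for the solver this formulation reduces to, the snippet below runs a maximum-flow computation with networkx on an arbitrary toy graph; the actual construction of the sub-graph selection problem from sketch and image regions is not reproduced here.

import networkx as nx

G = nx.DiGraph()
G.add_edge("s", "a", capacity=3.0)
G.add_edge("s", "b", capacity=1.0)
G.add_edge("a", "t", capacity=2.0)
G.add_edge("b", "t", capacity=3.0)
flow_value, flow_dict = nx.maximum_flow(G, "s", "t")   # value 3.0 on this toy graph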

Generalisation and Sharing in Triplet Convnets for Sketch based Visual Search

TLDR
This work proposes and evaluates several triplet CNN architectures for measuring the similarity between sketches and photographs, within the context of the sketch based image retrieval (SBIR) task, and studies the ability of these networks to generalise across diverse object categories from limited training data.

Sketch-based 3D shape retrieval using Convolutional Neural Networks

  • Fang Wang, Le Kang, Yi Li
  • Computer Science
  • 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2015
TLDR
This work drastically reduces the number of views to only two predefined directions for the whole dataset and learns two Siamese Convolutional Neural Networks, one for the views and one for the sketches; the approach is significantly better than state-of-the-art methods and outperforms them on all conventional metrics.

Sketch-based image retrieval via Siamese convolutional neural network

TLDR
A novel Siamese convolutional neural network for SBIR is proposed that pulls output feature vectors closer for input sketch–image pairs labeled as similar and pushes them apart if irrelevant.
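A minimal contrastive-loss sketch of the Siamese objective described above: similar sketch–image pairs (label 1) are pulled together and dissimilar pairs (label 0) are pushed beyond a margin. PyTorch is assumed, and the margin, dimensions, and function name are illustrative.

import torch

def contrastive_loss(z_sketch, z_image, label, margin=1.0):
    d = torch.nn.functional.pairwise_distance(z_sketch, z_image)
    return (label * d.pow(2) +
            (1 - label) * torch.clamp(margin - d, min=0).pow(2)).mean()

# Example: a batch of four embedding pairs, the first two labelled similar.
loss = contrastive_loss(torch.randn(4, 128), torch.randn(4, 128),
                        torch.tensor([1., 1., 0., 0.]))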