Semantic-Aware Knowledge Preservation for Zero-Shot Sketch-Based Image Retrieval

@article{Liu2019SemanticAwareKP,
  title={Semantic-Aware Knowledge Preservation for Zero-Shot Sketch-Based Image Retrieval},
  author={Qing Liu and Lingxi Xie and Huiyu Wang and Alan Loddon Yuille},
  journal={2019 IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2019},
  pages={3661-3670}
}
  • Qing Liu, Lingxi Xie, Huiyu Wang, Alan L. Yuille
  • Published 2019
  • Computer Science
  • 2019 IEEE/CVF International Conference on Computer Vision (ICCV)
Sketch-based image retrieval (SBIR) is widely recognized as an important vision problem which implies a wide range of real-world applications. [...] Key Method: For this purpose, we design an approach named Semantic-Aware Knowledge prEservation (SAKE), which fine-tunes the pre-trained model in an economical way and leverages semantic information, e.g., inter-class relationship, to achieve the goal of knowledge preservation. Zero-shot experiments on two extended SBIR datasets, TU-Berlin and Sketchy, verify the…
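The knowledge-preservation idea lends itself to a short illustration: treat the original pre-trained network as a frozen teacher and penalize the fine-tuned student for drifting away from it on the source-domain classes, alongside the usual classification loss on the seen SBIR classes. The sketch below (PyTorch) shows only this generic teacher-student distillation form; the semantic weighting by inter-class relationships used in SAKE is not reproduced, and all names are illustrative.

```python
import torch.nn.functional as F

def preservation_loss(student_src_logits, teacher_src_logits, T=4.0):
    # Soften both distributions with temperature T and match them via KL divergence.
    p_teacher = F.softmax(teacher_src_logits / T, dim=1)
    log_p_student = F.log_softmax(student_src_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)

def total_loss(new_logits, labels, student_src_logits, teacher_src_logits, lam=1.0):
    cls = F.cross_entropy(new_logits, labels)                         # fit the seen SBIR classes
    keep = preservation_loss(student_src_logits, teacher_src_logits)  # stay close to the frozen teacher
    return cls + lam * keep
```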
An Efficient Framework for Zero-Shot Sketch-Based Image Retrieval
TLDR: This work proposes a simple and efficient framework that does not require high computational training resources, can be trained on datasets without semantic categorical labels, and uses only a single CNN at both training and inference.
Progressive Domain-Independent Feature Decomposition Network for Zero-Shot Sketch-Based Image Retrieval
TLDR: A Progressive Domain-Independent Feature Decomposition (PDFD) network for ZS-SBIR is proposed: under the supervision of the original semantic knowledge, PDFD decomposes visual features into domain features and semantic features, and the semantic features are then projected into a common space as retrieval features for SBIR.
A Simplified Framework for Zero-shot Cross-Modal Sketch Data Retrieval
TLDR: A multi-stream encoder-decoder model is proposed that simultaneously ensures improved mapping between the RGB and sketch image spaces and high discrimination in the shared semantics-driven encoded feature space, which subsequently reduces the model bias towards the training classes.
CrossATNet - a novel cross-attention based framework for sketch-based image retrieval
TLDR: A novel framework for cross-modal zero-shot learning (ZSL) in the context of sketch-based image retrieval (SBIR) is introduced, and an innovative cross-modal attention learning strategy is proposed to guide feature extraction from the image domain by exploiting information from the respective sketch counterpart.
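As a rough illustration of what a cross-modal attention block can look like, the sketch below (PyTorch, single head) lets sketch features act as queries over image features. It is not CrossATNet's actual architecture; the module name and layer sizes are placeholders.

```python
import torch
import torch.nn as nn

class CrossAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, sketch_feats, image_feats):
        # sketch_feats: (B, Ns, D) queries; image_feats: (B, Ni, D) keys/values
        q, k, v = self.q(sketch_feats), self.k(image_feats), self.v(image_feats)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return attn @ v  # sketch tokens re-expressed in terms of image evidence
```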
Domain-Smoothing Network for Zero-Shot Sketch-Based Image Retrieval
TLDR: A cross-modal contrastive method is proposed to learn generalized representations that smooth the domain gap by mining relations with additional augmented samples in ZS-SBIR, and a category-specific memory bank of sketch features is explored to reduce intra-class diversity in the sketch domain.
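One minimal way to picture the memory-bank idea: keep one running prototype per sketch class, pull each sketch embedding toward its own class slot with a softmax over similarities, and refresh the slots with a momentum average. The sketch below (PyTorch) is purely illustrative of that idea, not the paper's method; the names and update rule are assumptions.

```python
import torch
import torch.nn.functional as F

def memory_bank_loss(sketch_emb, labels, memory_bank, temperature=0.1):
    # sketch_emb: (B, D) L2-normalized features; memory_bank: (C, D) one prototype per class
    logits = sketch_emb @ memory_bank.t() / temperature  # similarity to every class prototype
    return F.cross_entropy(logits, labels)

def update_bank(memory_bank, sketch_emb, labels, momentum=0.9):
    # Momentum update of the prototypes for the classes seen in this batch.
    with torch.no_grad():
        memory_bank[labels] = momentum * memory_bank[labels] + (1 - momentum) * sketch_emb
        memory_bank[labels] = F.normalize(memory_bank[labels], dim=1)
    return memory_bank
```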
Norm-guided Adaptive Visual Embedding for Zero-Shot Sketch-Based Image Retrieval
TLDR: A novel Norm-guided Adaptive Visual Embedding (NAVE) model is proposed that adaptively builds the common space from visual similarity instead of language-based pre-defined prototypes, and experiments demonstrate its superiority over state-of-the-art competitors.
Marginalized Graph Attention Hashing for Zero-Shot Image Retrieval
TLDR: A novel deep zero-shot hashing method, named Marginalized Graph Attention Hashing (MGAH), introduces a masked attention mechanism to construct a joint-semantics similarity graph that captures intrinsic relationships across different metric spaces, making it competent to transfer knowledge from seen classes to unseen classes.
Semantically Tied Paired Cycle Consistency for Any-Shot Sketch-Based Image Retrieval
TLDR: A semantically aligned paired cycle-consistent generative adversarial network (SEM-PCYC) is proposed for any-shot SBIR, in which each branch maps the visual information from the sketch or image domain to a common semantic space via adversarial training.
Semantically Tied Paired Cycle Consistency for Zero-Shot Sketch-Based Image Retrieval
  • A. Dutta, Zeynep Akata
  • Computer Science
  • 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2019
TLDR: A semantically aligned paired cycle-consistent generative (SEM-PCYC) model is proposed for zero-shot SBIR, in which each branch maintains a cycle consistency that requires supervision only at the category level, avoiding the need for costly aligned sketch-image pairs.
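The category-level supervision can be pictured with a small per-branch loss: visual features are encoded into the semantic space, regressed onto the class embedding (e.g., a word vector for the category), and decoded back to enforce cycle consistency, so no aligned sketch-image pair is ever required. The sketch below (PyTorch) is a simplification of that idea; SEM-PCYC's adversarial components are omitted and all names are illustrative.

```python
import torch.nn.functional as F

def pcyc_branch_loss(feat, class_embedding, encoder, decoder, lam=1.0):
    sem = encoder(feat)                          # visual feature -> semantic space
    rec = decoder(sem)                           # semantic space -> visual feature
    sem_loss = F.mse_loss(sem, class_embedding)  # category-level supervision only
    cyc_loss = F.mse_loss(rec, feat)             # cycle consistency
    return sem_loss + lam * cyc_loss
```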
Bi-Directional Domain Translation for Zero-Shot Sketch-Based Image Retrieval
TLDR: A Bi-directional Domain Translation (BDT) framework is proposed for ZS-SBIR, in which the image domain and sketch domain can be translated to each other through disentangled structure and appearance features to facilitate structure-based retrieval.

References

SHOWING 1-10 OF 51 REFERENCES
A Zero-Shot Framework for Sketch-based Image Retrieval
TLDR: Experiments on a new benchmark created from the “Sketchy” dataset demonstrate that the generative models perform significantly better than several state-of-the-art approaches in the proposed zero-shot framework for the coarse-grained SBIR task.
Learning Large Euclidean Margin for Sketch-based Image Retrieval
TLDR: A novel loss function, named Euclidean Margin Softmax (EMS), is proposed that simultaneously minimizes intra-class distances and maximizes inter-class distances, enabling a feature space with high discriminability and highly accurate retrieval.
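A margin softmax over Euclidean distances can be sketched in a few lines: compute squared distances from each feature to learnable class centers, enlarge the true class's distance by a margin, and apply cross-entropy to the negated distances. The snippet below (PyTorch) illustrates that general idea; the exact formulation in the paper may differ.

```python
import torch
import torch.nn.functional as F

def euclidean_margin_softmax(features, centers, labels, margin=1.0):
    # features: (B, D) embeddings, centers: (C, D) learnable class centers, labels: (B,)
    d2 = torch.cdist(features, centers) ** 2                       # squared Euclidean distances
    d2 = d2 + margin * F.one_hot(labels, centers.size(0)).float()  # penalize the true-class distance
    return F.cross_entropy(-d2, labels)                            # softmax over negative distances
```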
Semantically Tied Paired Cycle Consistency for Zero-Shot Sketch-Based Image Retrieval
  • A. Dutta, Zeynep Akata
  • Computer Science
  • 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2019
TLDR: A semantically aligned paired cycle-consistent generative (SEM-PCYC) model is proposed for zero-shot SBIR, in which each branch maintains a cycle consistency that requires supervision only at the category level, avoiding the need for costly aligned sketch-image pairs.
Deep Spatial-Semantic Attention for Fine-Grained Sketch-Based Image Retrieval
TLDR: A novel deep FG-SBIR model is proposed that differs significantly from existing models in being spatially aware: an attention module sensitive to the spatial position of visual details is introduced, and coarse and fine semantic information are combined via a shortcut connection fusion block.
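A generic spatial-attention block with a shortcut fusion is easy to illustrate: score each spatial location of the feature map, reweight the map by the resulting attention, and add the result back to the input. The sketch below (PyTorch) shows only that generic pattern, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)  # per-location attention score

    def forward(self, x):
        # x: (B, C, H, W) convolutional feature map
        attn = torch.sigmoid(self.score(x))  # (B, 1, H, W) spatial attention map
        return x + x * attn                  # attended features fused via shortcut connection
```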
Zero-Shot Sketch-Image Hashing
TLDR: ZSIH is the first zero-shot hashing work suitable for SBIR and cross-modal search; it forms a generative hashing scheme that reconstructs semantic knowledge representations for zero-shot retrieval.
Sketch Me That Shoe
TLDR: A deep triplet-ranking model for instance-level SBIR is developed, with a novel data augmentation and staged pre-training strategy to alleviate the issue of insufficient fine-grained training data.
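The triplet-ranking objective itself is standard: the anchor sketch should lie closer to its matching photo than to any non-matching photo by a margin. A minimal sketch follows (PyTorch); the paper's augmentation and staged pre-training are not shown, and the margin value is an assumption.

```python
import torch.nn as nn

triplet = nn.TripletMarginLoss(margin=0.2)

def ranking_loss(sketch_emb, pos_photo_emb, neg_photo_emb):
    # Each argument: (B, D) embeddings from the sketch and photo branches.
    return triplet(sketch_emb, pos_photo_emb, neg_photo_emb)
```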
Sketch based Image Retrieval using Learned KeyShapes (LKS)
TLDR: This work presents a novel method for describing sketches based on detecting mid-level patterns called learned keyshapes, and shows that good performance can be achieved even when only around 20% of the sketch content is used.
Semantic Autoencoder for Zero-Shot Learning
TLDR: This work presents a novel solution to ZSL based on learning a Semantic Autoencoder (SAE), which significantly outperforms existing ZSL models with the additional benefit of lower computational cost, and also beats the state of the art when applied to the supervised clustering problem.
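The linear semantic autoencoder admits a closed-form solution: with tied encoder/decoder weights, minimizing ||X - WᵀS||² + λ||WX - S||² reduces to a Sylvester equation. A small sketch of that solution follows (NumPy/SciPy); shapes and variable names are illustrative.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def sae_projection(X, S, lam=0.2):
    # X: (d, N) visual features; S: (k, N) class semantic vectors per sample.
    A = S @ S.T                  # (k, k)
    B = lam * (X @ X.T)          # (d, d)
    C = (1.0 + lam) * (S @ X.T)  # (k, d)
    return solve_sylvester(A, B, C)  # W: (k, d), projects visual features into the semantic space
```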
Deep Sketch Hashing: Fast Free-Hand Sketch-Based Image Retrieval
TLDR: This paper introduces a novel binary coding method, named Deep Sketch Hashing (DSH), in which a semi-heterogeneous deep architecture is incorporated into an end-to-end binary coding framework; it is the first hashing work specifically designed for category-level SBIR with an end-to-end deep architecture.
Sketch-based image retrieval via Siamese convolutional neural network
TLDR: A novel Siamese convolutional neural network for SBIR is proposed, which pulls output feature vectors closer for input sketch-image pairs labeled as similar and pushes them apart if irrelevant.
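The pairwise objective behind such Siamese training is the classic contrastive loss: matched sketch-image pairs are pulled together, unmatched pairs are pushed beyond a margin. The sketch below (PyTorch) shows that generic form only, not the paper's exact network.

```python
import torch.nn.functional as F

def contrastive_loss(sketch_emb, image_emb, same_class, margin=1.0):
    # same_class: (B,) float tensor, 1 for matching sketch-image pairs, 0 otherwise
    d = F.pairwise_distance(sketch_emb, image_emb)
    pull = same_class * d.pow(2)                        # draw similar pairs together
    push = (1 - same_class) * F.relu(margin - d).pow(2) # push dissimilar pairs beyond the margin
    return (pull + push).mean()
```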