Deep Semantic Space with Intra-class Low-rank Constraint for Cross-modal Retrieval

@inproceedings{Kang2019DeepSS,
  title={Deep Semantic Space with Intra-class Low-rank Constraint for Cross-modal Retrieval},
  author={Peipei Kang and Zehang Lin and Zhenguo Yang and Xiaozhao Fang and Qing Li and Wenyin Liu},
  booktitle={Proceedings of the 2019 International Conference on Multimedia Retrieval},
  year={2019}
}
In this paper, a novel Deep Semantic Space learning model with Intra-class Low-rank constraint (DSSIL) is proposed for cross-modal retrieval. It is composed of two subnetworks for modality-specific representation learning, followed by projection layers for common-space mapping. [...] More formally, two regularization terms are devised for the two aspects and incorporated into the objective of DSSIL.
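The abstract describes regularization that encourages the common-space features of each class to be low-rank. A minimal sketch of one standard way such a penalty is realized, using the nuclear norm (sum of singular values) as a convex surrogate for matrix rank; the function name and the nuclear-norm formulation are illustrative assumptions, not the paper's exact objective.

```python
import numpy as np

def intra_class_lowrank_penalty(features, labels):
    """Sum, over classes, of the nuclear norm of the matrix formed by
    stacking that class's common-space feature vectors. The nuclear norm
    is a convex surrogate for rank, so minimizing this term pushes each
    intra-class feature block toward low rank (i.e., high coherence)."""
    penalty = 0.0
    for c in np.unique(labels):
        F_c = features[labels == c]                 # (n_c, d) intra-class block
        penalty += np.linalg.norm(F_c, ord="nuc")   # nuclear norm of the block
    return penalty

# Toy example: two classes whose features each lie on a single direction,
# so every intra-class block is exactly rank 1.
rng = np.random.default_rng(0)
base0, base1 = rng.normal(size=4), rng.normal(size=4)
feats = np.vstack([2.0 * base0, 3.0 * base0, 1.0 * base0,
                   1.5 * base1, 2.5 * base1, 0.5 * base1])
labels = np.array([0, 0, 0, 1, 1, 1])
print(intra_class_lowrank_penalty(feats, labels))
```

In a deep model this term would be evaluated on the projected features and added to the classification loss with a trade-off weight; here it is shown standalone on fixed arrays for clarity.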
Intra-class low-rank regularization for supervised and semi-supervised cross-modal retrieval
Two deep models based on intra-class low-rank regularization, ILCMR and Semi-ILCMR, are proposed for supervised and semi-supervised cross-modal retrieval, respectively, demonstrating the superiority of these methods over other state-of-the-art methods.
Modality-specific and shared generative adversarial network for cross-modal retrieval
Experiments on three widely used benchmark multi-modal datasets demonstrate that MS2GAN can outperform state-of-the-art related works.
Learning discriminative hashing codes for cross-modal retrieval based on multi-view features
A discrete hashing learning framework that jointly performs classifier learning and subspace learning is proposed to complete multiple search tasks simultaneously; experiments indicate the superiority of the method compared with state-of-the-art methods.
Cross-Modal Search for Social Networks via Adversarial Learning
This paper adopts self-attention-based neural networks to generate modality-oriented representations for further intermodal correlation learning and proposes a cross-modal search method for social network data that capitalizes on adversarial learning.

References

Showing 1-10 of 36 references
Multi-Networks Joint Learning for Large-Scale Cross-Modal Retrieval
A novel deep framework of multi-networks joint learning for large-scale cross-modal retrieval is proposed that can simultaneously achieve specific features adapted to the cross-modal task and learn a shared latent space for images and sentences.
Supervised Group Sparse Representation via Intra-class Low-Rank Constraint
This paper proposes a novel supervised group sparse representation via intra-class low-rank constraint (GSRILC), which attempts to use the compact projection features in a new subspace for data reconstruction.
Cross-Modal Event Retrieval: A Dataset and a Baseline Using Deep Semantic Learning
Data from different modalities can be mapped into a high-level semantic space, in which the distance between data samples can be measured straightforwardly for cross-modal event retrieval; the proposed DSS outperforms state-of-the-art approaches on both the Pascal Sentence dataset and the Wiki-Flickr event dataset.
Cross-Modal Retrieval via Deep and Bidirectional Representation Learning
A deep and bidirectional representation learning model is proposed to address the issue of image-text cross-modal retrieval; results show that the proposed architecture is effective and that the learned representations have good semantics, achieving superior cross-modal retrieval performance.
Multi-label Cross-Modal Retrieval
Multi-label Canonical Correlation Analysis (ml-CCA), an extension of CCA, is introduced for learning shared subspaces that take into account high-level semantic information in the form of multi-label annotations, which results in a discriminative subspace better suited for cross-modal retrieval tasks.
CCL: Cross-modal Correlation Learning With Multigrained Fusion by Hierarchical Network
This paper proposes a cross-modal correlation learning (CCL) approach with multigrained fusion by a hierarchical network; compared with 13 state-of-the-art methods on 6 widely used cross-modal datasets, the experimental results show that CCL achieves the best performance.
Generative Zero-Shot Learning via Low-Rank Embedded Semantic Dictionary
Two-stage generative adversarial networks are designed to enhance the generalizability of a semantic dictionary through low-rank embedding for zero-shot learning, and could capture a variety of visual characteristics from seen classes that are "ready-to-use" for new classes.
Deep Transfer Low-Rank Coding for Cross-Domain Learning
  • Zhengming Ding, Y. Fu
  • IEEE Transactions on Neural Networks and Learning Systems
  • 2019
A novel deep transfer low-rank coding scheme based on deep convolutional neural networks is proposed, in which multilayer common dictionaries shared across the two domains bridge the domain gap so that more enriched domain-invariant knowledge can be captured in a layerwise fashion.
Learning Shared Semantic Space with Correlation Alignment for Cross-Modal Event Retrieval
S3CA, which aligns nonlinear correlations of multimodal data distributions in deep neural networks designed for heterogeneous data, is shown to be effective, outperforming the state-of-the-art methods.
Cross-modal Retrieval with Correspondence Autoencoder
The problem of cross-modal retrieval, e.g., using a text query to search for images and vice versa, is considered in this paper. A novel model involving a correspondence autoencoder (Corr-AE) is proposed.