A Two-step Approach to Cross-modal Hashing

Abstract

With the rapid growth of multimedia data, it is highly desirable to search objects of interest across different modalities, both effectively and efficiently, in large-scale databases. Cross-modal hashing provides a promising way to address this problem. In this paper, we propose a two-step cross-modal hashing approach that obtains compact hash codes and learns hash functions from multimodal data. Our approach decomposes the cross-modal hashing problem into two steps: hash code generation and hash function learning. In the first step, we obtain the hash codes for all modalities of the data via a joint multi-modal graph, which takes into consideration both intra-modality and inter-modality similarity. In the second step, hash function learning is formulated as a binary classification problem: we train binary classifiers to predict the hash code of any previously unseen data object. Experimental results on two cross-modal datasets show the effectiveness of our proposed approach.
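The two steps described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it assumes paired two-modality data, uses an RBF kernel for intra-modality similarity and identity links between paired objects for inter-modality similarity, thresholds graph-Laplacian eigenvectors to produce the codes (a common spectral relaxation), and trains a per-bit logistic-regression classifier by plain gradient descent as the hash function. All data, dimensions, and function names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy paired two-modality data: n objects with image (d1) and text (d2) features.
n, d1, d2, bits = 40, 8, 6, 4
X_img = rng.normal(size=(n, d1))
X_txt = rng.normal(size=(n, d2))

def rbf_sim(X):
    """Intra-modality similarity via an RBF kernel (an assumed choice)."""
    sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)
    return np.exp(-sq / sq.mean())

# Step 1: joint multi-modal graph combining intra- and inter-modality similarity.
# Inter-modality edges simply link each object's two modality copies (paired data).
S_img, S_txt = rbf_sim(X_img), rbf_sim(X_txt)
I = np.eye(n)
W = np.block([[S_img, I], [I, S_txt]])        # (2n x 2n) joint graph

# Spectral relaxation: threshold Laplacian eigenvectors to obtain binary codes.
L = np.diag(W.sum(1)) - W
vals, vecs = np.linalg.eigh(L)
Y = vecs[:, 1:bits + 1]                        # skip the trivial constant eigenvector
codes = (Y > 0).astype(int)                    # one code per modality copy
img_codes, txt_codes = codes[:n], codes[n:]

# Step 2: learn hash functions as per-bit binary classifiers
# (logistic regression via gradient descent; any binary classifier would do).
def train_bit(X, y, steps=500, lr=0.5):
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict_codes(X, ws):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.stack([(Xb @ w > 0).astype(int) for w in ws], axis=1)

img_ws = [train_bit(X_img, img_codes[:, b]) for b in range(bits)]
pred = predict_codes(X_img, img_ws)
print("image-modality bit accuracy on training data:",
      round((pred == img_codes).mean(), 2))
```

At query time, an unseen object in either modality is hashed by its modality's classifiers, and retrieval across modalities reduces to Hamming-distance search over the shared code space.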

DOI: 10.1145/2671188.2749297


Cite this paper

@inproceedings{Wang2015ATA,
  title     = {A Two-step Approach to Cross-modal Hashing},
  author    = {Kaiye Wang and Wei Wang and Liang Wang and Ran He},
  booktitle = {ICMR},
  year      = {2015}
}