Multimodal Similarity-Preserving Hashing


We introduce an efficient computational framework for hashing data belonging to multiple modalities into a single representation space where they become mutually comparable. The proposed approach is based on a novel coupled siamese neural network architecture and allows unified treatment of intra- and inter-modality similarity learning. Unlike existing cross-modality similarity learning approaches, our hashing functions are not limited to binarized linear projections and can assume arbitrarily complex forms. We show experimentally that our method significantly outperforms state-of-the-art hashing approaches on multimedia retrieval tasks.
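The coupled siamese construction described above can be sketched in a few lines: two modality-specific nonlinear hashing functions map into a shared binary code space, and a siamese-style loss handles both intra- and inter-modality pairs. The network sizes, loss form, and all names below are illustrative assumptions, not the authors' exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_hash(x, W1, W2):
    # A nonlinear hashing function (tanh MLP) followed by sign binarization,
    # illustrating that the hash need not be a binarized linear projection.
    h = np.tanh(x @ W1)
    return np.sign(np.tanh(h @ W2))

# Hypothetical sizes: 16-D image features, 8-D text features, 4-bit codes.
d_img, d_txt, d_hid, n_bits = 16, 8, 12, 4
Wi1 = rng.standard_normal((d_img, d_hid))
Wi2 = rng.standard_normal((d_hid, n_bits))
Wt1 = rng.standard_normal((d_txt, d_hid))
Wt2 = rng.standard_normal((d_hid, n_bits))

x_img = rng.standard_normal((5, d_img))   # batch of image descriptors
x_txt = rng.standard_normal((5, d_txt))   # batch of text descriptors

# Both modalities land in the same {-1, +1}^4 code space, so they
# become mutually comparable by Hamming distance.
code_img = mlp_hash(x_img, Wi1, Wi2)
code_txt = mlp_hash(x_txt, Wt1, Wt2)

def coupled_loss(a, b, similar, margin=2.0):
    # Siamese-style pairwise loss: pull similar pairs together, push
    # dissimilar pairs at least `margin` apart. For +/-1 codes the squared
    # Euclidean distance is proportional to the Hamming distance.
    d = float(np.sum((a - b) ** 2))
    return d if similar else max(0.0, margin - np.sqrt(d)) ** 2

# Inter-modality term: an image paired with its own text description.
inter = coupled_loss(code_img[0], code_txt[0], similar=True)
# Intra-modality term: two images assumed to be from different classes.
intra = coupled_loss(code_img[0], code_img[1], similar=False)
```

In a full implementation the pair losses would be summed over a training set and the weights trained by backpropagation through the smooth (pre-sign) outputs; the sketch only shows how one coupled architecture scores a pair.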

DOI: 10.1109/TPAMI.2013.225


Cite this paper

@article{Masci2014MultimodalSH,
  title={Multimodal Similarity-Preserving Hashing},
  author={Jonathan Masci and Michael M. Bronstein and Alexander M. Bronstein and J{\"{u}}rgen Schmidhuber},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2014},
  volume={36},
  pages={824-830}
}