Regularised Cross-Modal Hashing


In this paper we propose Regularised Cross-Modal Hashing (RCMH), a new cross-modal hashing model that projects annotation and visual feature descriptors into a common Hamming space. RCMH optimises the hashcode similarity of related data-points in the annotation modality using an iterative three-step hashing algorithm: in the first step, each training image is assigned a K-bit hashcode based on the hyperplanes learnt at the previous iteration; in the second step, the binary bits are smoothed by a formulation of graph regularisation so that similar data-points have similar bits; in the third step, a set of binary classifiers is trained to predict the regularised bits with maximum margin. Visual descriptors are projected into the annotation Hamming space by a further set of binary classifiers, learnt using the bits of the corresponding annotations as labels. RCMH is shown to consistently improve retrieval effectiveness over state-of-the-art baselines.
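The three-step loop in the abstract can be sketched in a few lines of Python. This is an illustrative reconstruction, not the paper's exact formulation: the regularisation update (a convex combination of each point's bits with its neighbours' average bits, followed by re-binarisation), the mixing weight `alpha`, and the use of a linear SVM per bit are all assumptions standing in for the paper's specific objective.

```python
import numpy as np
from sklearn.svm import LinearSVC


def rcmh_sketch(X, S, K=16, iterations=3, alpha=0.5):
    """Hedged sketch of the iterative three-step hashing loop.

    X : (n, d) annotation feature matrix
    S : (n, n) binary adjacency matrix of related data-points
    K : number of hash bits

    `alpha` and the exact update rules are illustrative assumptions,
    not the RCMH paper's formulation.
    """
    n, d = X.shape
    rng = np.random.default_rng(0)
    W = rng.normal(size=(d, K))                  # initial random hyperplanes
    deg = np.maximum(S.sum(axis=1, keepdims=True), 1)  # neighbour counts

    for _ in range(iterations):
        # Step 1: assign K-bit hashcodes from the current hyperplanes.
        B = np.sign(X @ W)
        B[B == 0] = 1

        # Step 2: graph regularisation -- pull each point's bits towards
        # the average bits of its neighbours, then re-binarise, so that
        # similar data-points end up with similar bits.
        B = np.sign(alpha * B + (1 - alpha) * (S @ B) / deg)
        B[B == 0] = 1

        # Step 3: re-learn one max-margin hyperplane per bit by training
        # a binary classifier to predict the regularised bits.
        for k in range(K):
            if np.unique(B[:, k]).size == 2:     # need both classes to fit
                clf = LinearSVC(C=1.0, max_iter=2000).fit(X, B[:, k])
                W[:, k] = clf.coef_.ravel()

    return W, B
```

Projecting the visual modality would then follow the same pattern as step 3: train one binary classifier per bit on visual descriptors, using the bits of the corresponding annotations as labels.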

DOI: 10.1145/2766462.2767816

Cite this paper

@inproceedings{Moran2015RegularisedCH,
  title     = {Regularised Cross-Modal Hashing},
  author    = {Sean Moran and Victor Lavrenko},
  booktitle = {SIGIR},
  year      = {2015}
}