Supervised Hashing with Soft Constraints

@article{Leng2014SupervisedHW,
  title={Supervised Hashing with Soft Constraints},
  author={Cong Leng and Jian Cheng and Jiaxiang Wu and Xi Sheryl Zhang and Hanqing Lu},
  journal={Proceedings of the 23rd ACM International Conference on Information and Knowledge Management},
  year={2014}
}
Due to the ability to preserve semantic similarity in Hamming space, supervised hashing has been extensively studied recently. Most existing approaches encourage two dissimilar samples to have the maximum Hamming distance. This may lead to an unexpected consequence: two samples that are not necessarily similar end up with the same code when both are dissimilar to a third sample. Besides, existing methods treat all labeled pairs with equal importance, without considering the semantic gap…
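For orientation, the pairwise formulation that this abstract critiques can be sketched as follows. This is a generic sketch of the standard objective (in the style of kernel-based supervised hashing, not the paper's soft-constraint variant); the symbols B, S, r, and n are assumed notation.

```latex
% B in {-1,+1}^{n x r} holds the r-bit codes of n samples;
% S_{ij} = +1 for similar pairs, -1 for dissimilar pairs.
% Since d_H(b_i, b_j) = (r - b_i^T b_j) / 2, driving S_{ij} = -1 pairs
% toward b_i^T b_j = -r demands the maximum Hamming distance r, which
% is what can force two samples that are merely dissimilar to a common
% third sample into identical codes.
\min_{B \in \{-1,+1\}^{n \times r}} \Big\| \tfrac{1}{r}\, B B^{\top} - S \Big\|_F^2
```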
Supervised discrete hashing through similarity learning
TLDR
Different from existing supervised hashing methods that learn hash codes via least-squares classification by regressing the hash codes to their corresponding labels, this work leverages the mutual relation between different semantic labels to learn more stable hash codes.
Scalable Supervised Discrete Hashing for Large-Scale Search
TLDR
Scalable Supervised Discrete Hashing is presented, a novel hashing method that bypasses direct optimization over the n × n pairwise similarity matrix and adopts a relaxation-free optimization scheme in the learning procedure, avoiding the large quantization error problem.
Sampling based Discrete Supervised Hashing
By leveraging semantic (label) information, supervised hashing has demonstrated better accuracy than unsupervised hashing in many real applications. Because the hashing-code learning problem is…
Column Sampling Based Discrete Supervised Hashing
TLDR
A novel method, called column sampling based discrete supervised hashing (COSDISH), that directly learns discrete hash codes from semantic information and can outperform state-of-the-art methods in real applications such as image retrieval.
Sequential Compact Code Learning for Unsupervised Image Hashing
  • L. Liu, L. Shao
  • Mathematics, Medicine
  • IEEE Transactions on Neural Networks and Learning Systems
  • 2016
TLDR
A novel unsupervised framework, termed evolutionary compact embedding (ECE), is introduced to automatically learn task-specific binary hash codes; it can be regarded as an optimization algorithm combining genetic programming (GP) with a boosting trick.
Error Correcting Input and Output Hashing
TLDR
This paper proposes a novel framework, error correcting input and output (EC-IO) coding, which performs class-level and instance-level encoding in a unified mapping space, and presents the hashing model EC-IOH, which approximates the mapping space with the Hamming space.
Asymmetric Deep Supervised Hashing
TLDR
This paper proposes a novel deep supervised hashing method, called asymmetric deep supervised hashing (ADSH), for large-scale nearest neighbor search, which treats the query points and database points in an asymmetric way.
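The asymmetry can be made concrete with a rough sketch of such an objective; the notation below (a network F with parameters Θ, free database codes b_j, c-bit codes, similarity S) is assumed rather than quoted from the paper.

```latex
% Query points pass through a deep hash function F(.; Theta), while
% database codes b_j in {-1,+1}^c are learned directly as binary
% variables; S_{ij} in {-1,+1} is the supervised similarity.
\min_{\Theta,\; B} \; \sum_{i} \sum_{j} \Big( \tanh\!\big(F(x_i; \Theta)\big)^{\top} b_j \;-\; c\, S_{ij} \Big)^2
```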
Partial Hash Update via Hamming Subspace Learning
TLDR
A method for Hamming subspace learning based on a greedy selection strategy, together with the Distribution Preserving Hamming Subspace Learning (DHSL) algorithm, which designs a novel loss function to improve both the speed of online updating and the performance of the hashing algorithm.
Learning Short Binary Codes for Large-scale Image Retrieval
TLDR
This paper proposes a novel unsupervised hashing approach called min-cost ranking (MCR) specifically for learning powerful short binary codes for scalable image retrieval tasks; it achieves performance comparable to state-of-the-art hashing algorithms but with significantly shorter codes, leading to much faster large-scale retrieval.

References

Supervised hashing with kernels
TLDR
A novel kernel-based supervised hashing model that requires only a limited amount of supervised information, i.e., similar and dissimilar data pairs, and a feasible training cost to achieve high-quality hashing; it significantly outperforms the state of the art in searching both metric distance neighbors and semantically similar neighbors.
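A minimal sketch of what a kernel-based hash function of this family looks like at prediction time; the anchor set, RBF kernel, and weight matrix W are illustrative assumptions, not the paper's exact training procedure.

```python
import numpy as np

def rbf_kernel(X, anchors, gamma=1.0):
    """RBF kernel between data X (n x d) and a small anchor set (m x d)."""
    sq_dists = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def kernel_hash(X, anchors, W, bias):
    """Each bit k is sign(sum_j W[j, k] * k(x, anchor_j) - bias[k]);
    W (m x r) and bias (r,) would come from supervised training."""
    K = rbf_kernel(X, anchors)          # n x m kernel features
    return np.where(K @ W - bias >= 0, 1, -1)  # n x r codes in {-1,+1}
```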
Learning to Hash with Binary Reconstructive Embeddings
TLDR
An algorithm for learning hash functions based on explicitly minimizing the reconstruction error between the original distances and the Hamming distances of the corresponding binary embeddings is developed.
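The reconstruction idea can be sketched directly, assuming unit-norm inputs so that both distance terms lie in [0, 1]; P denotes the set of supervised pairs and r the code length (notation assumed).

```latex
% Match scaled Hamming distances to the original metric distances
% over the labeled pairs P; h(.) maps an input to an r-bit code.
\min_{h} \sum_{(i,j) \in P} \Big( \tfrac{1}{2}\, \| x_i - x_j \|_2^2 \;-\; \tfrac{1}{r}\, d_H\big(h(x_i),\, h(x_j)\big) \Big)^2
```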
A General Two-Step Approach to Learning-Based Hashing
TLDR
This framework decomposes the hashing learning problem into two steps: hash bit learning, and hash function learning based on the learned bits. The first step can typically be formulated as a binary quadratic problem, and the second step can be accomplished by training standard binary classifiers.
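A minimal sketch of the second step under assumed inputs: given target bits B produced by step one (however they were obtained), fit one off-the-shelf binary classifier per bit; the trained classifiers then serve as the hash functions.

```python
import numpy as np
from sklearn.svm import LinearSVC

def fit_hash_functions(X, B):
    """Step 2 of a two-step scheme: X is n x d data, B is n x r target
    bits in {-1, +1} from step 1; one linear classifier per bit."""
    return [LinearSVC().fit(X, B[:, k]) for k in range(B.shape[1])]

def hash_codes(classifiers, X):
    """Apply the per-bit classifiers to produce r-bit codes in {-1, +1}."""
    return np.stack([clf.predict(X) for clf in classifiers], axis=1)
```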
Comparing apples to oranges: a scalable solution with heterogeneous hashing
TLDR
This paper proposes a novel Relation-aware Heterogeneous Hashing (RaHH), which provides a general framework for generating hash codes of data entities sitting in multiple heterogeneous domains and encodes both homogeneous and heterogeneous relationships between the data entities to design hash functions with improved accuracy.
Distance Metric Learning with Application to Clustering with Side-Information
TLDR
This paper presents an algorithm that, given examples of similar (and, if desired, dissimilar) pairs of points in ℝⁿ, learns a distance metric over ℝⁿ that respects these relationships.
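The learned metric is the familiar Mahalanobis form, and the side-information enters as constraints; a sketch of this convex formulation, with S the similar pairs and D the dissimilar pairs (notation assumed):

```latex
% d_A is a valid (pseudo)metric whenever A is positive semidefinite.
d_A(x, y) = \sqrt{(x - y)^{\top} A\, (x - y)}
% Pull similar pairs together while keeping dissimilar pairs spread out:
\min_{A \succeq 0} \; \sum_{(i,j) \in S} \| x_i - x_j \|_A^2
\quad \text{s.t.} \quad \sum_{(i,j) \in D} \| x_i - x_j \|_A \;\ge\; 1
```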
Iterative quantization: A procrustean approach to learning binary codes
TLDR
A simple and efficient alternating minimization scheme for finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube is proposed.
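The alternating minimization is short enough to sketch in full; a minimal version, assuming V is already zero-centered and PCA-projected (function names are illustrative):

```python
import numpy as np

def itq_rotation(V, n_iter=50, seed=0):
    """Minimal ITQ sketch. V is an n x c matrix of zero-centered,
    PCA-projected data; returns {-1,+1} codes and the learned rotation."""
    rng = np.random.default_rng(seed)
    c = V.shape[1]
    R, _ = np.linalg.qr(rng.standard_normal((c, c)))  # random rotation init
    for _ in range(n_iter):
        B = np.where(V @ R >= 0, 1.0, -1.0)  # fix R: quantize rotated data
        # fix B: orthogonal Procrustes problem, rotate V toward B
        U, _, Wt = np.linalg.svd(V.T @ B)
        R = U @ Wt
    return np.where(V @ R >= 0, 1, -1), R
```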
Minimal Loss Hashing for Compact Binary Codes
TLDR
The formulation is based on structured prediction with latent variables and a hinge-like loss function; it is efficient to train for large datasets, scales well to large code lengths, and outperforms state-of-the-art methods.
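One common way to write such a hinge-like pairwise loss, offered as a sketch rather than the paper's exact definition: with d the Hamming distance of a code pair, s ∈ {0, 1} the similarity label, ρ a distance threshold, and λ a weight on dissimilar pairs (all assumed notation):

```latex
% Similar pairs (s = 1) are penalized once their distance exceeds rho;
% dissimilar pairs (s = 0) once their distance falls below rho.
\ell(d, s) = s \cdot \max(d - \rho + 1,\; 0) \;+\; \lambda\,(1 - s) \cdot \max(\rho - d + 1,\; 0)
```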
Similarity estimation techniques from rounding algorithms
TLDR
It is shown that rounding algorithms for LPs and SDPs used in the context of approximation algorithms can be viewed as locality-sensitive hashing schemes for several interesting collections of objects.
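The best-known instance of this connection is random-hyperplane rounding for cosine similarity: two vectors receive the same sign bit with probability 1 − θ/π, where θ is the angle between them, so the fraction of disagreeing bits estimates the angle. A minimal sketch (function names are illustrative):

```python
import numpy as np

def simhash(X, n_bits=64, seed=0):
    """Random-hyperplane LSH: one sign bit per random hyperplane.
    For vectors u, v: P[bit_k(u) == bit_k(v)] = 1 - angle(u, v) / pi."""
    rng = np.random.default_rng(seed)
    H = rng.standard_normal((X.shape[1], n_bits))  # random hyperplane normals
    return X @ H >= 0                              # n x n_bits boolean codes

def estimate_angle(code_u, code_v):
    """Recover the angle (in radians) from the fraction of differing bits."""
    return np.pi * np.mean(code_u != code_v)
```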