Semi-supervised hash learning method with consistency-based dimensionality reduction

@article{Lv2019SemisupervisedHL,
  title={Semi-supervised hash learning method with consistency-based dimensionality reduction},
  author={Fang Lv and Yuliang Wei and Xixian Han and Bailing Wang},
  journal={Advances in Mechanical Engineering},
  year={2019},
  volume={11}
}
With the explosive growth of surveillance data, exact match queries become much more difficult because of the data's high dimensionality and high volume. Owing to its good balance between retrieval performance and computational cost, hash learning is widely used to solve approximate nearest neighbor search problems. Dimensionality reduction plays a critical role in hash learning, as its goal is to preserve as much of the original information as possible in low-dimensional vectors. However, the existing…
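
As a rough illustration of the pipeline the abstract describes (dimensionality reduction followed by binarization, then approximate nearest neighbor search in Hamming space), the sketch below uses plain PCA projections and a sign threshold. The PCA choice, function names, and parameters are illustrative assumptions for context; this is not the consistency-based method proposed in the paper.

```python
import numpy as np

def pca_hash(X, n_bits=32):
    """Toy PCA-based hashing (illustrative, not the paper's method):
    project data onto the top principal directions and binarize with sign.

    X: (n_samples, n_features) data matrix.
    Returns binary codes plus the projection and mean used to encode queries.
    """
    mean = X.mean(axis=0)
    Xc = X - mean
    # Top principal directions serve as the dimensionality-reduction step.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:n_bits].T                        # (n_features, n_bits)
    codes = (Xc @ W >= 0).astype(np.uint8)   # binarize to 0/1 bits
    return codes, W, mean

def hamming_search(query, codes, W, mean, k=5):
    """Approximate nearest neighbors by Hamming distance on the binary codes."""
    q = ((query - mean) @ W >= 0).astype(np.uint8)
    dist = np.count_nonzero(codes != q, axis=1)
    return np.argsort(dist)[:k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 256))         # stand-in for high-dimensional features
    codes, W, mean = pca_hash(X, n_bits=32)
    print(hamming_search(X[0], codes, W, mean))
```

Comparing short binary codes by Hamming distance is what makes the search approximate but cheap relative to exact match queries in the original high-dimensional space.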

References

Showing 1-10 of 27 references

Large-scale image retrieval with supervised sparse hashing

Semi-supervised hashing for scalable image retrieval

TLDR
This work proposes a semi-supervised hashing method that is formulated as minimizing empirical error on the labeled data while maximizing variance and independence of hash bits over the labeled and unlabeled data.

Semi-Supervised Hashing for Large-Scale Search

TLDR
This work proposes a semi-supervised hashing (SSH) framework that minimizes empirical error over the labeled set and an information-theoretic regularizer over both labeled and unlabeled sets, and presents three different semi-supervised hashing methods: orthogonal hashing, nonorthogonal hashing, and sequential hashing.
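
A minimal numerical sketch of the orthogonal variant described above, assuming mean-centered data columns and a symmetric pairwise label matrix; the "adjusted covariance" formulation and all variable names here are assumptions inferred from this summary, not a verbatim reproduction of the paper's algorithm.

```python
import numpy as np

def ssh_orthogonal(X, X_l, S, n_bits=16, eta=1.0):
    """Sketch of orthogonal semi-supervised hashing: take hash projections
    as the top eigenvectors of a matrix that combines an empirical-fit term
    on labeled pairs with a variance regularizer over all data.

    X   : (d, n) all data, columns are mean-centered points
    X_l : (d, l) labeled subset
    S   : (l, l) symmetric pairwise labels (+1 similar, -1 dissimilar, 0 unknown)
    """
    M = X_l @ S @ X_l.T + eta * (X @ X.T)    # empirical term + variance term
    eigvals, eigvecs = np.linalg.eigh(M)
    top = np.argsort(eigvals)[::-1][:n_bits]
    return eigvecs[:, top]                   # (d, n_bits) orthogonal projections

def encode(W, X):
    """Binary codes (one column per point): sign of the learned projections."""
    return (W.T @ X > 0).astype(np.uint8)
```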

Large-Scale Unsupervised Hashing with Shared Structure Learning

TLDR
To make hash functions not only preserve the local neighborhood structure but also capture the global cluster distribution of the whole data set, an objective function incorporating spectral embedding loss, binary quantization loss, and a shared-subspace contribution is introduced to guide hash function learning.

Supervised Discrete Hashing

TLDR
This work proposes a new supervised hashing framework, where the learning objective is to generate the optimal binary hash codes for linear classification, and introduces an auxiliary variable to reformulate the objective so that it can be solved efficiently by employing a regularization algorithm.

Sparse Semantic Hashing for Efficient Large Scale Similarity Search

TLDR
A unified framework is designed to capture the hidden semantic structure among documents with a sparse coding model while preserving document similarity via a graph Laplacian, and an iterative coordinate descent procedure is proposed for solving the resulting optimization problem.
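
As a small illustration of the graph-Laplacian similarity term mentioned above (the sparse coding model and the coordinate descent solver are omitted), assuming a symmetric document-similarity matrix W:

```python
import numpy as np

def laplacian_smoothness(Y, W):
    """Graph-Laplacian similarity-preservation term: tr(Y^T L Y) with
    L = D - W penalizes codes that differ between similar documents.

    Y: (n, c) real-valued codes/embeddings, one row per document.
    W: (n, n) symmetric similarity (adjacency) matrix.
    """
    L = np.diag(W.sum(axis=1)) - W
    return np.trace(Y.T @ L @ Y)

# Equivalently: 0.5 * sum over i, j of W[i, j] * ||Y[i] - Y[j]||^2
```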

Kernel-Based Supervised Discrete Hashing for Image Retrieval

TLDR
A novel yet simple kernel-based supervised discrete hashing method via an asymmetric relaxation strategy is proposed; it effectively and stably preserves the similarity of neighbors in a low-dimensional Hamming space and shows superior performance over the state of the art.

Hashing with Graphs

TLDR
This paper proposes a novel graph-based hashing method which automatically discovers the neighborhood structure inherent in the data to learn appropriate compact codes and describes a hierarchical threshold learning procedure in which each eigenfunction yields multiple bits, leading to higher search accuracy.

Feature Learning Based Deep Supervised Hashing with Pairwise Labels

TLDR
Experiments show that the proposed deep pairwise-supervised hashing (DPSH) method, which performs simultaneous feature learning and hash-code learning for applications with pairwise labels, can outperform other methods and achieve state-of-the-art performance in image retrieval applications.

Iterative Quantization: A Procrustean Approach to Learning Binary Codes for Large-Scale Image Retrieval

TLDR
This paper addresses the problem of learning similarity-preserving binary codes for efficient similarity search in large-scale image collections by proposing a simple and efficient alternating minimization algorithm, dubbed iterative quantization (ITQ), and demonstrating an application of ITQ to learning binary attributes or "classemes" on the ImageNet data set.
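
A compact sketch of the alternating minimization this summary describes, assuming zero-centered, PCA-reduced inputs with as many dimensions as target bits; the iteration count and initialization are illustrative choices rather than the paper's exact settings.

```python
import numpy as np

def itq(V, n_iter=50, seed=0):
    """Iterative quantization sketch: alternate between binarizing the rotated
    data and solving an orthogonal Procrustes problem for the rotation that
    reduces the quantization error ||B - V R||_F^2.

    V: (n, c) zero-centered, PCA-reduced data, c = number of bits.
    """
    rng = np.random.default_rng(seed)
    R, _ = np.linalg.qr(rng.normal(size=(V.shape[1], V.shape[1])))  # random rotation
    for _ in range(n_iter):
        B = np.sign(V @ R)                   # fix R, update binary codes
        B[B == 0] = 1
        U, _, Wt = np.linalg.svd(V.T @ B)    # fix B, update rotation (Procrustes)
        R = U @ Wt
    codes = (np.sign(V @ R) > 0).astype(np.uint8)
    return codes, R
```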