Corpus ID: 52896721

Fusion Hashing: A General Framework for Self-improvement of Hashing

Xingbo Liu, Xiushan Nie, Yilong Yin
Hashing has been widely used for efficient similarity search owing to its query and storage efficiency. To obtain better precision, most studies focus on designing different objective functions with different constraints or penalty terms that consider neighborhood information. In this paper, in contrast to existing hashing methods, we propose a novel generalized framework called fusion hashing (FH) to improve the precision of existing hashing methods without adding new constraints or penalty…
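The efficiency claim above rests on the standard retrieval setup that all of these hashing methods share: items are mapped to short binary codes, and search reduces to ranking database codes by Hamming distance to the query code. A minimal illustrative sketch of that lookup step (not the FH framework itself; the function name and toy data are hypothetical):

```python
import numpy as np

def hamming_search(query_code, db_codes, k=3):
    """Return indices of the k database codes closest in Hamming distance.

    query_code: (n_bits,) array of 0/1
    db_codes:   (n_items, n_bits) array of 0/1
    """
    # Hamming distance = number of differing bits, computed by broadcasting.
    dists = np.count_nonzero(db_codes != query_code, axis=1)
    return np.argsort(dists, kind="stable")[:k]

# Toy 8-bit codes for a 4-item database.
db = np.array([
    [0, 1, 0, 1, 0, 1, 0, 1],
    [0, 1, 0, 1, 0, 1, 0, 0],
    [1, 0, 1, 0, 1, 0, 1, 0],
    [0, 0, 0, 0, 1, 1, 1, 1],
])
q = np.array([0, 1, 0, 1, 0, 1, 0, 1])
print(hamming_search(q, db))  # nearest is the exact match at index 0
```

Because the codes are compact bit vectors, this comparison is cheap in both time and memory, which is the query and storage efficiency the abstract refers to.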

Figures and Tables from this paper

Ranking Preserving Hashing for Fast Similarity Search

This paper proposes a novel Ranking Preserving Hashing (RPH) approach that directly optimizes a popular ranking measure, Normalized Discounted Cumulative Gain (NDCG), to obtain effective hashing codes with high ranking accuracy.

Semi-Supervised Hashing for Large-Scale Search

This work proposes a semi-supervised hashing (SSH) framework that minimizes empirical error over the labeled set and an information-theoretic regularizer over both labeled and unlabeled sets, and presents three different semi-supervised hashing methods: orthogonal hashing, nonorthogonal hashing, and sequential hashing.

Supervised Hashing Using Graph Cuts and Boosted Decision Trees

The proposed framework decomposes the hashing learning problem into two steps, binary code (hash bit) learning and hash function learning, which allows a number of existing approaches to hashing to be placed in context and simplifies the development of new problem-specific hashing methods.

Asymmetric Deep Supervised Hashing

This paper proposes a novel deep supervised hashing method, called asymmetric deep supervised hashing (ADSH), for large-scale nearest neighbor search, which treats the query points and database points in an asymmetric way.

Supervised hashing with kernels

A novel kernel-based supervised hashing model is proposed that requires only a limited amount of supervised information, i.e., similar and dissimilar data pairs, and a feasible training cost to achieve high-quality hashing, and that significantly outperforms the state of the art in searching both metric distance neighbors and semantically similar neighbors.

Fast Supervised Discrete Hashing

This paper proposes a new learning-based hashing method called "fast supervised discrete hashing" (FSDH), based on "supervised discrete hashing" (SDH), which uses a very simple yet effective regression of the class labels of training examples to the corresponding hash codes to accelerate the algorithm.
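The regression idea behind FSDH can be illustrated in a few lines: fit a linear map from one-hot class labels to the current binary codes by least squares, then binarize the regressed targets. This is only a rough sketch under my own toy setup, not the paper's actual training procedure (which alternates with hash function learning and handles the discrete constraints more carefully):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 6 training points, 3 classes, 4-bit codes.
n, c, bits = 6, 3, 4
Y = np.eye(c)[rng.integers(0, c, size=n)]      # one-hot labels, shape (n, c)
B = np.sign(rng.standard_normal((n, bits)))    # current codes in {-1, +1}

# Least-squares regression from labels to codes: min_G ||B - Y G||^2.
G, *_ = np.linalg.lstsq(Y, B, rcond=None)

# Refit the codes as the sign of the regressed targets (ties mapped to +1).
B_new = np.where(Y @ G >= 0, 1.0, -1.0)
print(B_new.shape)
```

Because the update has this closed form, each iteration is cheap, which is the source of the speedup the summary describes.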

Supervised Discrete Hashing

This work proposes a new supervised hashing framework, where the learning objective is to generate the optimal binary hash codes for linear classification, and introduces an auxiliary variable to reformulate the objective so that it can be solved highly efficiently by employing a regularization algorithm.

Feature Learning Based Deep Supervised Hashing with Pairwise Labels

Experiments show that the proposed deep pairwise-supervised hashing (DPSH) method, which performs simultaneous feature learning and hash-code learning for applications with pairwise labels, outperforms other methods and achieves state-of-the-art performance in image retrieval applications.

Deep Asymmetric Pairwise Hashing

This work proposes a novel Deep Asymmetric Pairwise Hashing (DAPH) approach for supervised hashing, and devises an efficient alternating algorithm to jointly optimize the asymmetric deep hash functions and high-quality binary codes.

A Survey on Learning to Hash

This paper presents a comprehensive survey of learning to hash algorithms, categorizes them according to the manner of preserving similarities into pairwise similarity preserving, multiwise similarity preserving, implicit similarity preserving, and quantization, and discusses their relations.