Efficient Training of Very Deep Neural Networks for Supervised Hashing

@inproceedings{Zhang2016EfficientTO,
  title={Efficient Training of Very Deep Neural Networks for Supervised Hashing},
  author={Ziming Zhang and Yuting Chen and Venkatesh Saligrama},
  booktitle={2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2016},
  pages={1487-1495}
}
In this paper, we propose training very deep neural networks (DNNs) for supervised learning of hash codes. Existing methods in this context train relatively "shallow" networks, limited both by issues arising in backpropagation (e.g., vanishing gradients) and by computational efficiency. We propose a novel and efficient training algorithm, inspired by the alternating direction method of multipliers (ADMM), that overcomes some of these limitations. Our method decomposes the training process into…
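
The abstract is truncated above, but the pattern it names is the essence of ADMM: split one hard joint optimization into easier subproblems coupled by auxiliary variables, then alternate between them. As a hedged illustration of that pattern only (not the paper's actual layer-wise training algorithm), here is the classic ADMM solver for the lasso problem in NumPy; `lam` and `rho` are illustrative settings:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the L1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_admm(A, b, lam=0.1, rho=1.0, iters=200):
    """Minimize 0.5*||Ax - b||^2 + lam*||z||_1 subject to x = z.
    Each step solves an easy subproblem; the dual u reconciles x and z."""
    n = A.shape[1]
    z = np.zeros(n)
    u = np.zeros(n)
    AtA = A.T @ A + rho * np.eye(n)   # factorable once: x-update is ridge regression
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA, Atb + rho * (z - u))  # x-subproblem (smooth)
        z = soft_threshold(x + u, lam / rho)           # z-subproblem (separable)
        u = u + x - z                                  # dual ascent on x = z
    return z
```
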
Citations

Deep Supervised Discrete Hashing
This paper develops a deep supervised discrete hashing algorithm based on the assumption that the learned binary codes should be ideal for classification; the method outperforms current state-of-the-art methods on benchmark datasets.
Greedy Hash: Towards Fast Optimization for Accurate Hash Coding in CNN
This work adopts a greedy principle to tackle the NP-hard discrete optimization problem, updating the network toward a probable optimal discrete solution in each iteration, and provides a new perspective for visualizing and understanding the effectiveness and efficiency of the algorithm.
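
The greedy update here is closely related to the straight-through estimator: take the discrete sign in the forward pass, but let gradients flow through unchanged so the network trains end to end. A minimal PyTorch sketch of that idea (an illustration in the paper's spirit, not its exact update rule):

```python
import torch

class SignST(torch.autograd.Function):
    """sign() in the forward pass; identity gradient in the backward pass."""

    @staticmethod
    def forward(ctx, x):
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through: pretend sign() were the identity.
        return grad_output

x = torch.randn(4, 16, requires_grad=True)
codes = SignST.apply(x)  # discrete codes, yet x still receives gradients
```
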
Deep Reinforcement Learning with Label Embedding Reward for Supervised Image Hashing
This work introduces a novel decision-making approach for deep supervised hashing: it formulates the hashing problem as travelling across the vertices of the binary code space and learns a deep Q-network, with a novel label-embedding reward defined by Bose-Chaudhuri-Hocquenghem codes, to explore the best path.
Triplet Deep Hashing with Joint Supervised Loss Based on Deep Neural Networks
The proposed triplet deep hashing method with a joint supervised loss based on convolutional neural networks (JLTDH) combines a triplet likelihood loss with a linear classification loss, adopting triplet supervised labels, which carry richer supervision than pointwise or pairwise labels.
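
For reference, a standard triplet margin term of the kind such methods build on (a hedged sketch; JLTDH's actual joint loss also includes a likelihood formulation and the classification term):

```python
import torch

def triplet_margin_term(anchor, positive, negative, margin=1.0):
    """Anchor codes should be closer to positives than to negatives by a margin.
    All inputs: (N, K) real-valued code outputs."""
    d_pos = (anchor - positive).pow(2).sum(dim=1)
    d_neg = (anchor - negative).pow(2).sum(dim=1)
    return torch.clamp(d_pos - d_neg + margin, min=0.0).mean()
```
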
Deep Variational and Structural Hashing
A probabilistic framework that infers a latent feature representation inside the network and obtains binary codes through a simple encoding procedure; modality-specific hashing networks are designed to handle the out-of-sample extension scenario.
A General Framework for Deep Supervised Discrete Hashing
A general deep supervised discrete hashing framework based on the assumption that the learned binary codes should be ideal for classification; it outperforms current state-of-the-art methods on benchmark datasets.
Deep Supervised Hashing for Fast Image Retrieval
A novel deep supervised hashing method that learns compact similarity-preserving binary codes for large bodies of image data, using pairs/triplets of images as training inputs and encouraging the output for each image to approximate discrete values; a sketch of such a loss follows.
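
The "approximate discrete values" idea is typically expressed as a pairwise loss plus a quantization regularizer that pulls real-valued outputs toward ±1. A hedged PyTorch sketch of such a loss (the margin and weight `alpha` are illustrative, not the paper's settings):

```python
import torch

def pairwise_hash_loss(b1, b2, similar, margin=2.0, alpha=0.01):
    """b1, b2: (N, K) real-valued network outputs for an image pair;
    similar: (N,) float tensor, 1.0 if the pair is similar else 0.0."""
    d2 = ((b1 - b2) ** 2).sum(dim=1)                       # squared Euclidean distance
    pull = similar * d2                                     # similar pairs: pull together
    push = (1 - similar) * torch.clamp(margin - d2, min=0)  # dissimilar: push past margin
    quant = (b1.abs() - 1).abs().sum(1) + (b2.abs() - 1).abs().sum(1)  # drive outputs toward ±1
    return (0.5 * (pull + push) + alpha * quant).mean()
```
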
Unsupervised Deep Hashing with Similarity-Adaptive and Discrete Optimization
This work proposes a simple yet effective unsupervised hashing framework, named Similarity-Adaptive Deep Hashing (SADH), which alternates over three training modules: deep hash model training, similarity graph updating, and binary code optimization.
Fast Scalable Supervised Hashing
A novel supervised hashing method, called Fast Scalable Supervised Hashing (FSSH), which circumvents the large similarity matrix by introducing a pre-computed intermediate term whose size is independent of the size of the training data.
Hierarchical Recurrent Neural Hashing for Image Retrieval With Hierarchical Convolutional Features
A deep hashing method that extensively exploits both spatial details and semantic information, leveraging hierarchical convolutional features to construct an image pyramid representation, together with a new loss function that maintains the semantic similarity and the balance property of hash codes.

References

Showing 1–10 of 67 references.
Supervised hashing with kernels
A novel kernel-based supervised hashing model that requires only a limited amount of supervised information, i.e., similar and dissimilar data pairs, and a feasible training cost to achieve high-quality hashing; it significantly outperforms the state of the art in searching both metric-distance neighbors and semantically similar neighbors.
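
The pairwise supervision enters by fitting inner products of r-bit codes to a ±1 similarity matrix. A hedged NumPy sketch of that objective (the paper optimizes it over kernelized hash functions with a greedy bit-by-bit scheme, omitted here):

```python
import numpy as np

def ksh_style_objective(H, S):
    """H: (n, r) binary codes in {-1, +1}; S: (n, n) pairwise labels,
    +1 for similar pairs and -1 for dissimilar pairs.
    Code inner products, scaled to [-1, 1], should match S."""
    r = H.shape[1]
    return np.linalg.norm(H @ H.T / r - S, "fro") ** 2
```
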
Learning to Hash with Binary Reconstructive Embeddings
An algorithm for learning hash functions by explicitly minimizing the reconstruction error between the original distances and the Hamming distances of the corresponding binary embeddings.
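
Concretely, the objective pairs a scaled input-space distance with the normalized Hamming distance of the codes. A hedged NumPy sketch of the loss over a sampled set of index pairs (BRE's coordinate-descent optimization of the hash functions themselves is omitted):

```python
import numpy as np

def bre_style_loss(X, B, pairs):
    """X: (n, d) unit-norm inputs; B: (n, q) binary codes in {0, 1};
    pairs: iterable of (i, j) index pairs to compare."""
    q = B.shape[1]
    loss = 0.0
    for i, j in pairs:
        d_orig = 0.5 * np.sum((X[i] - X[j]) ** 2)  # scaled Euclidean distance
        d_hamm = np.sum(B[i] != B[j]) / q          # normalized Hamming distance
        loss += (d_orig - d_hamm) ** 2             # reconstruction error for this pair
    return loss
```
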
Supervised Discrete Hashing
This work proposes a new supervised hashing framework in which the learning objective is to generate the optimal binary hash codes for linear classification; an auxiliary variable reformulates the objective so that it can be solved efficiently with a regularization algorithm.
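
The alternating structure this enables is easiest to see in the classifier step: with the binary codes held fixed, fitting the linear classifier is a ridge regression with a closed form. A hedged NumPy sketch of that single step (the harder discrete code update, which SDH handles bit by bit, is omitted):

```python
import numpy as np

def classifier_step(B, Y, lam=1.0):
    """B: (n, r) fixed binary codes; Y: (n, c) one-hot labels.
    Solves min_W ||Y - B W||^2 + lam * ||W||^2 in closed form."""
    r = B.shape[1]
    return np.linalg.solve(B.T @ B + lam * np.eye(r), B.T @ Y)
```
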
Deep hashing for compact binary codes learning
A deep neural network that seeks multiple hierarchical non-linear transformations to learn compact binary codes for large-scale visual search; experiments show the superiority of the proposed approach over state-of-the-art methods.
Fast Supervised Hashing with Decision Trees for High-Dimensional Data
Experiments demonstrate that the proposed method significantly outperforms most state-of-the-art methods in retrieval precision, and is orders of magnitude faster than many of them in training time.
Bit-Scalable Deep Hashing With Regularized Similarity Learning for Image Retrieval and Person Re-Identification
A supervised learning framework that generates compact and bit-scalable hashing codes directly from raw images; it outperforms state-of-the-art methods on public benchmarks for similar-image search and achieves promising results in person re-identification for surveillance.
Sparse Convolutional Neural Networks
This work shows how to reduce the redundancy in convolutional network parameters using a sparse decomposition, and proposes an efficient sparse matrix multiplication algorithm on CPU for Sparse Convolutional Neural Network (SCNN) models.
An Exploration of Parameter Redundancy in Deep Networks with Circulant Projections
We explore the redundancy of parameters in deep neural networks by replacing the conventional linear projection in fully-connected layers with the circulant projection. The circulant structure…
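
The appeal of the circulant structure is that a d×d projection collapses to a single d-vector and an FFT-based multiply in O(d log d). A minimal NumPy sketch of that core operation (the paper's additional details, such as random sign flipping, are omitted):

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply x by the circulant matrix whose first column is c,
    via the circular convolution theorem: C @ x == ifft(fft(c) * fft(x))."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

d = 8
c = np.random.randn(d)   # defines the whole d x d projection
x = np.random.randn(d)
y = circulant_matvec(c, x)
```
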
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps and beats the original model by a significant margin.
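
The core training-time transform from the Batch Normalization paper is compact enough to state directly: normalize each feature over the mini-batch, then apply a learned scale and shift (at inference time the batch statistics are replaced by running averages):

```python
import numpy as np

def batch_norm_train(x, gamma, beta, eps=1e-5):
    """x: (N, D) mini-batch; gamma, beta: (D,) learned scale and shift."""
    mu = x.mean(axis=0)                    # per-feature batch mean
    var = x.var(axis=0)                    # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # normalize to zero mean, unit variance
    return gamma * x_hat + beta            # restore representational power
```
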
Learning both Weights and Connections for Efficient Neural Network
A method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy, by learning only the important connections; redundant connections are pruned using a three-step method.
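
The three steps are: train the dense network, prune the smallest-magnitude connections, and retrain the remaining sparse network. A hedged NumPy sketch of the pruning step, returning a mask so retraining can keep pruned weights at zero (the 0.9 sparsity level is illustrative):

```python
import numpy as np

def magnitude_prune(W, sparsity=0.9):
    """Zero out the fraction `sparsity` of weights with the smallest magnitudes."""
    k = int(sparsity * W.size)
    threshold = np.partition(np.abs(W).ravel(), k)[k]  # k-th smallest magnitude
    mask = np.abs(W) > threshold
    return W * mask, mask

W = np.random.randn(256, 128)
W_sparse, mask = magnitude_prune(W, sparsity=0.9)
# During retraining: after each gradient step, W *= mask keeps pruned weights at zero.
```
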