Efficient Training of Very Deep Neural Networks for Supervised Hashing

Ziming Zhang, Yuting Chen, Venkatesh Saligrama. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
In this paper, we propose training very deep neural networks (DNNs) for supervised learning of hash codes. Existing methods in this context train relatively "shallow" networks, limited by issues arising in backpropagation (e.g. vanishing gradients) as well as by computational efficiency. We propose a novel and efficient training algorithm, inspired by the alternating direction method of multipliers (ADMM), that overcomes some of these limitations. Our method decomposes the training process into…
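The ADMM-style decomposition can be illustrated on a toy version of the problem. The sketch below (plain NumPy; the linear classifier, the variable names Z/B/U/W, and the penalty parameter rho are illustrative assumptions, not the paper's actual algorithm) alternates between a continuous surrogate Z, a binary code matrix B, and a scaled dual variable U:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, c = 60, 8, 3                        # samples, code bits, classes
Y = np.eye(c)[rng.integers(0, c, n)]      # toy one-hot labels

Z = np.sign(rng.standard_normal((n, k)))  # continuous surrogate codes
B = Z.copy()                              # binary codes
U = np.zeros((n, k))                      # scaled dual variable
rho = 1.0

for _ in range(30):
    # W-step: linear classifier mapping codes to labels (least squares)
    W = np.linalg.lstsq(Z, Y, rcond=None)[0]              # (k, c)
    # Z-step: closed-form minimizer of ||Y - Z W||^2 + (rho/2)||Z - B + U||^2
    A = W @ W.T + (rho / 2) * np.eye(k)
    Z = (Y @ W.T + (rho / 2) * (B - U)) @ np.linalg.inv(A)
    # B-step: projection onto the binary set {-1, +1}
    B = np.where(Z + U >= 0, 1.0, -1.0)
    # dual update: accumulate the constraint violation Z - B
    U += Z - B
```

The key point is that each subproblem has a cheap, closed-form update, and the hard binary constraint is isolated in the B-step, where it reduces to a sign projection.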


Deep Supervised Discrete Hashing

This paper develops a deep supervised discrete hashing algorithm based on the assumption that the learned binary codes should be ideal for classification, which outperforms current state-of-the-art methods on benchmark datasets.

Greedy Hash: Towards Fast Optimization for Accurate Hash Coding in CNN

This work adopts the greedy principle to tackle the NP-hard discrete optimization problem by iteratively updating the network toward the probable optimal discrete solution, and provides a new perspective for visualizing and understanding the effectiveness and efficiency of the algorithm.
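Greedy Hash's central device is a hash layer that applies sign() in the forward pass while copying the gradient straight through in the backward pass, so the front layers still receive a training signal despite the non-differentiable quantization. A minimal sketch of that straight-through sign (illustrative NumPy, not the paper's implementation):

```python
import numpy as np

def sign_forward(x):
    # Forward: hard quantization to {-1, +1} (zeros mapped to +1)
    return np.where(x >= 0, 1.0, -1.0)

def sign_backward(grad_output):
    # Backward: straight-through -- the upstream gradient is passed
    # to the pre-quantization activations unchanged
    return grad_output

x = np.array([0.3, -1.2, 0.0, 2.5])
h = sign_forward(x)                                  # [ 1., -1.,  1.,  1.]
g = sign_backward(np.array([0.1, -0.2, 0.3, 0.4]))   # gradient unchanged
```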

Deep Reinforcement Learning with Label Embedding Reward for Supervised Image Hashing

This work introduces a novel decision-making approach for deep supervised hashing, formulating the hashing problem as travelling across the vertices of the binary code space, and learns a deep Q-network with a novel label-embedding reward defined by Bose–Chaudhuri–Hocquenghem (BCH) codes to explore the best path.

Triplet Deep Hashing with Joint Supervised Loss Based on Deep Neural Networks

The proposed triplet deep hashing method with joint supervised loss based on convolutional neural networks (JLTDH) combines triplet likelihood loss and linear classification loss, and adopts triplet supervised labels, which carry richer supervision than pointwise and pairwise labels.

Deep Variational and Structural Hashing

A probabilistic framework that infers latent feature representations inside the network to obtain binary codes through a simple encoding procedure, and designs modality-specific hashing networks to handle the out-of-sample extension scenario.

A General Framework for Deep Supervised Discrete Hashing

A general deep supervised discrete hashing framework based on the assumption that the learned binary codes should be ideal for classification, which outperforms current state-of-the-art methods on benchmark datasets.

Deep Supervised Hashing for Fast Image Retrieval

A novel Deep Supervised Hashing method that learns compact similarity-preserving binary codes for large-scale image data, using pairs/triplets of images as training inputs and encouraging the output for each image to approximate discrete values.

Unsupervised Deep Hashing with Similarity-Adaptive and Discrete Optimization

This work proposes a simple yet effective unsupervised hashing framework, named Similarity-Adaptive Deep Hashing (SADH), which alternatingly proceeds over three training modules: deep hash model training, similarity graph updating and binary code optimization.

Fast Scalable Supervised Hashing

A novel supervised hashing method, called Fast Scalable Supervised Hashing (FSSH), which circumvents the use of the large similarity matrix by introducing a pre-computed intermediate term whose size is independent of the size of the training data.

Hierarchical Recurrent Neural Hashing for Image Retrieval With Hierarchical Convolutional Features

A deep hashing method is proposed to extensively exploit both spatial details and semantic information: it leverages hierarchical convolutional features to construct an image pyramid representation, and a new loss function is proposed that maintains the semantic similarity and the balance property of hash codes.

Supervised hashing with kernels

A novel kernel-based supervised hashing model is proposed that requires only a limited amount of supervised information (similar and dissimilar data pairs) and a feasible training cost to achieve high-quality hashing, and that significantly outperforms the state of the art in searching both metric-distance neighbors and semantically similar neighbors.

Learning to Hash with Binary Reconstructive Embeddings

An algorithm for learning hash functions based on explicitly minimizing the reconstruction error between the original distances and the Hamming distances of the corresponding binary embeddings is developed.
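The reconstruction objective compares original distances against Hamming distances between the binary embeddings. For ±1 codes of length k, the Hamming distance has the closed form d_H(b1, b2) = (k − b1·b2) / 2, which the small check below illustrates (the example vectors are arbitrary):

```python
import numpy as np

def hamming(b1, b2):
    # Hamming distance for +/-1 codes via the inner-product identity:
    # matching bits contribute +1, differing bits -1, so
    # d_H = (k - <b1, b2>) / 2
    return int((len(b1) - b1 @ b2) // 2)

b1 = np.array([1, -1, 1, 1, -1, -1, 1, -1])
b2 = np.array([1, 1, 1, -1, -1, 1, 1, -1])
print(hamming(b1, b2))  # codes differ at positions 1, 3, 5 -> 3
```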

Supervised Discrete Hashing

This work proposes a new supervised hashing framework in which the learning objective is to generate optimal binary hash codes for linear classification, and introduces an auxiliary variable to reformulate the objective so that it can be solved efficiently by employing a regularization algorithm.

Deep hashing for compact binary codes learning

A deep neural network is developed that seeks multiple hierarchical non-linear transformations to learn compact binary codes for large-scale visual search, showing the superiority of the proposed approach over the state of the art.

Fast Supervised Hashing with Decision Trees for High-Dimensional Data

Experiments demonstrate that the proposed method significantly outperforms most state-of-the-art methods in retrieval precision, and is orders of magnitude faster than many methods in training time.

Bit-Scalable Deep Hashing With Regularized Similarity Learning for Image Retrieval and Person Re-Identification

A supervised learning framework that generates compact, bit-scalable hashing codes directly from raw images, outperforming the state of the art on public benchmarks for similar-image search and achieving promising results for person re-identification in surveillance.

Sparse Convolutional Neural Networks

This work shows how to reduce the redundancy in these parameters using a sparse decomposition, and proposes an efficient sparse matrix multiplication algorithm on CPU for Sparse Convolutional Neural Networks (SCNN) models.

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
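The normalization step itself is simple: per feature, subtract the batch mean and divide by the batch standard deviation, then apply a learned scale and shift. A minimal training-time sketch (NumPy; the names gamma/beta/eps follow common convention, and the running statistics used at inference time are omitted):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # x: (batch, features); normalize each feature over the batch dimension
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    # learned per-feature scale and shift restore representational power
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 4)) * 5.0 + 3.0       # shifted, scaled batch
y = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
```

With gamma = 1 and beta = 0 the output of each feature has approximately zero mean and unit variance over the batch, which is what stabilizes the distribution of layer inputs during training.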

An Exploration of Parameter Redundancy in Deep Networks with Circulant Projections

We explore the redundancy of parameters in deep neural networks by replacing the conventional linear projection in fully-connected layers with a circulant projection.

Learning both Weights and Connections for Efficient Neural Network

A method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy, by learning only the important connections and pruning redundant connections with a three-step method.
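The prune step of that train–prune–retrain pipeline is magnitude-based: the smallest-magnitude weights are zeroed and a mask records which connections survive for retraining. A hedged sketch (the threshold rule and function names are illustrative, not the paper's exact procedure):

```python
import numpy as np

def magnitude_prune(w, sparsity):
    # Zero out the smallest-magnitude fraction `sparsity` of weights;
    # the boolean mask marks surviving connections for the retraining step.
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy(), np.ones_like(w, dtype=bool)
    thresh = np.partition(np.abs(w).ravel(), k)[k]   # k-th smallest magnitude
    mask = np.abs(w) >= thresh
    return w * mask, mask

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 8))
pruned, mask = magnitude_prune(w, 0.5)               # drop half the weights
```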