PR Product: A Substitute for Inner Product in Neural Networks

@inproceedings{Wang2019PRPA,
  title={PR Product: A Substitute for Inner Product in Neural Networks},
  author={Zhennan Wang and Wenbin Zou and Chen Xu},
  booktitle={2019 IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2019},
  pages={6012-6021}
}

In this paper, we analyze the inner product of the weight vector w and the data vector x in neural networks from the perspective of vector orthogonal decomposition and prove that the direction gradient of w decreases as the angle between them approaches 0 or π. We propose the Projection and Rejection Product (PR Product) to make the direction gradient of w independent of the angle and consistently larger than the one in the standard inner product, while keeping the forward propagation identical. Key Method: As a…
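
The construction described in the abstract keeps the forward value equal to the standard inner product while reshaping the gradient with respect to the angle θ. Below is a minimal PyTorch sketch of that idea for a single weight/data vector pair, using detached (stop-gradient) coefficients; the function name, the eps clamping, and the single-vector interface are illustrative assumptions, not the authors' released code.

```python
import torch

def pr_product(w: torch.Tensor, x: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Sketch of a PR-Product-style inner product for 1-D vectors w and x.

    Forward value: |w||x|cos(theta), identical to the inner product.
    Backward: the detached coefficients make the direction gradient's
    magnitude constant (sin^2 + cos^2 = 1) instead of vanishing like
    sin(theta) when theta approaches 0 or pi.
    """
    norm_prod = w.norm() * x.norm()                    # |w||x|
    cos_t = (w * x).sum() / norm_prod.clamp_min(eps)   # cos(theta)
    cos_t = cos_t.clamp(-1.0 + eps, 1.0 - eps)         # numerical safety
    sin_t = torch.sqrt(1.0 - cos_t ** 2)               # |sin(theta)| >= 0
    # detach() freezes a coefficient so it carries no gradient; the two
    # terms still sum back to cos(theta) in the forward pass:
    # sin*cos + cos*(1 - sin) = cos.
    return norm_prod * (sin_t.detach() * cos_t + cos_t.detach() * (1.0 - sin_t))
```

Differentiating the bracketed term with respect to θ gives −sin²θ − cos²θ = −1 for θ in (0, π), so the direction gradient no longer shrinks near 0 or π, matching the claim above.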

DMA Regularization: Enhancing Discriminability of Neural Networks by Decreasing the Minimal Angle

A novel discrimination regularization method for image classification that enhances intra-class compactness and inter-class discrepancy simultaneously by decreasing the minimal angle (DMA) between the feature vector and any one of the weight vectors in the classification layer.
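
Read literally, "decreasing the minimal angle" suggests penalizing the angle between a feature and its nearest classifier weight row. The sketch below is one illustrative guess at such a regularizer (the name dma_penalty and the 1 − max cos θ form are assumptions; the paper's exact loss may differ):

```python
import torch
import torch.nn.functional as F

def dma_penalty(features: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    """Hypothetical DMA-style regularizer.

    features: (batch, dim) pre-classifier feature vectors
    weight:   (classes, dim) rows of the classification layer
    """
    # Cosine similarity between every feature and every class weight row.
    cos = F.linear(F.normalize(features, dim=1), F.normalize(weight, dim=1))
    max_cos, _ = cos.max(dim=1)        # cosine of the minimal angle per sample
    return (1.0 - max_cos).mean()      # smaller minimal angle -> smaller penalty
```

Such a penalty would typically be added to the cross-entropy objective with a small weight.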

ASL Recognition with Metric-Learning based Lightweight Network

This work proposes a lightweight network for ASL gesture recognition with performance sufficient for practical applications, and demonstrates impressive robustness on the MS-ASL dataset and in live mode for the continuous sign gesture recognition scenario.

Enhanced Feature Pyramid Networks by Feature Aggregation Module and Refinement Module

  • Xuan-Thuy Vo, K. Jo
  • 2020 13th International Conference on Human System Interaction (HSI), 2020
The proposed method introduces a Feature Aggregation Module (FAM) and a Refinement Module (RM) to obtain more powerful feature pyramids for predicting objects of different scales, and integrates the FAM and the RM into the Faster R-CNN architecture, called EFPN Faster R-CNN.

References

Recurrent Fusion Network for Image Captioning

This paper proposes a novel recurrent fusion network (RFNet) for the image captioning task, which can exploit the interactions among the outputs of the image encoders and generate new compact and informative representations for the decoder.

Aggregated Residual Transformations for Deep Neural Networks

On the ImageNet-1K dataset, it is empirically shown that, even under the restricted condition of maintaining complexity, increasing cardinality improves classification accuracy and is more effective than going deeper or wider when increasing capacity.

Deeply-Supervised Nets

The proposed deeply-supervised nets (DSN) method simultaneously minimizes classification error while making the learning process of hidden layers direct and transparent, and extends techniques from stochastic gradient methods to analyze the algorithm.
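
Companion objectives of this kind are usually realized by attaching auxiliary classifiers to hidden layers and adding their losses to the final objective. A toy sketch under that standard reading (the layer sizes and the weight alpha are illustrative):

```python
import torch.nn as nn
import torch.nn.functional as F

class DeeplySupervisedMLP(nn.Module):
    """Toy network illustrating DSN-style companion objectives:
    an auxiliary classifier supervises an intermediate layer directly."""
    def __init__(self, dim_in=784, hidden=256, num_classes=10):
        super().__init__()
        self.block1 = nn.Sequential(nn.Linear(dim_in, hidden), nn.ReLU())
        self.block2 = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
        self.aux_head = nn.Linear(hidden, num_classes)   # companion classifier
        self.main_head = nn.Linear(hidden, num_classes)  # final classifier

    def forward(self, x):
        h1 = self.block1(x)
        h2 = self.block2(h1)
        return self.main_head(h2), self.aux_head(h1)

def dsn_loss(main_logits, aux_logits, target, alpha=0.3):
    # Overall loss = final objective + weighted companion objective,
    # giving the hidden layer a direct error signal.
    return (F.cross_entropy(main_logits, target)
            + alpha * F.cross_entropy(aux_logits, target))
```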

Densely Connected Convolutional Networks

The Dense Convolutional Network (DenseNet) connects each layer to every other layer in a feed-forward fashion and has several compelling advantages: it alleviates the vanishing-gradient problem, strengthens feature propagation, encourages feature reuse, and substantially reduces the number of parameters.
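
Dense connectivity means each layer receives the concatenated feature maps of all layers before it and contributes a fixed number of new channels (the growth rate). A minimal sketch of a dense block in that style (growth rate and depth chosen arbitrarily):

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Each layer consumes the concatenation of all previous feature maps
    and adds `growth_rate` new channels."""
    def __init__(self, in_channels, growth_rate=12, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            channels = in_channels + i * growth_rate
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, growth_rate, 3, padding=1, bias=False),
            ))

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))  # reuse all earlier maps
            features.append(out)
        return torch.cat(features, dim=1)
```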

Boosting Image Captioning with Attributes

This paper presents Long Short-Term Memory with Attributes (LSTM-A), a novel architecture that integrates attributes into the successful Convolutional Neural Networks (CNNs) plus Recurrent Neural Networks (RNNs) image captioning framework, trained in an end-to-end manner.

Deep Residual Learning for Image Recognition

This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
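
The residual formulation has each block learn F(x) = H(x) − x and output x + F(x), so the identity path carries the signal directly. A minimal basic block in that spirit (projection shortcuts and strides omitted):

```python
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """Residual block: output = ReLU(x + F(x)), where F is two convs.
    The identity shortcut lets gradients flow directly through the sum."""
    def __init__(self, channels):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.f(x))  # learn the residual F(x) = H(x) - x
```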

Identity Mappings in Deep Residual Networks

The propagation formulations behind the residual building blocks suggest that the forward and backward signals can be directly propagated from one block to any other block, when using identity mappings as the skip connections and after-addition activation.

Wide Residual Networks

This paper conducts a detailed experimental study of the architecture of ResNet blocks and proposes a novel architecture in which the depth of residual networks is decreased and their width increased; the resulting structures, called wide residual networks (WRNs), are far superior to their commonly used thin and very deep counterparts.

Deep Reinforcement Learning-Based Image Captioning with Embedding Reward

A novel decision-making framework for image captioning that combines a policy network and a value network to collaboratively generate captions and outperforms state-of-the-art approaches across different evaluation metrics.

Exploring Visual Relationship for Image Captioning

This paper introduces a new design that explores the connections between objects for image captioning under the umbrella of an attention-based encoder-decoder framework, integrating both semantic and spatial object relationships into the image encoder.
...