Dynamic Multi-path Neural Network

@article{Su2019DynamicMN,
  title={Dynamic Multi-path Neural Network},
  author={Yingcheng Su and Shunfeng Zhou and Yichao Wu and Xuebo Liu and Tian Su and Ding Liang and Junjie Yan},
  journal={2020 25th International Conference on Pattern Recognition (ICPR)},
  year={2020},
  pages={4137-4144}
}
Although deeper and larger neural networks have achieved better performance, the complex network structure and increasing computational cost cannot meet the demands of many resource-constrained applications. […] The inference path of the network is determined by a controller, which takes into account both the previous state and object category information. The proposed method can be easily incorporated into most modern network architectures. Experimental results on ImageNet and CIFAR-100 demonstrate…
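To make the mechanism concrete, here is a minimal PyTorch sketch of controller-driven path selection. It is an illustration, not the authors' architecture: the module names are invented, the controller here conditions only on the pooled previous state (the paper's controller also uses object category information), and Gumbel-softmax is one common way to keep the discrete path choice differentiable.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PathController(nn.Module):
    """Scores the candidate paths of one layer from the pooled previous state.
    (Hypothetical sketch; the paper's controller also uses category information.)"""
    def __init__(self, in_channels, num_paths):
        super().__init__()
        self.fc = nn.Linear(in_channels, num_paths)

    def forward(self, x, tau=1.0):
        state = F.adaptive_avg_pool2d(x, 1).flatten(1)      # previous state
        # Gumbel-softmax keeps the discrete path choice differentiable in training
        return F.gumbel_softmax(self.fc(state), tau=tau, hard=True)

class MultiPathBlock(nn.Module):
    """A layer with parallel candidate paths; the controller picks one per sample."""
    def __init__(self, channels, num_paths=3):
        super().__init__()
        self.paths = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=1) for _ in range(num_paths)
        )
        self.controller = PathController(channels, num_paths)

    def forward(self, x):
        choice = self.controller(x)                          # (B, P), one-hot
        outs = torch.stack([p(x) for p in self.paths], 1)    # (B, P, C, H, W)
        return (choice[:, :, None, None, None] * outs).sum(1)

block = MultiPathBlock(16)
print(block(torch.randn(2, 16, 32, 32)).shape)  # torch.Size([2, 16, 32, 32])
```

At inference time the hard one-hot choice means only the selected path needs to be executed, which is where the computational savings come from.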
1 Citation

Traffic signs recognition using dynamic-scale CNN

An improved CNN-based structure for recognizing traffic signs using a dynamic priority algorithm, inspired by dynamic priority scheduling in operating systems, that emphasizes differences among convolutional layers and trains the network on different scales of features with assigned priorities.

References

Showing 1–10 of 45 references

BlockDrop: Dynamic Inference Paths in Residual Networks

BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy, is introduced.
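A rough sketch of the idea, with invented names: a lightweight policy network looks at the input once and emits a keep/drop decision for every residual block. The paper trains this policy with a reward that balances accuracy against block usage; only inference is sketched here.

```python
import torch
import torch.nn as nn

class DropPolicy(nn.Module):
    """Maps the input image to one keep-probability per residual block.
    (Invented layout; the paper trains it with an accuracy/usage reward.)"""
    def __init__(self, num_blocks):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16, num_blocks)

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x)))

def run_with_drops(blocks, policy, x):
    """Execute only the blocks the policy keeps (batch of one, for simplicity)."""
    keep = policy(x) > 0.5                     # hard decisions at inference
    for i, block in enumerate(blocks):
        if keep[0, i]:
            x = x + block(x)                   # residual connection
    return x

blocks = nn.ModuleList(nn.Conv2d(3, 3, 3, padding=1) for _ in range(8))
out = run_with_drops(blocks, DropPolicy(8), torch.randn(1, 3, 32, 32))
print(out.shape)  # torch.Size([1, 3, 32, 32])
```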

Channel Pruning for Accelerating Very Deep Neural Networks

Yihui He, Xiangyu Zhang, Jian Sun. 2017 IEEE International Conference on Computer Vision (ICCV), 2017.
This paper proposes an iterative two-step algorithm to effectively prune each layer, using LASSO-regression-based channel selection and least-squares reconstruction, and generalizes the algorithm to multi-layer and multi-branch cases.
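The two steps can be illustrated with a small NumPy/scikit-learn sketch on synthetic data (an illustration of the idea, not the paper's implementation): LASSO zeroes out channels whose contribution to the layer's responses is weak, and least squares then re-fits the remaining weights to minimize reconstruction error.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic layer: responses Y are the sum of per-channel contributions.
rng = np.random.default_rng(0)
n, c, k = 500, 32, 9                          # samples, channels, kernel size
X = rng.normal(size=(n, c, k))                # per-channel input patches
W = rng.normal(size=(c, k)) * rng.uniform(0, 1, (c, 1)) ** 3  # some weak channels
Y = np.einsum('nck,ck->n', X, W)

# Step 1 (selection): LASSO over per-channel contributions zeroes weak channels.
contrib = np.einsum('nck,ck->nc', X, W)
beta = Lasso(alpha=0.1, fit_intercept=False).fit(contrib, Y).coef_
kept = np.flatnonzero(np.abs(beta) > 1e-6)
print(f'kept {kept.size}/{c} channels')

# Step 2 (reconstruction): least squares re-fits the kept channels' weights.
X_kept = X[:, kept, :].reshape(n, -1)
w_new, *_ = np.linalg.lstsq(X_kept, Y, rcond=None)
err = np.linalg.norm(X_kept @ w_new - Y) / np.linalg.norm(Y)
print(f'relative reconstruction error: {err:.3f}')
```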

Pruning Convolutional Neural Networks for Resource Efficient Inference

It is shown that pruning can lead to more than 10x theoretical (5x practical) reduction in adapted 3D-convolutional filters with a small drop in accuracy in a recurrent gesture classifier.

ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression

ThiNet is proposed, an efficient and unified framework to simultaneously accelerate and compress CNN models in both the training and inference stages; it is revealed that filters should be pruned based on statistics computed from the next layer rather than the current layer, which differentiates ThiNet from existing methods.
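A toy sketch of the greedy selection (hypothetical helper, not the authors' code): given each channel's contribution to sampled inputs of the next layer, keep the subset that best reconstructs their sum.

```python
import numpy as np

def thinet_select(contrib, keep):
    """Greedily keep the channels that best reconstruct the next layer's input.
    contrib[n, j]: channel j's contribution to the n-th sampled activation."""
    total = contrib.sum(axis=1)
    kept = []
    for _ in range(keep):
        candidates = [j for j in range(contrib.shape[1]) if j not in kept]
        errs = [np.sum((total - contrib[:, kept + [j]].sum(axis=1)) ** 2)
                for j in candidates]
        kept.append(candidates[int(np.argmin(errs))])
    return sorted(kept)

rng = np.random.default_rng(1)
contrib = rng.normal(size=(200, 16)) * rng.uniform(0.1, 2.0, size=16)
print(thinet_select(contrib, keep=8))
```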

SkipNet: Learning Dynamic Routing in Convolutional Networks

This work introduces SkipNet, a modified residual network, that uses a gating network to selectively skip convolutional blocks based on the activations of the previous layer, and proposes a hybrid learning algorithm that combines supervised learning and reinforcement learning to address the challenges of non-differentiable skipping decisions.
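A minimal sketch of such a gate (module names are assumptions): a tiny linear gate reads the pooled incoming activations and decides whether to execute the residual body or pass the input through unchanged.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedResidualBlock(nn.Module):
    """Residual block with a SkipNet-style gate (module names are invented)."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.gate = nn.Linear(channels, 1)

    def forward(self, x):
        # Gate decision from the previous layer's (pooled) activations
        g = torch.sigmoid(self.gate(F.adaptive_avg_pool2d(x, 1).flatten(1)))
        if not self.training:                      # hard skip at inference
            return x + self.body(x) if g.mean() > 0.5 else x
        # Soft gate in training; the paper handles the discrete decision with
        # a supervised + reinforcement-learning hybrid instead.
        return x + g[:, :, None, None] * self.body(x)

block = GatedResidualBlock(16).eval()
print(block(torch.randn(1, 16, 8, 8)).shape)  # torch.Size([1, 16, 8, 8])
```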

An Exploration of Parameter Redundancy in Deep Networks with Circulant Projections

We explore the redundancy of parameters in deep neural networks by replacing the conventional linear projection in fully-connected layers with the circulant projection. The circulant structure substantially reduces memory footprint and enables the use of the Fast Fourier Transform to speed up the computation.
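The saving is easy to demonstrate: a circulant matrix is defined by a single d-vector, and multiplying by it is a circular convolution, computable in O(d log d) via the FFT instead of O(d²). A small NumPy check of this identity (illustrative, not the paper's code):

```python
import numpy as np

d = 8
rng = np.random.default_rng(2)
c = rng.normal(size=d)        # one d-vector defines the whole circulant matrix
x = rng.normal(size=d)

# Dense reference: C[i, j] = c[(i - j) mod d]
C = np.array([[c[(i - j) % d] for j in range(d)] for i in range(d)])

# Circulant multiply = circular convolution = pointwise product in Fourier space
y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real
assert np.allclose(C @ x, y)
print(y)
```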

BranchyNet: Fast inference via early exiting from deep neural networks

The BranchyNet architecture is presented, a novel deep network architecture that is augmented with additional side branch classifiers that can both improve accuracy and significantly reduce the inference time of the network.
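A minimal sketch of the early-exit rule (hypothetical helper, single-sample for simplicity): inference stops at the first side branch whose softmax entropy falls below a confidence threshold, so easy samples never reach the deeper layers.

```python
import torch
import torch.nn.functional as F

def early_exit(branch_logits, threshold=0.5):
    """Return (exit index, prediction) for the first sufficiently confident
    side branch, judged by softmax entropy."""
    for i, logits in enumerate(branch_logits):
        p = F.softmax(logits, dim=-1)
        entropy = -(p * p.clamp_min(1e-12).log()).sum(dim=-1)
        if entropy.item() < threshold:
            return i, p.argmax(dim=-1).item()
    return len(branch_logits) - 1, p.argmax(dim=-1).item()

# A confident first branch means the sample never reaches the deeper exits.
logits = [torch.tensor([4.0, 0.1, 0.1]),
          torch.tensor([1.0, 0.9, 0.8]),
          torch.tensor([0.2, 2.5, 0.1])]
print(early_exit(logits))  # (0, 0)
```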

Learning Structured Sparsity in Deep Neural Networks

The results show that for CIFAR-10, regularization on layer depth can reduce a 20-layer Deep Residual Network to 18 layers while improving the accuracy from 91.25% to 92.60%, which is still slightly higher than that of the original ResNet with 32 layers.
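The regularizer behind such structured sparsity is typically a group lasso: an L2 norm over each group's weights, summed (L1-style) across groups, which drives whole groups to zero. The paper applies this to several structures (filters, channels, filter shapes, layer depth); the short PyTorch sketch below shows the filter-wise case and is an illustration, not the paper's code.

```python
import torch

def group_lasso_penalty(conv_weight):
    """Sum of L2 norms, one per output filter: pushes whole filters to zero.
    conv_weight has shape (out_channels, in_channels, kH, kW)."""
    return conv_weight.flatten(1).norm(p=2, dim=1).sum()

w = torch.randn(64, 32, 3, 3, requires_grad=True)
loss = group_lasso_penalty(w)   # add lambda * this to the task loss in training
loss.backward()
print(loss.item())
```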

Spatially Adaptive Computation Time for Residual Networks

Experimental results are presented showing that this model improves the computational efficiency of Residual Networks on the challenging ImageNet classification and COCO object detection datasets, and that the computation time maps on the visual saliency dataset CAT2000 correlate surprisingly well with human eye fixation positions.

Multi-Scale Dense Networks for Resource Efficient Image Classification

Experiments demonstrate that the proposed framework substantially improves the existing state-of-the-art in both image classification with computational resource limits at test time and budgeted batch classification.