Merging and Evolution: Improving Convolutional Neural Networks for Mobile Applications

@article{Qin2018MergingAE,
  title={Merging and Evolution: Improving Convolutional Neural Networks for Mobile Applications},
  author={Zheng Qin and Z. Zhang and Shiqing Zhang and Hao Yu and Yuxing Peng},
  journal={2018 International Joint Conference on Neural Networks (IJCNN)},
  year={2018},
  pages={1-8}
}
Compact neural networks commonly exploit “sparsely-connected” convolutions such as depthwise convolution and group convolution for deployment in mobile applications. Compared with standard “fully-connected” convolutions, these convolutions are more computationally economical. However, “sparsely-connected” convolutions block inter-group information exchange, which induces severe performance degradation. To address this issue, we present two novel operations named merging and evolution to leverage the inter-group information.
Merging-and-Evolution Networks for Mobile Vision Applications
TLDR
This work presents two novel operations named merging and evolution to leverage the inter-group information and proposes a family of compact neural networks called MENet based on the ME modules, which consistently outperforms other state-of-the-art compact networks under different computational budgets.
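To make the mechanism concrete, here is a minimal, hypothetical PyTorch sketch of the merging-and-evolution idea described in the abstract; the module name, widths, and additive fusion are illustrative choices of ours, not the paper's exact ME module.

import torch
import torch.nn as nn

class MergeEvolveSketch(nn.Module):
    """Sketch: a 'fully-connected' 1x1 conv (merging) compresses all groups
    into a narrow inter-group feature map, a cheap conv (evolution)
    transforms it, and the result is expanded and fused with the
    sparsely-connected group-convolution branch."""
    def __init__(self, channels=64, groups=4, narrow=16):
        super().__init__()
        # Sparsely-connected branch: groups exchange no information.
        self.grouped = nn.Conv2d(channels, channels, 3, padding=1, groups=groups)
        # Merging: pointwise conv across *all* channels -> narrow map.
        self.merge = nn.Conv2d(channels, narrow, 1)
        # Evolution: transform the narrow inter-group features spatially.
        self.evolve = nn.Conv2d(narrow, narrow, 3, padding=1)
        # Expand back to full width for fusion.
        self.expand = nn.Conv2d(narrow, channels, 1)

    def forward(self, x):
        main = self.grouped(x)
        side = self.expand(self.evolve(self.merge(x)))
        return main + side  # additive fusion is our simplification

x = torch.randn(1, 64, 32, 32)
print(MergeEvolveSketch()(x).shape)  # torch.Size([1, 64, 32, 32])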
Deep Networks for Image-to-Image Translation with Mux and Demux Layers
TLDR
A lightweight end-to-end deep learning approach for image enhancement that improves both quantitative and qualitative assessments as well as performance, and took third place in the PIRM Enhancement-on-Smartphones Challenge 2018.
Unsupervised pre-trained filter learning approach for efficient convolution neural network
TLDR
A comprehensive survey of the relationship between ConvNets and different pre-trained learning methodologies and their optimization effects; the experimental results on the benchmark dataset highlight the merit of efficient pre-trained learning algorithms for optimizing ConvNets.

References

Showing 1-10 of 32 references
Aggregated Residual Transformations for Deep Neural Networks
TLDR
On the ImageNet-1K dataset, it is empirically shown that, even under the restricted condition of maintaining complexity, increasing cardinality improves classification accuracy and is more effective than going deeper or wider when capacity is increased.
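The practical upshot is that the C parallel paths ("cardinality") of an aggregated transformation collapse into a single grouped convolution. A minimal PyTorch sketch, with sizes loosely modeled on the paper's 32x4d template:

import torch
import torch.nn as nn

cardinality, width = 32, 4            # 32 paths, 4 channels each
bottleneck = cardinality * width      # 128

block = nn.Sequential(
    nn.Conv2d(256, bottleneck, 1),                  # reduce
    nn.Conv2d(bottleneck, bottleneck, 3, padding=1,
              groups=cardinality),                  # 32 parallel transforms
    nn.Conv2d(bottleneck, 256, 1),                  # restore
)

x = torch.randn(1, 256, 14, 14)
print((x + block(x)).shape)   # residual connection; torch.Size([1, 256, 14, 14])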
Channel Pruning for Accelerating Very Deep Neural Networks
  • Yihui He, X. Zhang, Jian Sun
  • Computer Science
  • 2017 IEEE International Conference on Computer Vision (ICCV)
  • 2017
TLDR
This paper proposes an iterative two-step algorithm to effectively prune each layer, using LASSO-regression-based channel selection and least-squares reconstruction, and generalizes the algorithm to multi-layer and multi-branch cases.
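A heavily simplified NumPy/scikit-learn sketch of the two-step idea on synthetic data; scalar per-channel coefficients stand in for the paper's full least-squares weight reconstruction, and all sizes are invented for illustration.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
c, n, d = 16, 200, 8                        # channels, samples, output dim
X = rng.standard_normal((c, n, d))          # per-channel contributions
true = rng.choice(c, size=6, replace=False) # only 6 channels really matter
Y = X[true].sum(axis=0) + 0.01 * rng.standard_normal((n, d))

# Step 1: LASSO drives the coefficients of redundant channels to zero.
Z = X.reshape(c, -1).T                      # (n*d, c)
beta = Lasso(alpha=0.05, fit_intercept=False).fit(Z, Y.ravel()).coef_
keep = np.flatnonzero(np.abs(beta) > 1e-6)
print(f"kept {keep.size}/{c} channels")

# Step 2: least-squares refit on the surviving channels only.
w, *_ = np.linalg.lstsq(Z[:, keep], Y.ravel(), rcond=None)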
MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
TLDR
This work introduces two simple global hyper-parameters that efficiently trade off between latency and accuracy, and demonstrates the effectiveness of MobileNets across a wide range of applications and use cases including object detection, fine-grained classification, face attributes, and large-scale geo-localization.
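One of the two hyper-parameters, the width multiplier alpha, simply scales every layer's channel count (the other scales input resolution). A minimal PyTorch sketch of a MobileNet-style depthwise-separable block; channel counts are illustrative.

import torch
import torch.nn as nn

def depthwise_separable(in_ch, out_ch, alpha=1.0, stride=1):
    in_ch, out_ch = int(in_ch * alpha), int(out_ch * alpha)
    return nn.Sequential(
        # Depthwise: one 3x3 filter per channel (groups == channels).
        nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=1,
                  groups=in_ch, bias=False),
        nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
        # Pointwise: 1x1 conv recombines channels.
        nn.Conv2d(in_ch, out_ch, 1, bias=False),
        nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

block = depthwise_separable(32, 64, alpha=0.5)     # ~alpha^2 fewer mult-adds
print(block(torch.randn(1, 16, 112, 112)).shape)   # 32 * 0.5 = 16 input channels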
Learning Efficient Convolutional Networks through Network Slimming
TLDR
The approach, called network slimming, takes wide and large networks as input models; during training, insignificant channels are automatically identified and then pruned, yielding thin and compact models with comparable accuracy.
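A sketch of the two ingredients, assuming the standard reading of the method: an L1 penalty on BatchNorm scale factors (gamma) during training, then a threshold on the trained gammas; lambda, the threshold, and the gamma values below are illustrative.

import torch
import torch.nn as nn

def sparsity_penalty(model, lam=1e-4):
    # L1 on every BatchNorm gamma pushes unimportant channels toward zero.
    return lam * sum(m.weight.abs().sum()
                     for m in model.modules() if isinstance(m, nn.BatchNorm2d))
# loss = task_loss + sparsity_penalty(model)   # added to the training loss

# After training, channels with near-zero gamma are pruned away.
bn = nn.BatchNorm2d(8)
with torch.no_grad():
    bn.weight.copy_(torch.tensor([0.9, 0.0, 0.7, 0.01, 0.8, 0.0, 0.6, 0.02]))
keep = bn.weight.abs() > 0.05
print(f"channels kept: {int(keep.sum())}/{keep.numel()}")   # 4/8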
ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices
TLDR
An extremely computation-efficient CNN architecture named ShuffleNet is introduced, which is designed specially for mobile devices with very limited computing power (e.g., 10-150 MFLOPs), to greatly reduce computation cost while maintaining accuracy.
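The channel shuffle that lets stacked group convolutions exchange information is just a reshape-transpose-reshape; a minimal PyTorch sketch:

import torch

def channel_shuffle(x, groups):
    # Reshape to (groups, channels_per_group), transpose, flatten back,
    # so the next group convolution sees channels from every group.
    n, c, h, w = x.shape
    return (x.view(n, groups, c // groups, h, w)
             .transpose(1, 2)
             .reshape(n, c, h, w))

x = torch.arange(8).view(1, 8, 1, 1)
print(channel_shuffle(x, groups=2).flatten().tolist())
# [0, 4, 1, 5, 2, 6, 3, 7] -- channels interleaved across the two groups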
Accelerating convolutional neural networks by group-wise 2D-filter pruning
TLDR
This work proposes a new group-wise 2D-filter pruning approach that is orthogonal and complementary to existing methods, and leads to compressed models that do not require sophisticated implementations of convolution operations.
Fully Convolutional Networks for Semantic Segmentation
TLDR
It is shown that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, improve on the previous best result in semantic segmentation.
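A toy PyTorch illustration of the pixels-to-pixels idea: a 1x1 convolution scores every spatial location and upsampling restores input resolution. The backbone stand-in and class count are ours; the actual FCN uses learned transposed convolutions (initialized as bilinear upsampling) plus skip connections.

import torch
import torch.nn as nn
import torch.nn.functional as F

backbone = nn.Conv2d(3, 64, 3, stride=8, padding=1)  # toy feature extractor
score = nn.Conv2d(64, 21, 1)                         # e.g. 21 PASCAL VOC classes

x = torch.randn(1, 3, 224, 224)
out = F.interpolate(score(backbone(x)), size=x.shape[2:],
                    mode="bilinear", align_corners=False)
print(out.shape)   # torch.Size([1, 21, 224, 224]) -- a score per pixel per class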
Xception: Deep Learning with Depthwise Separable Convolutions
  • François Chollet
  • Computer Science, Mathematics
  • 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2017
TLDR
This work proposes a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with depthwise separable convolutions, and shows that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset, and significantly outperforms it on a larger image classification dataset.
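Xception's variant of the separable convolution puts the cross-channel (pointwise) mapping first and the per-channel spatial filtering second, with no nonlinearity in between (the paper reports an intermediate ReLU hurts). A sketch with illustrative channel counts:

import torch
import torch.nn as nn

sep = nn.Sequential(
    nn.Conv2d(128, 256, 1, bias=False),                         # pointwise first
    nn.Conv2d(256, 256, 3, padding=1, groups=256, bias=False),  # then depthwise
)
print(sep(torch.randn(1, 128, 28, 28)).shape)   # torch.Size([1, 256, 28, 28])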
Accelerating Very Deep Convolutional Networks for Classification and Detection
TLDR
This paper aims to accelerate the test-time computation of convolutional neural networks, especially very deep CNNs, and develops an effective solution to the resulting nonlinear optimization problem without the need of stochastic gradient descent (SGD).
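The paper's method minimizes the reconstruction error of layer responses (accounting for the nonlinearity); the simplified linear intuition is a truncated SVD that splits one layer into two thin ones. A NumPy sketch with invented sizes:

import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((256, 1152))   # e.g. a 3x3 conv flattened: 256 x (128*3*3)

r = 32                                  # target rank
U, s, Vt = np.linalg.svd(W, full_matrices=False)
W1 = Vt[:r] * s[:r, None]               # first thin layer: r x 1152
W2 = U[:, :r]                           # second thin layer: 256 x r
# Cost per position drops from 256*1152 to r*(256+1152) multiplications.
err = np.linalg.norm(W - W2 @ W1) / np.linalg.norm(W)
print(f"relative error: {err:.2f}")     # large for random W; trained conv
                                        # weights decay much faster in rank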
XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks
TLDR
The Binary-Weight-Network version of AlexNet is compared with recent network binarization methods, BinaryConnect and BinaryNet, and outperforms them by a large margin on ImageNet, more than 16% in top-1 accuracy.
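The binary-weight approximation itself has a simple closed form, W ≈ alpha * sign(W) with alpha the mean absolute weight per filter; a NumPy sketch with an invented filter-bank shape:

import numpy as np

rng = np.random.default_rng(2)
W = rng.standard_normal((64, 3, 3, 3))           # 64 filters of shape 3x3x3

B = np.sign(W)                                   # binary weights
alpha = np.abs(W).reshape(64, -1).mean(axis=1)   # per-filter scaling factor
W_hat = alpha[:, None, None, None] * B
err = np.linalg.norm(W - W_hat) / np.linalg.norm(W)
print(f"relative approximation error: {err:.3f}")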