ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices

@inproceedings{Zhang2017ShuffleNetAE,
  title={ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices},
  author={Xiangyu Zhang and Xinyu Zhou and Mengxiao Lin and Jian Sun},
  booktitle={2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2018},
  pages={6848-6856}
}
We introduce an extremely computation-efficient CNN architecture named ShuffleNet, designed specifically for mobile devices with very limited computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new operations, pointwise group convolution and channel shuffle, to greatly reduce computation cost while maintaining accuracy. Experiments on ImageNet classification and MS COCO object detection demonstrate the superior performance of ShuffleNet over other structures.
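The channel shuffle operation mentioned in the abstract permutes channels across convolution groups so that information can flow between them. The sketch below is an illustrative NumPy implementation of that idea, not code from the paper; the function name and shapes are our own assumptions:

```python
import numpy as np

def channel_shuffle(x, groups):
    """Illustrative channel shuffle (NumPy sketch, not the authors' code).

    x: feature map of shape (N, C, H, W); C must be divisible by `groups`.
    Channels are reshaped into (groups, C // groups), the two axes are
    transposed, and the result is flattened back, interleaving channels
    from different groups.
    """
    n, c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by groups"
    x = x.reshape(n, groups, c // groups, h, w)
    x = x.transpose(0, 2, 1, 3, 4)  # swap group axis and per-group channel axis
    return x.reshape(n, c, h, w)

# With C=6 channels and 2 groups, channel order 0..5 becomes 0,3,1,4,2,5:
x = np.arange(6, dtype=float).reshape(1, 6, 1, 1)
print(channel_shuffle(x, groups=2).flatten())  # [0. 3. 1. 4. 2. 5.]
```

Because the operation is a fixed permutation implemented with reshape and transpose, it adds essentially no FLOPs, which is why it pairs well with cheap pointwise group convolutions.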

Citations

Publications citing this paper (showing 1-10 of 425 citations):

  • Omni-Scale Feature Learning for Person Re-Identification
  • A Comparative Study of Real-Time Semantic Segmentation for Autonomous Driving. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2018.
  • Face Recognition: Primates in the Wild. 2018 IEEE 9th International Conference on Biometrics Theory, Applications and Systems (BTAS), 2018.
  • Hyperdrive: A Multi-Chip Systolically Scalable Binary-Weight CNN Inference Engine. IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 2018.
  • MBS: Macroblock Scaling for CNN Model Reduction

Citation statistics:

  • 96 highly influenced citations
  • Averaged 141 citations per year from 2017 through 2019
