Corpus ID: 204901036

LPRNet: Lightweight Deep Network by Low-rank Pointwise Residual Convolution

@article{Sun2019LPRNetLD,
  title={LPRNet: Lightweight Deep Network by Low-rank Pointwise Residual Convolution},
  author={Bin Sun and Jun Li and Ming Shao and Yun Raymond Fu},
  journal={ArXiv},
  year={2019},
  volume={abs/1910.11853}
}
Deep learning has become popular in recent years primarily due to powerful computing devices such as GPUs. However, deploying these deep models to end-user devices, smartphones, or embedded systems with limited resources is challenging. To reduce the computation and memory costs, we propose a novel lightweight deep learning module by low-rank pointwise residual (LPR) convolution, called LPRNet. Essentially, LPR aims at using low-rank approximation in pointwise convolution to further reduce…
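Only a truncated abstract is shown, so the exact block design is not available here; the following is a minimal PyTorch sketch of the stated idea: factorize the 1x1 pointwise convolution through a low-rank bottleneck and add a residual path. The class name, the rank hyperparameter, and the normalization and activation choices are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class LPRConv(nn.Module):
    """Sketch of a low-rank pointwise residual (LPR) convolution.

    A full-rank 1x1 convolution over `channels` costs channels * channels
    weights; factorizing it through a rank-r bottleneck costs only
    channels * r + r * channels, and the identity residual helps recover
    information lost to the low-rank approximation.
    """

    def __init__(self, channels: int, rank: int):
        super().__init__()
        # Two 1x1 convolutions act as the low-rank factors U (reduce) and V (expand).
        self.reduce = nn.Conv2d(channels, rank, kernel_size=1, bias=False)
        self.expand = nn.Conv2d(rank, channels, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Low-rank pointwise path plus identity residual.
        return torch.relu(self.bn(self.expand(self.reduce(x))) + x)

x = torch.randn(1, 64, 32, 32)
print(LPRConv(64, rank=16)(x).shape)  # torch.Size([1, 64, 32, 32])
```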
Citations

Block Mobilenet: Align Large-Pose Faces with <1MB Model Size
  • Bin Sun, Jun Li, Y. Fu
  • Computer Science
  • 2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020)
  • 2020
TLDR
A novel Depthwise Separable Block (DSB), consisting of a depthwise block and a pointwise block, that achieves better overall performance than state-of-the-art methods.
Refined CNNs for Face Recognition Applications on Embedded Devices
TLDR
This paper refines an efficient CNN architecture, R-MobileFaceNet, for face verification with extreme efficiency in real-time face applications on embedded devices, and proposes a Dynamically Fuzzy Image Dataset (DFID) to evaluate how well the models can be deployed on embedded platforms.

References

SHOWING 1-10 OF 57 REFERENCES
IGCV3: Interleaved Low-Rank Group Convolutions for Efficient Deep Neural Networks
TLDR
It is empirically demonstrated that the combination of low-rank and sparse kernels boosts performance, and that the proposed approach is superior to the state of the art, IGCV2 and MobileNetV2, on image classification on CIFAR and ImageNet and object detection on COCO.
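As a rough illustration of the interleaved group-convolution idea (not code from the IGCV3 paper), a PyTorch sketch follows: two group-wise 1x1 convolutions, whose weight matrices are block-diagonal and hence sparse and low-rank, separated by a channel permutation so information mixes across groups. Group count and channel size are arbitrary choices.

```python
import torch
import torch.nn as nn

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Permute channels so the next group convolution mixes across groups."""
    n, c, h, w = x.shape
    return x.view(n, groups, c // groups, h, w).transpose(1, 2).reshape(n, c, h, w)

class InterleavedGroupBlock(nn.Module):
    """Two group-wise 1x1 convolutions interleaved by a channel shuffle."""

    def __init__(self, channels: int, groups: int):
        super().__init__()
        self.g1 = nn.Conv2d(channels, channels, kernel_size=1, groups=groups, bias=False)
        self.g2 = nn.Conv2d(channels, channels, kernel_size=1, groups=groups, bias=False)
        self.groups = groups

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.g2(channel_shuffle(self.g1(x), self.groups))

print(InterleavedGroupBlock(64, groups=4)(torch.randn(1, 64, 16, 16)).shape)
```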
Learning Structured Sparsity in Deep Neural Networks
TLDR
The results show that, for CIFAR-10, regularization on layer depth can reduce a 20-layer Deep Residual Network to 18 layers while improving the accuracy from 91.25% to 92.60%, which is still slightly higher than that of the original ResNet with 32 layers.
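The structured-sparsity idea can be sketched as a group Lasso penalty added to the task loss; zeroing a whole group (here, one output filter) removes structured computation. The penalty weight and the grouping below are illustrative, not the paper's exact setup.

```python
import torch
import torch.nn as nn

def group_lasso(conv: nn.Conv2d) -> torch.Tensor:
    """Sum of L2 norms over output filters (one group per filter).

    Unlike plain L1, this drives entire filters to zero, so the pruned
    network keeps a dense, hardware-friendly structure.
    """
    w = conv.weight                                # (out_ch, in_ch, kH, kW)
    return w.flatten(start_dim=1).norm(dim=1).sum()

conv = nn.Conv2d(16, 32, kernel_size=3, padding=1)
task_loss = conv(torch.randn(4, 16, 8, 8)).pow(2).mean()  # stand-in for a real loss
loss = task_loss + 1e-4 * group_lasso(conv)               # penalty weight is assumed
loss.backward()
```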
Speeding up Convolutional Neural Networks with Low Rank Expansions
TLDR
Two simple schemes for drastically speeding up convolutional neural networks are presented; speedups are achieved by exploiting cross-channel or filter redundancy to construct a low-rank basis of filters that are rank-1 in the spatial domain.
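A hedged sketch of that structure in PyTorch: a k x k convolution is replaced by a vertical k x 1 convolution into an intermediate basis of m filters followed by a horizontal 1 x k convolution, so every learned filter is rank-1 spatially. The channel sizes and m are made up; the paper learns the basis by minimizing reconstruction error, which is omitted here.

```python
import torch
import torch.nn as nn

# Full 3x3 convolution: 64 * 64 * 9 = 36,864 weights.
full = nn.Conv2d(64, 64, kernel_size=3, padding=1, bias=False)

# Rank-1 spatial factorization through m intermediate filters:
# 64 * 32 * 3 + 32 * 64 * 3 = 12,288 weights, same output shape.
m = 32
separable = nn.Sequential(
    nn.Conv2d(64, m, kernel_size=(3, 1), padding=(1, 0), bias=False),  # vertical
    nn.Conv2d(m, 64, kernel_size=(1, 3), padding=(0, 1), bias=False),  # horizontal
)

x = torch.randn(1, 64, 32, 32)
print(full(x).shape, separable(x).shape)  # both torch.Size([1, 64, 32, 32])
```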
Speeding-up Convolutional Neural Networks Using Fine-tuned CP-Decomposition
TLDR
A simple two-step approach for speeding up convolution layers within large convolutional neural networks is proposed, based on tensor decomposition and discriminative fine-tuning; for the smaller of the two networks it yields higher CPU speedups at the cost of a modest accuracy drop.
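The recipe is: CP-decompose a trained kernel tensor, rebuild the layer from the factors, then fine-tune. A hedged PyTorch sketch of the layer structure that a rank-R CP decomposition of a k x k convolution produces; the factor weights would be copied from the decomposition (e.g. computed with a tensor library), which is omitted here.

```python
import torch
import torch.nn as nn

def cp_decomposed_conv(in_ch: int, out_ch: int, k: int, rank: int) -> nn.Sequential:
    """Layer sequence equivalent to a rank-`rank` CP factorization of a k x k conv."""
    return nn.Sequential(
        nn.Conv2d(in_ch, rank, kernel_size=1, bias=False),            # input-channel factor
        nn.Conv2d(rank, rank, kernel_size=(k, 1), padding=(k // 2, 0),
                  groups=rank, bias=False),                           # vertical spatial factor
        nn.Conv2d(rank, rank, kernel_size=(1, k), padding=(0, k // 2),
                  groups=rank, bias=False),                           # horizontal spatial factor
        nn.Conv2d(rank, out_ch, kernel_size=1, bias=False),           # output-channel factor
    )

print(cp_decomposed_conv(64, 64, k=3, rank=16)(torch.randn(1, 64, 32, 32)).shape)
```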
ESPNetv2: A Light-Weight, Power Efficient, and General Purpose Convolutional Neural Network
We introduce a light-weight, power efficient, and general purpose convolutional neural network, ESPNetv2, for modeling visual and sequential data. Our network uses group point-wise and depth-wise dilated separable convolutions to learn representations from a large effective receptive field with fewer FLOPs and parameters.
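A hedged sketch of the two ingredients named in the abstract: a depth-wise dilated convolution enlarges the receptive field with no extra weights, and a group point-wise convolution mixes channels cheaply. Dilation rate, group count, and channel sizes are arbitrary; this is not the paper's full EESP unit.

```python
import torch
import torch.nn as nn

d = 2  # dilation rate (assumed)
espnet_style = nn.Sequential(
    # Depth-wise dilated 3x3: one filter per channel, effective receptive field 5x5.
    nn.Conv2d(64, 64, kernel_size=3, padding=d, dilation=d, groups=64, bias=False),
    # Group point-wise 1x1: cross-channel mixing at 1/4 of the full 1x1 cost.
    nn.Conv2d(64, 64, kernel_size=1, groups=4, bias=False),
)
print(espnet_style(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```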
Xception: Deep Learning with Depthwise Separable Convolutions
  • François Chollet
  • Computer Science, Mathematics
  • 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2017
TLDR
This work proposes a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with depthwise separable convolutions, and shows that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset, and significantly outperforms it on a larger image classification dataset.
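A minimal PyTorch sketch of a depthwise separable convolution, the building block behind Xception and the Depthwise Separable Block cited above; channel sizes are arbitrary and normalization and activation layers are omitted for brevity.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Per-channel spatial filtering (depthwise) followed by cross-channel
    mixing (1x1 pointwise), in place of one dense k x k convolution."""

    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, k, padding=k // 2,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.depthwise(x))

print(DepthwiseSeparableConv(32, 64)(torch.randn(1, 32, 28, 28)).shape)
```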
Deep Residual Learning for Image Recognition
TLDR
This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
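A minimal sketch of the residual idea in PyTorch: the block outputs F(x) + x, so the layers only have to learn the residual F rather than the full mapping, which is what eases optimization at depth. Layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """y = F(x) + x, with F a small stack of conv-BN layers."""

    def __init__(self, ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1, bias=False), nn.BatchNorm2d(ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1, bias=False), nn.BatchNorm2d(ch),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Identity shortcut; a projection would be needed if shapes changed.
        return torch.relu(self.body(x) + x)

print(BasicResidualBlock(64)(torch.randn(1, 64, 16, 16)).shape)
```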
Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning
Very deep convolutional networks have been central to the largest advances in image recognition performance in recent years. One example is the Inception architecture, which has been shown to achieve very good performance at relatively low computational cost.
Inverted Residuals and Linear Bottlenecks: Mobile Networks for Classification, Detection and Segmentation
TLDR
A new mobile architecture, MobileNetV2, is described that improves the state-of-the-art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes.
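A hedged sketch of MobileNetV2's inverted residual with a linear bottleneck: expand channels with a 1x1 convolution, filter with a depthwise 3x3, then project back with a 1x1 that has no activation (the linear bottleneck). The expansion factor and channel count below are illustrative.

```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    """Expand -> depthwise filter -> linear 1x1 projection, with a shortcut."""

    def __init__(self, ch: int, expand: int = 6):
        super().__init__()
        hidden = ch * expand
        self.block = nn.Sequential(
            nn.Conv2d(ch, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, ch, 1, bias=False), nn.BatchNorm2d(ch),  # linear bottleneck
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Shortcut applies only when stride is 1 and shapes match, as here.
        return x + self.block(x)

print(InvertedResidual(32)(torch.randn(1, 32, 14, 14)).shape)
```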
CondenseNet: An Efficient DenseNet Using Learned Group Convolutions
TLDR
CondenseNet is developed, a novel network architecture with unprecedented efficiency that combines dense connectivity with a novel module called learned group convolution, allowing for efficient computation in practice. Expand