Corpus ID: 53829043

WaveletNet: Logarithmic Scale Efficient Convolutional Neural Networks for Edge Devices

@article{Jing2018WaveletNetLS,
  title={WaveletNet: Logarithmic Scale Efficient Convolutional Neural Networks for Edge Devices},
  author={Li Jing and Rumen Dangovski and Marin Solja{\v{c}}i{\'c}},
  journal={ArXiv},
  year={2018},
  volume={abs/1811.11644}
}
We present a logarithmic-scale efficient convolutional neural network architecture for edge devices, named WaveletNet. Our model is based on the well-known depthwise convolution, and on two new layers, which we introduce in this work: a wavelet convolution and a depthwise fast wavelet transform. By breaking the symmetry in channel dimensions and applying a fast algorithm, WaveletNet shrinks the complexity of convolutional blocks by an O(log D / D) factor, where D is the number of channels…
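The quoted O(log D / D) factor appears to come from mixing channels with a fast wavelet transform instead of a dense projection. Below is a minimal NumPy sketch of that primitive, a fast Haar transform along the channel axis; the helper name `haar_mix_channels`, the normalization, and the plain-pyramid structure are assumptions for illustration, not the paper's actual layer.

```python
import numpy as np

def haar_mix_channels(x):
    """Fast Haar transform along the channel (last) axis.

    Hypothetical helper for illustration only: x has shape (..., D) with D a
    power of two, e.g. per-pixel channel vectors of a feature map. Each level
    pairs up the current approximation coefficients, so the whole pyramid costs
    O(D) adds per position, versus O(D^2) multiply-adds for a dense 1x1
    (pointwise) convolution that mixes all D channels.
    """
    out = np.array(x, dtype=np.float64, copy=True)
    d = out.shape[-1]
    if d & (d - 1):
        raise ValueError("channel count must be a power of two")
    length = d
    while length > 1:
        half = length // 2
        even = out[..., 0:length:2].copy()  # even-indexed coefficients
        odd = out[..., 1:length:2].copy()   # odd-indexed coefficients
        out[..., :half] = (even + odd) / np.sqrt(2.0)        # approximation
        out[..., half:length] = (even - odd) / np.sqrt(2.0)  # detail
        length = half
    return out

# Toy feature map: 4 spatial positions, D = 8 channels.
features = np.random.randn(4, 8)
mixed = haar_mix_channels(features)
print(mixed.shape)  # (4, 8): same shape, channels now mixed across wavelet scales
```

For comparison, a dense 1x1 convolution mixing D channels needs D^2 multiply-adds per spatial position, while even a variant of this transform with log2 D full-width stages stays at D log2 D, which is consistent with the O(log D / D) reduction quoted above.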

References

Showing 1-10 of 36 references
Factorized Convolutional Neural Networks
The proposed convolutional layer is composed of a low-cost single intra-channel convolution and a linear channel projection that can effectively preserve the spatial information and maintain the accuracy with significantly less computation.
ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices
An extremely computation-efficient CNN architecture named ShuffleNet is introduced, which is designed specially for mobile devices with very limited computing power (e.g., 10-150 MFLOPs), to greatly reduce computation cost while maintaining accuracy.
MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
This work introduces two simple global hyper-parameters that efficiently trade off between latency and accuracy and demonstrates the effectiveness of MobileNets across a wide range of applications and use cases, including object detection, fine-grained classification, face attributes, and large-scale geo-localization.
Spectral Representations for Convolutional Neural Networks
This work proposes spectral pooling, which performs dimensionality reduction by truncating the representation in the frequency domain, and demonstrates the effectiveness of complex-coefficient spectral parameterization of convolutional filters.
ImageNet classification with deep convolutional neural networks
A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes and employed a recently developed regularization method called "dropout" that proved to be very effective.
Xception: Deep Learning with Depthwise Separable Convolutions
  • François Chollet
  • Computer Science, Mathematics
  • 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2017
This work proposes a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with depthwise separable convolutions, and shows that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset, and significantly outperforms it on a larger image classification dataset.
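Since WaveletNet starts from the depthwise separable convolutions popularized by Xception and MobileNets, a quick parameter count shows where the savings, and the remaining quadratic bottleneck, sit. This is a hypothetical back-of-the-envelope sketch; the layer sizes are made up for illustration.

```python
def standard_conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Weights in a k x k depthwise convolution followed by a 1x1 pointwise one."""
    depthwise = k * k * c_in   # one k x k filter per input channel
    pointwise = c_in * c_out   # dense channel mixing, still quadratic in width
    return depthwise + pointwise

# Hypothetical layer: 3x3 kernels, 256 input and 256 output channels.
std = standard_conv_params(3, 256, 256)        # 589,824
sep = depthwise_separable_params(3, 256, 256)  # 2,304 + 65,536 = 67,840
print(std, sep, round(std / sep, 1))           # roughly an 8.7x reduction
```

The pointwise term still grows as D^2 with channel width; that is the piece the wavelet layers described in the abstract target, per its O(log D / D) factor.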
The fast Haar transform
Wavelet theory and its relatives (subband coding, filter banks and multiresolution analysis) have become hot this last decade. Like the sinusoids in Fourier analysis, wavelets form bases that can…
Interleaved Group Convolutions for Deep Neural Networks
This paper presents a simple and modularized neural network architecture, named interleaved group convolutional neural networks (IGCNets), and discusses one representative advantage: the network is wider than a regular convolution while the number of parameters and the computation complexity are preserved.
Very Deep Convolutional Networks for Large-Scale Image Recognition
This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting, using an architecture with very small convolution filters, and shows that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
IGCV3: Interleaved Low-Rank Group Convolutions for Efficient Deep Neural Networks
It is empirically demonstrated that the combination of low-rank and sparse kernels boosts performance, and that the proposed approach is superior to the state-of-the-art IGCV2 and MobileNetV2 on image classification on CIFAR and ImageNet and on object detection on COCO.