Corpus ID: 4854050

Exploring Linear Relationship in Feature Map Subspace for ConvNets Compression

@article{Wang2018ExploringLR,
  title={Exploring Linear Relationship in Feature Map Subspace for ConvNets Compression},
  author={Dong Wang and L. Zhou and Xueni Zhang and Xiao Bai and J. Zhou},
  journal={ArXiv},
  year={2018},
  volume={abs/1803.05729}
}
While research on convolutional neural networks (CNNs) is progressing quickly, real-world deployment of these models is often limited by computing resources and memory constraints. In this paper, we address this issue by proposing a novel filter pruning method to compress and accelerate CNNs. Our work is based on the linear relationship identified in different feature map subspaces via visualization of feature maps. Such a linear relationship implies that the information in CNNs is…
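The abstract only sketches the approach, but its core intuition (some filters produce feature maps that are nearly linear combinations of other filters' maps, so those filters carry redundant information and can be pruned) can be illustrated with a small sketch. The snippet below is an assumption-based toy, not the paper's exact algorithm: it scores each filter's output map by the residual of a least-squares reconstruction from the remaining maps and prunes the most linearly predictable ones.

```python
# Illustrative sketch only: scores filters by how well each one's feature map
# can be linearly reconstructed from the other filters' maps, then prunes the
# most redundant ones. This is an assumption, not the paper's actual criterion.
import numpy as np

def redundancy_scores(feature_maps):
    """feature_maps: array of shape (num_filters, H*W) for one input."""
    num_filters = feature_maps.shape[0]
    scores = np.zeros(num_filters)
    for i in range(num_filters):
        target = feature_maps[i]                       # map to reconstruct
        others = np.delete(feature_maps, i, axis=0).T  # remaining maps as columns
        coeffs, *_ = np.linalg.lstsq(others, target, rcond=None)
        residual = target - others @ coeffs
        # Small residual => map is nearly a linear combination of the others.
        scores[i] = np.linalg.norm(residual) / (np.linalg.norm(target) + 1e-12)
    return scores

def select_filters_to_prune(feature_maps, prune_ratio=0.3):
    """Return indices of the most linearly redundant filters."""
    scores = redundancy_scores(feature_maps)
    num_prune = int(len(scores) * prune_ratio)
    return np.argsort(scores)[:num_prune]              # smallest residuals first

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    maps = rng.normal(size=(8, 14 * 14))               # 8 filters, 14x14 maps
    maps[3] = 0.5 * maps[0] + 2.0 * maps[5]            # make one map redundant
    print(select_filters_to_prune(maps, prune_ratio=0.25))
```

The paper derives its pruning criterion from the structure of feature map subspaces rather than this per-map least-squares residual; the sketch only conveys the underlying idea that linearly predictable feature maps are redundant.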
16 Citations
Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration (247 citations)
Pruning Filter via Geometric Median for Deep Convolutional Neural Networks Acceleration (21 citations)
Data Agnostic Filter Gating for Efficient Deep Networks (1 citation)
A One-step Pruning-recovery Framework for Acceleration of Convolutional Neural Networks
Meta Filter Pruning to Accelerate Deep Convolutional Neural Networks (5 citations; highly influenced)
Similarity Based Filter Pruning for Efficient Super-Resolution Models. Chu Chu, Li Chen, Zhiyong Gao. 2020 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), 2020
Channel Pruning for Accelerating Convolutional Neural Networks via Wasserstein Metric
Leveraging Filter Correlations for Deep Model Compression (24 citations)
Learning Filter Pruning Criteria for Deep Convolutional Neural Networks Acceleration (22 citations; highly influenced)
Deep Network Pruning for Object Detection (3 citations)

References

SHOWING 1-10 OF 39 REFERENCES
Beyond Filters: Compact Feature Map for Portable Deep Model (41 citations)
ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression (812 citations; highly influential)
Pruning Filters for Efficient ConvNets (1,522 citations; highly influential)
Compressing Deep Convolutional Networks using Vector Quantization (730 citations)
An Exploration of Parameter Redundancy in Deep Networks with Circulant Projections (230 citations)
Channel Pruning for Accelerating Very Deep Neural Networks. Yihui He, X. Zhang, Jian Sun. 2017 IEEE International Conference on Computer Vision (ICCV), 2017 (1,033 citations; highly influential)
Accelerating Very Deep Convolutional Networks for Classification and Detection (411 citations)
Speeding up Convolutional Neural Networks with Low Rank Expansions (952 citations)
Compact Deep Convolutional Neural Networks With Coarse Pruning (39 citations)