Energy-Aware Neural Architecture Optimization with Fast Splitting Steepest Descent
@article{Wang2019EnergyAwareNA,
  title   = {Energy-Aware Neural Architecture Optimization with Fast Splitting Steepest Descent},
  author  = {Dilin Wang and Meng Li and L. Wu and Vikas Chandra and Qiang Liu},
  journal = {ArXiv},
  year    = {2019},
  volume  = {abs/1910.03103}
}
Designing energy-efficient networks is of critical importance for enabling state-of-the-art deep learning in mobile and edge settings, where computation and energy budgets are highly limited. Recently, Liu et al. (2019) framed the search for efficient neural architectures as a continuous splitting process: existing neurons are iteratively split into multiple off-springs to achieve progressive loss minimization, discovering novel architectures by gradually growing the neural network.
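The splitting idea from the abstract can be illustrated with a minimal numpy sketch. In splitting steepest descent, a hidden neuron is replaced by two off-springs whose incoming weights are perturbed in opposite directions (in the paper, along an eigen-direction of a "splitting matrix"), while its outgoing weights are shared equally, so the network's function is unchanged in the limit of a vanishing perturbation. The function `split_neuron` below is an illustrative helper, not the authors' implementation; the choice of `direction` is left to the caller.

```python
import numpy as np

def split_neuron(W_in, W_out, idx, direction, eps=1e-2):
    """Split hidden neuron `idx` into two off-springs.

    W_in:  (hidden, in)  incoming weights of a one-hidden-layer net
    W_out: (out, hidden) outgoing weights
    The two copies get incoming weights w +/- eps*direction, and each
    inherits half of the outgoing weight, so at eps -> 0 the split
    network computes exactly the same function as the original.
    """
    w = W_in[idx]
    # Off-spring incoming weights: perturb along the splitting direction.
    w_plus, w_minus = w + eps * direction, w - eps * direction
    W_in_new = np.vstack([W_in, w_minus[None, :]])
    W_in_new[idx] = w_plus
    # Outgoing weight is halved and shared between the two copies.
    W_out_new = np.hstack([W_out, W_out[:, idx:idx + 1] / 2])
    W_out_new[:, idx] /= 2
    return W_in_new, W_out_new
```

With `eps=0` the split is exactly function-preserving (both off-springs compute the same activation, and their halved outgoing weights sum to the original); a small nonzero `eps` along a descent direction is what lets the grown network escape the parametric local optimum.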
3 Citations
- Steepest Descent Neural Architecture Optimization: Escaping Local Optimum with Signed Neural Splitting. ArXiv, 2020.
- Firefly Neural Architecture Descent: a General Approach for Growing Neural Networks. NeurIPS, 2020.
- A Spike in Performance: Training Hybrid-Spiking Neural Networks with Quantized Activation Functions. ArXiv, 2020.
References
Showing 1-10 of 35 references:
- Designing Energy-Efficient Convolutional Neural Networks Using Energy-Aware Pruning. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- ECC: Platform-Independent Energy-Constrained Deep Neural Network Compression via a Bilinear Regression Model. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
- ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression. 2017 IEEE International Conference on Computer Vision (ICCV).
- Channel Pruning for Accelerating Very Deep Neural Networks. 2017 IEEE International Conference on Computer Vision (ICCV).
- Efficient Neural Architecture Search via Parameter Sharing. ICML, 2018.
- Learning Efficient Convolutional Networks through Network Slimming. 2017 IEEE International Conference on Computer Vision (ICCV).
- Splitting Steepest Descent for Growing Neural Architectures. NeurIPS, 2019.
- ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware. ICLR, 2019.
- Deep Compression: Compressing Deep Neural Network with Pruning, Trained Quantization and Huffman Coding. ICLR, 2016.