Corpus ID: 203902590

Energy-Aware Neural Architecture Optimization with Fast Splitting Steepest Descent

@article{Wang2019EnergyAwareNA,
  title={Energy-Aware Neural Architecture Optimization with Fast Splitting Steepest Descent},
  author={Dilin Wang and Meng Li and Lemeng Wu and Vikas Chandra and Qiang Liu},
  journal={ArXiv},
  year={2019},
  volume={abs/1910.03103}
}
  • Dilin Wang, Meng Li, Lemeng Wu, Vikas Chandra, Qiang Liu
  • Published 2019
  • Computer Science, Mathematics
  • ArXiv
  • Designing energy-efficient networks is of critical importance for enabling state-of-the-art deep learning in mobile and edge settings, where computation and energy budgets are highly limited. Recently, Liu et al. (2019) framed the search for efficient neural architectures as a continuous splitting process: it iteratively splits existing neurons into multiple offspring to achieve progressive loss minimization, thus finding novel architectures by gradually growing the neural network…
    3 Citations
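
As a rough illustration of the splitting process described in the abstract, the sketch below shows one splitting step in the spirit of splitting steepest descent (Liu et al., 2019). It is a minimal NumPy sketch, not the authors' implementation: the splitting matrix is approximated here by a finite-difference Hessian of the loss with respect to a single neuron's weight vector, and the neuron is split into two offspring along the minimum eigenvector whenever the minimum eigenvalue is negative. The loss function, offset size eps, and difference step h are hypothetical placeholders, and the paper's "fast" and energy-aware ingredients (eigenvalue approximation and budget-constrained selection of which neurons to split) are omitted.

    import numpy as np

    def splitting_matrix(loss, theta, h=1e-4):
        # Finite-difference Hessian of `loss` w.r.t. one neuron's weight
        # vector `theta`, standing in for the splitting matrix S(theta).
        d = theta.size
        S = np.zeros((d, d))
        I = np.eye(d)
        for i in range(d):
            for j in range(d):
                S[i, j] = (loss(theta + h * I[i] + h * I[j])
                           - loss(theta + h * I[i])
                           - loss(theta + h * I[j])
                           + loss(theta)) / h**2
        return S

    def try_split(loss, theta, eps=1e-2):
        # Split the neuron into two offspring theta +/- eps * v_min when the
        # minimum eigenvalue of the splitting matrix is negative, i.e. when
        # splitting can strictly decrease the loss; otherwise keep the neuron.
        # (In the full method each offspring also inherits half the neuron's
        # output weight, which this sketch does not model.)
        S = splitting_matrix(loss, theta)
        eigvals, eigvecs = np.linalg.eigh(S)  # eigenvalues in ascending order
        if eigvals[0] < 0:
            v = eigvecs[:, 0]  # minimum-eigenvalue direction
            return [theta + eps * v, theta - eps * v]
        return [theta]

    # Toy usage: a quadratic "loss" with one negative-curvature direction,
    # so the neuron is splittable along that direction.
    A = np.diag([1.0, -0.5])
    toy_loss = lambda t: float(t @ A @ t)
    print(try_split(toy_loss, np.zeros(2)))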

    References

    Showing 1-10 of 35 references:
    • Designing Energy-Efficient Convolutional Neural Networks Using Energy-Aware Pruning
    • ECC: Platform-Independent Energy-Constrained Deep Neural Network Compression via a Bilinear Regression Model
    • ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression
    • Channel Pruning for Accelerating Very Deep Neural Networks. Yihui He, X. Zhang, Jian Sun. 2017 IEEE International Conference on Computer Vision (ICCV), 2017.
    • Efficient Neural Architecture Search via Parameter Sharing
    • Collaborative Channel Pruning for Deep Networks
    • Learning Efficient Convolutional Networks through Network Slimming
    • Splitting Steepest Descent for Growing Neural Architectures
    • ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware
    • Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding