Sparse evolutionary deep learning with over one million artificial neurons on commodity hardware

@article{Liu2020SparseED,
  title={Sparse evolutionary deep learning with over one million artificial neurons on commodity hardware},
  author={S. Liu and D. Mocanu and Amarsagar Reddy Ramapuram Matavalam and Y. Pei and M. Pechenizkiy},
  journal={Neural Computing and Applications},
  year={2020},
  pages={1--16}
}
  • S. Liu, D. Mocanu, A. R. Ramapuram Matavalam, Y. Pei, M. Pechenizkiy
  • Published 2020
  • Computer Science, Mathematics
  • Neural Computing and Applications
  • Artificial neural networks (ANNs) have emerged as a hot topic in the research community. Despite their success, it is challenging to train and deploy modern ANNs on commodity hardware because of ever-increasing model sizes and unprecedented growth in data volumes. Microarray data is a particularly hard case: its very high dimensionality combined with a small number of samples is difficult for machine learning techniques to handle. Furthermore, specialized hardware such as graphics processing unit… (abstract truncated; a sketch of the prune-and-regrow idea behind sparse evolutionary training follows below)
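
The excerpt cuts off before the method itself, but the "sparse evolutionary" training in the title refers to the SET procedure introduced in the first reference below (Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science): each layer starts from a sparse Erdős–Rényi topology, and after every epoch the connections with weights closest to zero are pruned and replaced by new random ones. Below is a minimal NumPy sketch of one such prune-and-regrow step; the function names, the epsilon and zeta defaults, and the dense boolean mask are illustrative stand-ins for the paper's truly sparse data structures.

import numpy as np

rng = np.random.default_rng(0)

def erdos_renyi_mask(n_in, n_out, epsilon=20):
    # Erdos-Renyi sparse topology as in SET: expected density is
    # epsilon * (n_in + n_out) / (n_in * n_out); epsilon is illustrative.
    density = min(1.0, epsilon * (n_in + n_out) / (n_in * n_out))
    return rng.random((n_in, n_out)) < density

def set_step(weights, mask, zeta=0.3):
    # One evolution step: prune the fraction `zeta` of active connections
    # whose weights are closest to zero, then regrow the same number of
    # connections at random inactive positions (simplified sketch).
    active = np.flatnonzero(mask)
    n_prune = int(zeta * active.size)
    # Prune the smallest-magnitude active weights.
    order = np.argsort(np.abs(weights.ravel()[active]))
    pruned = active[order[:n_prune]]
    mask.ravel()[pruned] = False
    weights.ravel()[pruned] = 0.0
    # Regrow at random inactive positions with a fresh initialization.
    inactive = np.flatnonzero(~mask)
    grown = rng.choice(inactive, size=n_prune, replace=False)
    mask.ravel()[grown] = True
    weights.ravel()[grown] = rng.normal(0.0, 0.1, size=n_prune)
    return weights, mask

# Usage: one hidden layer, one evolution step between training epochs.
n_in, n_out = 1000, 500
mask = erdos_renyi_mask(n_in, n_out)
weights = np.where(mask, rng.normal(0.0, 0.1, (n_in, n_out)), 0.0)
weights, mask = set_step(weights, mask)

The paper's contribution, as the title suggests, is implementing this cycle with truly sparse matrix representations rather than dense masks, which is what makes networks with over one million neurons trainable on commodity hardware.
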
    11 Citations
    Learning Sparse Neural Networks for Better Generalization
    • S. Liu
    • Computer Science
    • IJCAI
    • 2020
    Exposing Hardware Building Blocks to Machine Learning Frameworks
    Activation function impact on Sparse Neural Networks
    Deep Learning on Computational-Resource-Limited Platforms: A Survey
    • 1 citation
    Smart Anomaly Detection in Sensor Systems
    Smart Anomaly Detection in Sensor Systems: A Multi-Perspective Review.
    • 1 citation

    References

    Showing 1-10 of 88 references
    Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science
    • 122 citations
    Parameter Efficient Training of Deep Convolutional Neural Networks by Dynamic Sparse Reparameterization
    • 53 citations
    Exploring Sparsity in Recurrent Neural Networks
    • 156 citations
    Learning both Weights and Connections for Efficient Neural Network
    • 2,771 citations
    NeST: A Neural Network Synthesis Tool Based on a Grow-and-Prune Paradigm
    • 91 citations
    Rigging the Lottery: Making All Tickets Winners
    • 44 citations
    Deep Rewiring: Training very sparse deep networks
    • 75 citations
    Sparse Networks from Scratch: Faster Training without Losing Performance
    • 59 citations
    Soft Weight-Sharing for Neural Network Compression
    • 226 citations