CHAOS: a parallelization scheme for training convolutional neural networks on Intel Xeon Phi

@article{Viebke2017CHAOSAP,
  title={CHAOS: a parallelization scheme for training convolutional neural networks on Intel Xeon Phi},
  author={Andre Viebke and Suejb Memeti and S. Pllana and A. Abraham},
  journal={The Journal of Supercomputing},
  year={2017},
  volume={75},
  pages={197--227}
}
Abstract: Deep learning is an important component of Big Data analytic tools and intelligent applications, such as self-driving cars, computer vision, speech recognition, or precision medicine. However, the training process is computationally intensive and often requires a large amount of time if performed sequentially. Modern parallel computing systems provide the capability to reduce the required training time of deep neural networks. In this paper, we present our parallelization scheme for training…
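
The CHAOS scheme itself is not reproduced on this page; as a rough illustration of the general pattern it builds on (many threads applying SGD updates to shared weights across a many-core processor, here lock-free in Hogwild style), the sketch below trains a tiny linear model with OpenMP. The model, data, and hyperparameters are hypothetical stand-ins for illustration, not the paper's implementation.

// Minimal sketch of thread-parallel SGD with OpenMP. Illustrative only:
// CHAOS parallelizes full CNN training on Intel Xeon Phi; a tiny linear
// model stands in here so the parallelization pattern stays visible.
// Compile with: g++ -O2 -fopenmp sgd_sketch.cpp
#include <cstdio>
#include <random>
#include <vector>
#include <omp.h>

int main() {
    const int n_samples = 10000, n_features = 8, epochs = 5;
    const float lr = 0.01f;

    // Synthetic regression data: y = sum(x) + noise.
    std::mt19937 rng(42);
    std::normal_distribution<float> dist(0.0f, 1.0f);
    std::vector<float> X(n_samples * n_features), y(n_samples);
    for (int i = 0; i < n_samples; ++i) {
        float s = 0.0f;
        for (int j = 0; j < n_features; ++j) {
            X[i * n_features + j] = dist(rng);
            s += X[i * n_features + j];
        }
        y[i] = s + 0.1f * dist(rng);
    }

    // Shared weight vector, updated by all threads without locks.
    std::vector<float> w(n_features, 0.0f);

    for (int e = 0; e < epochs; ++e) {
        // Each thread takes a slice of the samples (data parallelism).
        #pragma omp parallel for schedule(static)
        for (int i = 0; i < n_samples; ++i) {
            // Forward pass: prediction for one sample.
            float pred = 0.0f;
            for (int j = 0; j < n_features; ++j)
                pred += w[j] * X[i * n_features + j];
            // Backward pass: squared-error gradient applied directly to
            // the shared weights (races tolerated, Hogwild-style).
            float err = pred - y[i];
            for (int j = 0; j < n_features; ++j)
                w[j] -= lr * err * X[i * n_features + j];
        }
        // Report mean squared error after each epoch.
        double loss = 0.0;
        for (int i = 0; i < n_samples; ++i) {
            float pred = 0.0f;
            for (int j = 0; j < n_features; ++j)
                pred += w[j] * X[i * n_features + j];
            loss += (pred - y[i]) * (pred - y[i]);
        }
        std::printf("epoch %d  mse %.4f\n", e, loss / n_samples);
    }
    return 0;
}

The lock-free updates trade strict determinism for scalability: on a many-core chip, per-update synchronization would serialize the workers, so tolerating occasional stale reads is a common design choice in this family of schemes.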