Optimising AI Training Deployments using Graph Compilers and Containers

@inproceedings{Mujkanovic2020OptimisingAT,
  title={Optimising AI Training Deployments using Graph Compilers and Containers},
  author={Nina Mujkanovic and K. Sivalingam and A. Lazzaro},
  booktitle={2020 IEEE High Performance Extreme Computing Conference (HPEC)},
  year={2020},
  pages={1--8}
}
Artificial Intelligence (AI) applications based on Deep Neural Networks (DNN) or Deep Learning (DL) have become popular due to their success in solving problems such as image analysis and speech recognition. Training a DNN is computationally intensive, and High Performance Computing (HPC) has been a key driver of AI growth. Virtualisation and container technology have led to the convergence of cloud and HPC infrastructure. These infrastructures with diverse hardware increase the complexity of…
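
The title pairs two deployment techniques: graph compilers, which lower a network's computation graph into optimised device code, and containers, which package the software stack for portable deployment. As a minimal sketch of the first (assuming TensorFlow 2.x with its XLA compiler as a representative graph compiler; the toy model and random data are illustrative, not the paper's benchmark configuration), a training step can be routed through XLA with jit_compile=True:

import tensorflow as tf

# Illustrative toy model; the paper's actual models and versions are not shown here.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

# jit_compile=True asks TensorFlow to lower the traced graph through the XLA
# graph compiler, which can fuse element-wise ops and reduce kernel launches.
@tf.function(jit_compile=True)
def train_step(x, y):
    with tf.GradientTape() as tape:
        logits = model(x, training=True)
        loss = loss_fn(y, logits)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

# One step on random data; the first call triggers tracing and XLA compilation.
x = tf.random.normal([32, 784])
y = tf.random.uniform([32], maxval=10, dtype=tf.int32)
print(train_step(x, y))

Whether compilation pays off depends on the model and the target hardware; the first compiled call also incurs a one-time compilation cost that is amortised over subsequent steps.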
