Corpus ID: 2979879

A Tour of TensorFlow

@article{Goldsborough2016ATO,
  title={A Tour of TensorFlow},
  author={Peter Goldsborough},
  journal={ArXiv},
  year={2016},
  volume={abs/1610.01178}
}
Deep learning is a branch of artificial intelligence employing deep neural network architectures that has significantly advanced the state of the art in computer vision, speech recognition, natural language processing and other domains. [...] We discuss its basic computational paradigms and distributed execution model, its programming interface as well as accompanying visualization toolkits. We then compare TensorFlow to alternative libraries such as Theano, Torch or Caffe on a qualitative as well as …
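
As a concrete illustration of the dataflow-graph-and-session paradigm the paper surveys, a minimal sketch might look like the following (assuming the TensorFlow 1.x Python API contemporary with the paper; the names and tensor shapes are illustrative only, not taken from the paper):

# Minimal sketch of TensorFlow's graph-and-session model (TensorFlow 1.x).
import tensorflow as tf

# Build a symbolic computation graph: y = x W + b. Nothing is executed yet.
x = tf.placeholder(tf.float32, shape=[None, 3], name="x")
W = tf.Variable(tf.random_normal([3, 1]), name="W")
b = tf.Variable(tf.zeros([1]), name="b")
y = tf.matmul(x, W) + b

# Execute the graph in a session; computation happens only when run() is called.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    result = sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]})
    print(result)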
Citations

Evaluating Deep Learning Paradigms with TensorFlow and Keras for Software Effort Estimation
Deep learning is a branch of artificial intelligence that uses deep neural networks. It has made its mark in computer vision, speech recognition, language processing, …
Performance Analysis of Just-in-Time Compilation for Training TensorFlow Multi-Layer Perceptrons
The TensorFlow system [1] has been developed to provide a general, efficient and scalable framework for writing machine learning (ML) applications. With the rapid advancement and popularity of ML, …
Benchmarking TensorFlow on a personal computer not specialised for machine learning
Many recent advancements in modern technology can be attributed to the rapid growth of the machine learning field, and especially deep learning. A big challenge for deep learning is that the learning …
Performance Analysis of Deep Learning Libraries: TensorFlow and PyTorch
Evaluating and comparing these two deep neural network libraries, TensorFlow and PyTorch, shows that PyTorch presented better performance, even though TensorFlow presented a greater GPU utilization rate.
A detailed comparative study of open source deep learning frameworks
This work provides a qualitative and quantitative comparison among three of the most popular and most comprehensive DL frameworks (namely Google's TensorFlow, University of Montreal's Theano and Microsoft's CNTK) to help end users make an informed decision about the best DL framework that suits their needs and resources.
A comparative study of open source deep learning frameworks
The purpose of this work is to provide a qualitative and quantitative comparison among three of the most popular and most comprehensive DL frameworks (namely Google's TensorFlow, University of Montreal's Theano, and Microsoft's CNTK).
VTensor: Using Virtual Tensors to Build a Layout-Oblivious AI Programming Framework
VTensor is proposed, a novel programming model for developing neural network operators that can effectively decouple the tensor layout from the framework and reduce code size by 50.85% on average.
Characterizing Deep Learning Training Workloads on Alibaba-PAI
  • M. Wang, Chen Meng, +4 authors, Y. Jia
  • 2019 IEEE International Symposium on Workload Characterization (IISWC), 2019
An analytical framework is established to investigate the detailed execution time breakdown of various workloads using different training architectures and to identify performance bottlenecks; it shows that weight/gradient communication during training takes almost 62% of the total execution time on average across all the workloads.
A Comparison of the State-of-the-Art Deep Learning Platforms: An Experimental Study
A qualitative and quantitative comparison of the state-of-the-art deep learning platforms is proposed in this study in order to shed light on which platform should be utilized for the implementations of deep neural networks.
Benchmarking open source deep learning frameworks
The purpose of this work is to provide a qualitative and quantitative comparison among three such frameworks: TensorFlow, Theano and CNTK; it finds that CNTK's implementations are superior to the other ones under consideration.

References

Showing 1-10 of 43 references
TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems
The TensorFlow interface and an implementation of that interface built at Google are described; it has been used for conducting research and for deploying machine learning systems into production across more than a dozen areas of computer science and other fields.
Comparative Study of Deep Learning Software Frameworks
A comparative study of five deep learning frameworks, namely Caffe, Neon, TensorFlow, Theano, and Torch, on three aspects: extensibility, hardware utilization, and speed; it finds that Theano and Torch are the most easily extensible frameworks.
Caffe: Convolutional Architecture for Fast Feature Embedding
Caffe provides multimedia scientists and practitioners with a clean and modifiable framework for state-of-the-art deep learning algorithms and a collection of reference models for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures.
Going deeper with convolutions
We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge …
ImageNet classification with deep convolutional neural networks
A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes and employed a recently developed regularization method called "dropout" that proved to be very effective.
Theano: A Python framework for fast computation of mathematical expressions
The performance of Theano is compared against Torch7 and TensorFlow on several machine learning models, and recently-introduced functionalities and improvements are discussed.
Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning
Very deep convolutional networks have been central to the largest advances in image recognition performance in recent years. One example is the Inception architecture, which has been shown to achieve …
Dropout: a simple way to prevent neural networks from overfitting
It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
Scikit-learn: Machine Learning in Python
Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems. This package focuses on bringing …
MLlib: Machine Learning in Apache Spark
MLlib is presented, Spark's open-source distributed machine learning library that provides efficient functionality for a wide range of learning settings and includes several underlying statistical, optimization, and linear algebra primitives.