Corpus ID: 229924027

TensorX: Extensible API for Neural Network Model Design and Deployment

Davide Nunes and Luis Antunes
TensorX is a Python library for the prototyping, design, and deployment of complex neural network models in TensorFlow. Special emphasis is placed on ease of use, performance, and API consistency. It aims to provide high-level components, such as neural network layers that are, in effect, stateful functions, easy to compose and reuse. Its architecture allows for the expression of patterns commonly found when building neural network models in either research or industrial settings. Borrowing ideas…
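The abstract describes layers as "stateful functions" that are easy to compose. TensorX's actual API is not shown here; the sketch below only illustrates that general pattern in plain NumPy, with hypothetical class and function names that are not taken from the library.

```python
import numpy as np

class Linear:
    """A layer as a stateful function: it owns its weights (state)
    and is callable on inputs like an ordinary function.
    Hypothetical illustration, not TensorX's API."""
    def __init__(self, in_dim, out_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(0.0, 0.1, size=(in_dim, out_dim))
        self.b = np.zeros(out_dim)

    def __call__(self, x):
        return x @ self.w + self.b

def relu(x):
    return np.maximum(x, 0.0)

# Because layers are callables, composing a model is just function chaining.
hidden = Linear(4, 8)
output = Linear(8, 2)

def model(x):
    return output(relu(hidden(x)))

y = model(np.ones((3, 4)))
print(y.shape)  # (3, 2)
```

Keeping state inside a callable object is what lets the same layer be reused in several places of a model while sharing one set of weights.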

fastai: A Layered API for Deep Learning

fastai is a deep learning library which provides practitioners with high-level components that can quickly and easily provide state-of-the-art results in standard deep learning domains.

TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems

The TensorFlow interface and an implementation of that interface that is built at Google are described, which has been used for conducting research and for deploying machine learning systems into production across more than a dozen areas of computer science and other fields.

TFX: A TensorFlow-Based Production-Scale Machine Learning Platform

TensorFlow Extended (TFX) is presented, a TensorFlow-based general-purpose machine learning platform implemented at Google that was able to standardize the components, simplify the platform configuration, and reduce the time to production from the order of months to weeks, while providing platform stability that minimizes disruptions.

Chainer: A Deep Learning Framework for Accelerating the Research Cycle

The Chainer framework is introduced, which intends to provide a flexible, intuitive, and high performance means of implementing the full range of deep learning models needed by researchers and practitioners.

PyTorch: An Imperative Style, High-Performance Deep Learning Library

This paper details the principles that drove the implementation of PyTorch and how they are reflected in its architecture, and explains how the careful and pragmatic implementation of the key components of its runtime enables them to work together to achieve compelling performance.

MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems

The API design and the system implementation of MXNet are described, and it is explained how embedding of both symbolic expression and tensor operation is handled in a unified fashion.

DyNet: The Dynamic Neural Network Toolkit

DyNet is a toolkit for implementing neural network models based on dynamic declaration of network structure that has an optimized C++ backend and lightweight graph representation and is designed to allow users to implement their models in a way that is idiomatic in their preferred programming language.

On-the-fly Operation Batching in Dynamic Computation Graphs

This paper presents an algorithm, and its implementation in the DyNet toolkit, for automatically batching operations; it obtains throughput similar to that obtained with manual batching, as well as comparable speedups over single-instance learning on architectures that are impractical to batch manually.
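The core idea behind automatic batching, grouping pending operations that share the same signature and executing each group as one larger operation, can be sketched in a few lines of NumPy. This is a toy illustration of the technique, not DyNet's implementation, and the function names are hypothetical.

```python
import numpy as np
from collections import defaultdict

def batched_apply(pending_ops):
    """Toy auto-batching: matrix-vector products that share the same
    weight matrix (same `key`) are grouped and executed as a single
    matrix-matrix product instead of one kernel call each."""
    groups = defaultdict(list)
    for i, (key, W, x) in enumerate(pending_ops):
        groups[key].append((i, W, x))

    results = [None] * len(pending_ops)
    for items in groups.values():
        W = items[0][1]                          # shared weight for the group
        X = np.stack([x for _, _, x in items])   # (group_size, in_dim)
        Y = X @ W.T                              # one batched product
        for (i, _, _), y in zip(items, Y):
            results[i] = y                       # scatter back per instance
    return results

W = np.arange(6.0).reshape(2, 3)
ops = [("lin", W, np.ones(3)), ("lin", W, np.arange(3.0))]
out = batched_apply(ops)
print(out[0])  # equals W @ np.ones(3)
```

The benefit in practice comes from replacing many small kernel launches with one large one; the per-instance results are identical to running each operation on its own.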

In-datacenter performance analysis of a tensor processing unit

  • N. Jouppi, C. Young, D. Yoon
  • Computer Science
    2017 ACM/IEEE 44th Annual International Symposium on Computer Architecture (ISCA)
  • 2017
This paper evaluates a custom ASIC, called a Tensor Processing Unit (TPU), deployed in datacenters since 2015 that accelerates the inference phase of neural networks (NN), and compares it to a server-class Intel Haswell CPU and an Nvidia K80 GPU, contemporaries deployed in the same datacenters.

Array programming with NumPy

This paper reviews how a few fundamental array concepts lead to a simple and powerful programming paradigm for organizing, exploring, and analysing scientific data.
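The "array programming" paradigm the abstract refers to amounts to expressing computations over whole arrays rather than element-by-element Python loops. A minimal illustration with made-up example data:

```python
import numpy as np

# Vectorized arithmetic: the conversion applies to every element at once,
# with no explicit Python loop.
temps_c = np.array([12.0, 18.5, 21.0, 9.5])
temps_f = temps_c * 9 / 5 + 32

# Boolean-mask selection: filter an array with an elementwise condition.
above_60f = temps_f[temps_f > 60.0]

print(temps_f)       # [53.6 65.3 69.8 49.1]
print(above_60f.size)  # 2
```

Pushing loops into compiled array operations like these is what makes the paradigm both concise and fast.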