Corpus ID: 11215251

Typesafe Abstractions for Tensor Operations

Tongfei Chen
We propose a typesafe abstraction of tensors (i.e., multidimensional arrays) that exploits the type-level programming capabilities of Scala through heterogeneous lists (HList), and we showcase typesafe abstractions of common tensor operations and of various neural layers such as convolutional or recurrent neural networks. This abstraction could lay the foundation for future typesafe deep learning frameworks that run on Scala/JVM. CCS Concepts: • Software and its engineering → Software libraries and…
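The core idea can be sketched in plain Scala with no external libraries (the paper itself builds on HList-style type-level programming; the axis names and the `Tensor` API below are hypothetical illustrations, not the paper's actual interface): axis labels live in a type-level heterogeneous list, so an operation over mismatched axes fails to compile instead of failing at run time.

```scala
// Minimal hand-rolled HList, so the sketch needs no external dependency
sealed trait HList
sealed trait HNil extends HList
case object HNil extends HNil
final case class ::[H, T <: HList](head: H, tail: T) extends HList

// Hypothetical axis labels (phantom types, never instantiated)
trait Batch
trait Feature

// A tensor whose axis list A is tracked purely at the type level
final case class Tensor[A <: HList](data: Vector[Float], shape: List[Int])

// Elementwise addition is only defined for tensors with identical axes
def add[A <: HList](x: Tensor[A], y: Tensor[A]): Tensor[A] =
  Tensor(x.data.zip(y.data).map { case (a, b) => a + b }, x.shape)

val v = Tensor[Batch :: Feature :: HNil](Vector(1f, 2f), List(1, 2))
val w = Tensor[Batch :: Feature :: HNil](Vector(3f, 4f), List(1, 2))
val s = add(v, w)  // axes agree, so this compiles
// add(v, Tensor[Feature :: HNil](Vector(0f), List(1)))  // would not compile
```

A practical framework would additionally encode shape information and axis-respecting contraction, but the same mechanism (unifying the two type-level axis lists) does the checking.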


Static Analysis of Python Programs using Abstract Interpretation: An Application to Tensor Shape Analysis
A subset of Python, with emphasis on tensor operations, is specified based on The Python Language Reference; an abstract interpreter is then defined and its implementation presented, showing how each part of the abstract interpreter was built: the abstract domains and the abstract semantics.
ShapeFlow: Dynamic Shape Interpreter for TensorFlow
ShapeFlow detects shape-incompatibility errors highly accurately (no false positives and a single false negative) and highly efficiently, with average speed-ups of 499X and 24X over the first and second baseline, respectively.
Named Tensor Notation
We propose a notation for tensors with named axes, which relieves the author, reader, and future implementers of the burden of keeping track of the order of axes and the purpose of each.
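The named-axes idea can be illustrated with a small Scala sketch (a hypothetical two-axis container, not the notation's actual formalism): reductions are requested by axis name, so callers never depend on positional axis order.

```scala
// A toy 2-D tensor whose two axes carry names; a reduction is requested
// by name, which is the point of named-axis notation.
final case class Named2D(rowAxis: String, colAxis: String,
                         rows: Vector[Vector[Float]]) {
  // Sum out the axis with the given name
  def sumOut(axis: String): Vector[Float] =
    if (axis == rowAxis) rows.transpose.map(_.sum)  // collapse the row axis
    else if (axis == colAxis) rows.map(_.sum)       // collapse the column axis
    else sys.error(s"no axis named $axis")
}

val t = Named2D("batch", "feature", Vector(Vector(1f, 2f), Vector(3f, 4f)))
val perFeature = t.sumOut("batch")
val perBatch   = t.sumOut("feature")
```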
Empowering big data analytics with polystore and strongly typed functional queries
An implementation in Scala using Spark is developed, providing users with a type-safe, schema-inferring mechanism that guarantees the technical and functional correctness of composed expressions on tensors at compile time, and that outperforms the Spark query optimizer by using bind joins.
Recognizing heterogeneous sequences by rational type expression
Jim E. Newton, D. Verna · Computer Science · Proceedings of the 3rd ACM SIGPLAN International Workshop on Meta-Programming Techniques and Reflection · 2018
The technique employs sequence-recognition functions, generated at compile time and evaluated at run time; it extends the Common Lisp type system by exploiting the theory of rational languages, Binary Decision Diagrams, and the Turing-complete macro facility of Common Lisp.


TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems
The TensorFlow interface, and an implementation of that interface built at Google, are described; the system has been used for conducting research and for deploying machine learning systems into production across more than a dozen areas of computer science and other fields.
DyNet: The Dynamic Neural Network Toolkit
DyNet is a toolkit for implementing neural network models based on dynamic declaration of network structure. It has an optimized C++ backend and a lightweight graph representation, and it is designed to let users implement their models in a way that is idiomatic in their preferred programming language.
Theano: A Python framework for fast computation of mathematical expressions
The performance of Theano is compared against Torch7 and TensorFlow on several machine learning models and recently-introduced functionalities and improvements are discussed.
Type inference for array programming with dimensioned vector spaces
Experiments show that the explicit support for linear algebra increases type safety, and that it leads to a more functional and index-free style of programming.
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
The Tree-LSTM is introduced, a generalization of LSTMs to tree-structured network topologies that outperforms all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences and sentiment classification.
Statically typed linear algebra in Haskell
The idea of exposing dimensions to the type system is called "strongly typed linear algebra", and a prototype implementation is written in Haskell, based on Alberto Ruiz's GSLHaskell and using techniques from Kiselyov and Shan's "Implicit Configurations" paper.
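The same "dimensions in the type system" idea carries over to Scala, sketched here with literal integer types (a hypothetical `Vec`, not the Haskell prototype's API): the static length is part of a vector's type, so adding vectors of different lengths is rejected at compile time.

```scala
// A vector whose length N is a type-level integer literal
final case class Vec[N <: Int](data: Vector[Double])

// Addition only unifies when both arguments share the same static length
def addVec[N <: Int](a: Vec[N], b: Vec[N]): Vec[N] =
  Vec(a.data.zip(b.data).map { case (x, y) => x + y })

val a = Vec[3](Vector(1.0, 2.0, 3.0))
val b = Vec[3](Vector(4.0, 5.0, 6.0))
val c = addVec(a, b)  // ok: both are Vec[3]
// addVec(a, Vec[2](Vector(1.0, 2.0)))  // does not compile: 3 != 2
```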
Experience report: type-checking polymorphic units for astrophysics research in Haskell
This work demonstrates the utility of units by writing an astrophysics research paper, free of unit concerns because every quantity expression in the paper is rigorously type-checked.
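The unit-checking discipline reduces to the same phantom-type trick, sketched here in Scala (hypothetical names, not the Haskell library's API): a quantity carries its unit as a type parameter, so mixing units is a compile-time error.

```scala
// Phantom unit labels, never instantiated
trait Meters
trait Seconds

// A quantity tagged with its unit U; addition requires matching units
final case class Qty[U](value: Double) {
  def +(that: Qty[U]): Qty[U] = Qty(value + that.value)
}

val distance = Qty[Meters](3.0) + Qty[Meters](4.0)  // ok: same unit
// Qty[Meters](1.0) + Qty[Seconds](1.0)             // does not compile
```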
A fold for all seasons
A normalization algorithm that automatically calculates improvements to programs, based on a generic promotion theorem rather than on an analysis phase that searches for implicit structure, has important applications in program transformation, optimization, and theorem proving.
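The fold that the promotion theorem works over is the familiar list catamorphism; a minimal Scala sketch (standard `foldRight`, with illustrative uses chosen here) shows the uniform shape that makes such program-calculation laws applicable:

```scala
// A list catamorphism: replace Nil with `nil` and each cons cell with `cons`.
// Any structurally recursive list traversal can be phrased this way, which is
// the form promotion-style laws rewrite and improve.
def cata[A, B](nil: B)(cons: (A, B) => B)(xs: List[A]): B =
  xs.foldRight(nil)(cons)

val total  = cata(0)((x: Int, acc: Int) => x + acc)(List(1, 2, 3))
val concat = cata("")((c: Char, acc: String) => c.toString + acc)(List('a', 'b'))
```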
The NumPy Array: A Structure for Efficient Numerical Computation
This effort shows that NumPy performance can be improved through three techniques: vectorizing calculations, avoiding copying data in memory, and minimizing operation counts.
Long Short-Term Memory
A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete time steps by enforcing constant error flow through constant error carousels within special units.