The ITensor Software Library for Tensor Network Calculations

@article{Fishman2022TheIS,
  title={The ITensor Software Library for Tensor Network Calculations},
  author={Matthew T. Fishman and Steven R. White and Edwin Miles Stoudenmire},
  journal={ArXiv},
  year={2022},
  volume={abs/2007.14822}
}
ITensor is a system for programming tensor network calculations with an interface modeled on tensor diagrams, allowing users to focus on the connectivity of a tensor network without manually bookkeeping tensor indices. The ITensor interface rules out common programming errors and enables rapid prototyping of algorithms. After discussing the philosophy behind the ITensor approach, we show examples of each part of the interface including Index objects, the ITensor product operator, tensor… 
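
ITensor itself is a Julia and C++ library, but the core idea of its interface, inferring contractions from which named indices two tensors share rather than from axis positions, can be sketched in a few lines of Python. The `contract` helper below is a hypothetical illustration built on `np.einsum`, not ITensor's actual API:

```python
import numpy as np

def contract(a, a_inds, b, b_inds):
    # contract over every index name the two tensors share,
    # mimicking how ITensor infers contractions from Index identity
    names = list(dict.fromkeys(a_inds + b_inds))
    letters = {n: chr(ord('a') + i) for i, n in enumerate(names)}
    shared = set(a_inds) & set(b_inds)
    out = [n for n in a_inds + b_inds if n not in shared]
    spec = (''.join(letters[n] for n in a_inds) + ','
            + ''.join(letters[n] for n in b_inds) + '->'
            + ''.join(letters[n] for n in out))
    return np.einsum(spec, a, b), out

A = np.random.rand(2, 3)   # indices ("i", "j")
B = np.random.rand(3, 4)   # indices ("j", "k")
C, c_inds = contract(A, ["i", "j"], B, ["j", "k"])
print(c_inds)              # ['i', 'k']: the shared index "j" was summed over
```

In ITensor proper, the product operator `A * B` plays the role of this helper: it contracts all Index objects the two ITensors have in common.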

A multisite decomposition of the tensor network path integrals

TLDR
It is demonstrated that the presence of dissipative environments can often reduce the entanglement between sites, as measured by the bond dimension of the reduced density matrix product state.

A Practical Guide to the Numerical Implementation of Tensor Networks I: Contractions, Decompositions, and Gauge Freedom

  • G. Evenbly
  • Computer Science
  • Frontiers in Applied Mathematics and Statistics
  • 2022
TLDR
An introduction to the contraction of tensor networks, to optimal tensor decompositions, and to the manipulation of gauge degrees of freedom in tensor networks is presented.
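
The workhorse decomposition in such a guide is the truncated SVD used to split a tensor across a bond while controlling the bond dimension. A minimal numpy sketch, assuming a dense tensor and a simple bipartition of its indices:

```python
import numpy as np

def split_tensor(T, n_left, chi):
    # split T into two tensors joined by a bond of dimension <= chi,
    # using a truncated SVD across the chosen bipartition of indices
    left_shape, right_shape = T.shape[:n_left], T.shape[n_left:]
    M = T.reshape(int(np.prod(left_shape)), -1)
    U, S, Vh = np.linalg.svd(M, full_matrices=False)
    k = min(chi, len(S))
    A = U[:, :k].reshape(*left_shape, k)
    B = (S[:k, None] * Vh[:k]).reshape(k, *right_shape)
    return A, B   # T is approximately A contracted with B over the new bond

T = np.random.rand(2, 3, 4, 5)
A, B = split_tensor(T, 2, chi=6)
print(np.linalg.norm(T - np.tensordot(A, B, axes=1)))  # ~0 here (chi = full rank)
```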

Quantum-inspired event reconstruction with Tensor Networks: Matrix Product States

TLDR
This study presents the discrimination of a top quark signal from QCD background processes using a Matrix Product State classifier, and shows that entanglement entropy can be used to interpret what the network learns, which in turn can be used to reduce the complexity of the network and the feature space without loss of generality or performance.
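
As a rough illustration of the interpretability claim, bipartite entanglement entropy is computed from the squared singular values across a cut. A minimal numpy sketch for a generic normalized state vector (the study itself reads such entropies off the MPS bond spectrum):

```python
import numpy as np

def entanglement_entropy(psi, n_left):
    # von Neumann entropy of a bipartition of a normalized state
    # vector of qubits, cut after the first n_left sites
    M = psi.reshape(2**n_left, -1)
    s = np.linalg.svd(M, compute_uv=False)
    p = s**2
    p = p[p > 1e-12]          # drop numerical zeros before taking logs
    return float(-np.sum(p * np.log(p)))

psi = np.random.rand(2**6)
psi /= np.linalg.norm(psi)
print(entanglement_entropy(psi, 3))
```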

Locally orderless tensor networks for classifying two- and three-dimensional medical images

TLDR
This work extends matrix product state (MPS) tensor networks, which operate on one-dimensional vectors, to make them useful for working with 2D and 3D medical images.

ExaTN: Scalable GPU-Accelerated High-Performance Processing of General Tensor Networks at Exascale

We present ExaTN (Exascale Tensor Networks), a scalable GPU-accelerated C++ library which can express and process tensor networks on shared- as well as distributed-memory high-performance computing platforms.

Local tensor-network codes

TLDR
This work shows how to write some topological codes, including the surface code and colour code, as simple tensor-network codes and proves that this method is efficient in the case of stabilizer codes encoded via local log-depth circuits in one dimension and holographic codes.

A High-Performance Sparse Tensor Algebra Compiler in Multi-Level IR

TLDR
The results show that the automatically generated kernels outperform the state-of-the-art sparse tensor algebra compiler TACO, with up to 20.92x, 6.39x, and 13.9x performance improvements for parallel SpMV, SpMM, and TTM, respectively.
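
For reference, the SpMV kernel whose generated code is being benchmarked is, in plain unoptimized Python over the standard CSR format, just the loop below; this sketches the computation itself, not the compiler's output:

```python
import numpy as np

def spmv_csr(data, indices, indptr, x):
    # y = A @ x for a sparse matrix A stored in CSR form
    y = np.zeros(len(indptr) - 1)
    for row in range(len(y)):
        lo, hi = indptr[row], indptr[row + 1]
        y[row] = data[lo:hi] @ x[indices[lo:hi]]
    return y

# the 2x3 matrix [[1, 0, 2], [0, 3, 0]] in CSR form
data = np.array([1.0, 2.0, 3.0])
indices = np.array([0, 2, 1])
indptr = np.array([0, 2, 3])
print(spmv_csr(data, indices, indptr, np.ones(3)))  # [3. 3.]
```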

Patch-based Medical Image Segmentation using Matrix Product State Tensor Networks

TLDR
This work formulates image segmentation in a supervised setting with tensor networks, first lifting the pixels in image patches to exponentially high-dimensional feature spaces and then using a linear decision hyperplane to classify the input pixels into foreground and background classes.

Patch-based medical image segmentation using Quantum Tensor Networks

TLDR
This work formulates image segmentation in a supervised setting with tensor networks, first lifting the pixels in image patches to exponentially high-dimensional feature spaces and then using a linear decision hyperplane to classify the input pixels into foreground and background classes.
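
Both entries above describe the same pipeline. The "lifting" step is commonly implemented with the cos/sin local feature map of Stoudenmire and Schwab; whether these papers use exactly this map is an assumption here, but a minimal sketch of the idea is:

```python
import numpy as np

def local_feature_map(patch):
    # map each pixel x in [0, 1] to (cos(pi*x/2), sin(pi*x/2));
    # an N-pixel patch then lives in the 2^N-dimensional product
    # space on which the MPS classifier acts
    x = np.asarray(patch, dtype=float).reshape(-1)
    return np.stack([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)], axis=1)

patch = np.random.rand(4, 4)    # a 4x4 patch of normalized pixels
phi = local_feature_map(patch)  # shape (16, 2): one 2-vector per pixel
```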
...

References


Efficient numerical simulations with Tensor Networks: Tensor Network Python (TeNPy)

TLDR
This paper combines a compact review of basic tensor product state (TPS) concepts with the introduction of a versatile tensor library for Python (TeNPy), and provides a practical guide on how to implement abelian symmetries to accelerate tensor operations.
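
The idea behind abelian symmetries can be sketched without TeNPy's actual data structures: attach hypothetical U(1) charges to the index values and allow entries only where the charges balance. A symmetric tensor is then block sparse, and operations can skip the forbidden blocks, which is where the speedup comes from:

```python
import numpy as np

q_row = np.array([0, 1, 1, 2])     # hypothetical U(1) charges, row index
q_col = np.array([0, -1, -1, -2])  # charges on the column index
allowed = (q_row[:, None] + q_col[None, :]) == 0
A = np.where(allowed, np.random.rand(4, 4), 0.0)  # only allowed entries survive
print(allowed.astype(int))
```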

Improving the efficiency of variational tensor network algorithms

TLDR
The usefulness of several results relating to the contraction of generic tensor networks is demonstrated for the optimization of a multiscale entanglement renormalization Ansatz for the ground state of a one-dimensional quantum system, where they are shown to substantially reduce the computation time.

Faster identification of optimal contraction sequences for tensor networks.

TLDR
A modified search algorithm with enhanced pruning is presented which exhibits a performance increase of several orders of magnitude while still guaranteeing identification of an optimal operation-minimizing contraction sequence for a single tensor network.

NCON: A tensor network contractor for MATLAB

TLDR
This article presents a MATLAB function ncon(), or "Network CONtractor", which accepts as its input a tensor network and a contraction sequence describing how this network may be reduced to a single tensor or number.
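
A minimal Python analogue of the ncon() calling convention, delegating the actual work to np.einsum rather than performing pairwise contractions in a user-specified order as the real MATLAB function does: positive labels are contracted, and negative labels -1, -2, ... order the open indices of the result:

```python
import numpy as np

def ncon(tensors, labels):
    # works for networks with fewer than 27 distinct labels
    letters = {}
    def sub(lbl):
        return letters.setdefault(lbl, chr(ord('a') + len(letters)))
    inputs = [''.join(sub(l) for l in lab) for lab in labels]
    open_labels = sorted({l for lab in labels for l in lab if l < 0},
                         reverse=True)
    out = ''.join(letters[l] for l in open_labels)
    return np.einsum(','.join(inputs) + '->' + out, *tensors)

A, B = np.random.rand(2, 3), np.random.rand(3, 4)
C = ncon([A, B], [[-1, 1], [1, -2]])  # label 1 is contracted
print(np.allclose(C, A @ B))          # True
```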

AutoHOOT: Automatic High-Order Optimization for Tensors

TLDR
This work introduces AutoHOOT, the first automatic differentiation framework targeting high-order optimization for tensor computations, which contains a new explicit Jacobian/Hessian expression generation kernel whose outputs maintain the input tensors' granularity and are easy to optimize.

Tensor network decompositions in the presence of a global symmetry

TLDR
This work discusses how to incorporate a global symmetry, given by a compact, completely reducible group G, in tensor network decompositions and algorithms, by considering tensors that are invariant under the action of the group G.

Differentiable programming of isometric tensor networks

TLDR
By introducing several gradient-based optimization methods for isometric tensor networks and comparing with the Evenbly–Vidal method, it is shown that automatic differentiation has better performance in both stability and accuracy.
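
The basic move in such gradient-based schemes is mapping an updated tensor back to the nearest isometry, for which the SVD-based polar projection is standard; a minimal numpy sketch (the paper's own update rules may differ in detail):

```python
import numpy as np

def nearest_isometry(M):
    # closest W to M with W.T @ W = I: keep the SVD factors
    # and set every singular value to one
    U, _, Vh = np.linalg.svd(M, full_matrices=False)
    return U @ Vh

W = nearest_isometry(np.random.rand(8, 4))
print(np.allclose(W.T @ W, np.eye(4)))  # True
```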

Hyper-optimized tensor network contraction

TLDR
This work implements new randomized protocols that find very high-quality contraction paths for arbitrary and large tensor networks, and introduces a hyper-optimization approach in which both the method applied and its algorithmic parameters are tuned during the path finding.
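
What a "contraction path" is can be seen with numpy's built-in greedy path finder; the hyper-optimized randomized protocols of the paper (implemented in packages such as opt_einsum and cotengra) search this space far more aggressively than this sketch suggests:

```python
import numpy as np

a = np.random.rand(10, 30)
b = np.random.rand(30, 5)
c = np.random.rand(5, 60)
# choose a pairwise contraction order for the chain a-b-c
path, info = np.einsum_path('ij,jk,kl->il', a, b, c, optimize='greedy')
print(path)  # e.g. ['einsum_path', (0, 1), (0, 1)]
print(info)  # FLOP count and the chosen pairwise order
```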

Faster methods for contracting infinite two-dimensional tensor networks

We revisit the corner transfer matrix renormalization group (CTMRG) method of Nishino and Okunishi for contracting two-dimensional (2D) tensor networks and demonstrate that its performance can be…

Differentiable Programming Tensor Networks

TLDR
This work presents essential techniques to differentiate through tensor network contractions, including stable AD for tensor decompositions and efficient backpropagation through fixed-point iterations, and removes laborious human effort in deriving and implementing analytical gradients for tensor network programs.
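
The "backpropagation through fixed-point iterations" mentioned above rests on the implicit function theorem: if x* = f(x*, θ), then dx*/dθ = (1 − ∂f/∂x)⁻¹ ∂f/∂θ at x*, so the gradient never has to unroll the iteration. A toy scalar sketch of this identity (the paper applies it to tensor network contractions such as CTMRG):

```python
import numpy as np

theta = 0.8
f = lambda x, t: np.cos(t * x)   # toy contraction map with |df/dx| < 1

x = 0.5
for _ in range(200):             # iterate to the fixed point x* = f(x*, theta)
    x = f(x, theta)

dfdx = -theta * np.sin(theta * x)
dfdt = -x * np.sin(theta * x)
dxdtheta = dfdt / (1.0 - dfdx)   # implicit gradient; unrolled AD approximates this
print(x, dxdtheta)
```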
...