• Corpus ID: 235624270

Tensor networks for unsupervised machine learning

@article{Liu2021TensorNF,
  title={Tensor networks for unsupervised machine learning},
  author={Jing Liu and Sujie Li and Jiang Zhang and Pan Zhang},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.12974}
}
Jing Liu,1 Sujie Li,2,3 Jiang Zhang,1 and Pan Zhang2,4,5,∗
1 School of Systems Science, Beijing Normal University
2 CAS Key Laboratory for Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190, China
3 School of Physical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
4 School of Fundamental Physics and Mathematical Sciences, Hangzhou Institute for Advanced Study, UCAS, Hangzhou 310024, China
5 International Centre for…

Permutation Search of Tensor Network Structures via Local Sampling

Counting and metric properties of the TN-PS search space are established theoretically, and a novel meta-heuristic algorithm is proposed in which the search proceeds by randomly sampling within a neighborhood defined by this theory and then recurrently updating the neighborhood until convergence.

Generalization and Overfitting in Matrix Product State Machine Learning Architectures

It is speculated that the generalization properties of MPS depend on the properties of the data: with one-dimensional data (for which the MPS ansatz is best suited), MPS is prone to overfitting, while with more complex data that cannot be parameterized by MPS exactly, overfitting may be much less significant.

A Practical Guide to the Numerical Implementation of Tensor Networks I: Contractions, Decompositions, and Gauge Freedom

  • G. Evenbly
  • Computer Science
    Frontiers in Applied Mathematics and Statistics
  • 2022
An introduction to the contraction of tensor networks, to optimal tensor decompositions, and to the manipulation of gauge degrees of freedom in tensor networks is presented.
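To make the two basic operations concrete, here is a minimal numpy sketch (not from the guide itself; tensor shapes, names, and the truncation level are illustrative): a small three-tensor chain is contracted with einsum, and the result is split back into two pieces with a truncated SVD.

```python
import numpy as np

# Contract a toy three-tensor chain A[i,a] B[a,j,b] C[b,k] into T[i,j,k];
# a and b are the internal bond indices.
rng = np.random.default_rng(0)
A = rng.random((2, 4))
B = rng.random((4, 3, 5))
C = rng.random((5, 2))
T = np.einsum('ia,ajb,bk->ijk', A, B, C)

# Split T back into two pieces across the (i) | (j,k) cut with a truncated SVD.
chi = 1                                            # bond dimension kept after truncation
mat = T.reshape(T.shape[0], -1)                    # group (j,k) into one leg
U, S, Vh = np.linalg.svd(mat, full_matrices=False)
left = U[:, :chi]                                  # shape (i, chi)
right = (S[:chi, None] * Vh[:chi]).reshape(chi, *T.shape[1:])   # shape (chi, j, k)

# Error introduced by discarding the smaller singular values.
err = np.linalg.norm(T - np.einsum('ia,ajk->ijk', left, right))
print(err, S)
```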

Graphical calculus for Tensor Network Contractions

  • S. Raj
  • Computer Science, Physics
  • 2022
This dissertation investigates how effective the existing procedures are at enhancing tensor network contractions and proposes new strategies based on these observations; the strategies are evaluated on a variety of circuits, including the Sycamore circuits used by Google to demonstrate quantum supremacy in 2019.

Generative modeling with projected entangled-pair states

Techniques from many-body physics have always played a major role in the development of generative machine learning, and can be traced back to the parallels between the respective problems one has to deal with in both fields.

Grokking phase transitions in learning local rules with gradient descent

A tensor-network map is introduced that connects the proposed grokking setup with standard (perceptron) statistical learning theory; it is shown that grokking is a consequence of the locality of the teacher model, and the critical exponent and the grokking-time distributions are determined numerically.

Deep tensor networks with matrix product operators

Deep tensor networks are introduced: exponentially wide neural networks based on a tensor-network representation of the weight matrices. Random-crop training is shown to improve the robustness of uniform tensor-network models to changes in image size and aspect ratio.
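As a rough illustration of this architecture (not the authors' code; the core shapes, bond dimension, and the helper name mpo_apply are assumptions for the sketch), the snippet below stores a weight matrix acting on d**n inputs as a matrix product operator and applies it by contracting small cores, then checks the result against the explicitly built dense matrix.

```python
import numpy as np

n, d, D = 6, 2, 4   # number of sites, local dimension, MPO bond dimension
rng = np.random.default_rng(0)

# MPO cores with legs (left bond, output index, input index, right bond);
# boundary bonds have dimension 1.  Together they encode a (d**n, d**n) weight matrix.
cores = [0.3 * rng.normal(size=(1 if k == 0 else D, d, d, 1 if k == n - 1 else D))
         for k in range(n)]

def mpo_apply(cores, x):
    """Apply the MPO to a vector x of length d**n without building the full matrix."""
    acc = x.reshape(1, 1, -1)                # legs: (left bond, outputs so far, remaining inputs)
    for W in cores:
        Dl, do, di, Dr = W.shape
        _, O, R = acc.shape
        a = acc.reshape(Dl, O, di, R // di)  # peel off this site's input index
        acc = np.einsum('loir,lpiq->qopr', a, W).reshape(Dr, O * do, R // di)
    return acc.reshape(-1)                   # boundary bonds are trivial

x = rng.normal(size=d ** n)
y = mpo_apply(cores, x)

# Sanity check: build the dense matrix (only feasible for small n) and compare.
M = np.ones((1, 1, 1))                       # legs: (bond, outputs so far, inputs so far)
for W in cores:
    Dl, do, di, Dr = W.shape
    _, O, I = M.shape
    M = np.einsum('loi,lpjr->ropij', M, W).reshape(Dr, O * do, I * di)
print(np.allclose(y, M.reshape(d ** n, d ** n) @ x))   # True
```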

References

Showing 1–10 of 42 references

Unsupervised Generative Modeling Using Matrix Product States

This work proposes a generative model using matrix product states, which is a tensor network originally proposed for describing (particularly one-dimensional) entangled quantum states, and enjoys efficient learning analogous to the density matrix renormalization group method.
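For orientation, here is a minimal numpy sketch of this model class (a toy Born machine; the bond dimension, helper names, and the brute-force normalization are illustrative, not the paper's implementation): an MPS assigns each configuration an amplitude by multiplying the matrices selected by its entries, and the Born rule squares and normalizes these amplitudes.

```python
import numpy as np
import itertools

n, d, D = 5, 2, 3   # sites, local (pixel/spin) dimension, bond dimension
rng = np.random.default_rng(0)

# MPS tensors with legs (left bond, physical index, right bond); boundary bonds are size 1.
tensors = [rng.normal(size=(1 if k == 0 else D, d, 1 if k == n - 1 else D))
           for k in range(n)]

def amplitude(x):
    """Contract the MPS for one configuration x (a length-n tuple of values in {0,...,d-1})."""
    v = tensors[0][:, x[0], :]              # shape (1, D)
    for k in range(1, n):
        v = v @ tensors[k][:, x[k], :]      # multiply the matrix selected by x[k]
    return v[0, 0]

# Born rule: p(x) = |psi(x)|^2 / Z, with Z summed here by brute force (small n only).
configs = list(itertools.product(range(d), repeat=n))
Z = sum(amplitude(x) ** 2 for x in configs)
p = {x: amplitude(x) ** 2 / Z for x in configs}
print(sum(p.values()))   # ~1.0
```

In the paper itself, the normalization and gradients come from efficient tensor contractions, and training proceeds with DMRG-like sweeps rather than the enumeration used in this toy.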

Expressive power of tensor-network factorizations for probabilistic modeling, with applications from hidden Markov models to quantum machine learning

This work provides a rigorous analysis of the expressive power of various tensor-network factorizations of discrete multivariate probability distributions, and introduces locally purified states (LPS), a new factorization inspired by techniques for the simulation of quantum systems with provably better expressive power than all other representations considered.

Tree Tensor Networks for Generative Modeling

It is shown that the TTN is superior to MPSs for generative modeling in keeping the correlation of pixels in natural images, as well as giving better log-likelihood scores in standard data sets of handwritten digits.

Information Perspective to Probabilistic Modeling: Boltzmann Machines versus Born Machines

The classical mutual information of the standard MNIST dataset and the quantum Rényi entropy of the corresponding matrix product state (MPS) representations are estimated, and it is found that RBMs with locally sparse connections exhibit high learning efficiency, which supports the application of tensor-network states to machine learning problems.
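As a stand-in for the entanglement quantity mentioned above (a toy calculation on a dense random state, not the paper's MPS-based estimate), the Rényi-2 entropy across a bipartition can be read directly off the Schmidt spectrum obtained from an SVD:

```python
import numpy as np

# Toy normalized wavefunction over 8 binary sites.
rng = np.random.default_rng(1)
psi = rng.normal(size=2 ** 8)
psi /= np.linalg.norm(psi)

# Bipartition into the first 4 sites (A) and the last 4 sites (B).
M = psi.reshape(2 ** 4, 2 ** 4)
s = np.linalg.svd(M, compute_uv=False)
p = s ** 2                                   # Schmidt spectrum = eigenvalues of rho_A

renyi2 = -np.log(np.sum(p ** 2))             # S_2 = -log Tr(rho_A^2)
von_neumann = -np.sum(p * np.log(p + 1e-300))
print(renyi2, von_neumann)
```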

Adam: A Method for Stochastic Optimization

This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
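The update itself is compact; the sketch below is a bare-bones numpy rendering of the moment estimates and bias corrections (default hyperparameters follow the paper, while the toy objective and function name are illustrative):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update using adaptive estimates of the gradient's first and second moments."""
    m = beta1 * m + (1 - beta1) * grad            # biased first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2       # biased second-moment estimate
    m_hat = m / (1 - beta1 ** t)                  # bias corrections
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage: minimise f(theta) = ||theta||^2 / 2, whose gradient is theta itself.
theta = np.array([1.0, -2.0, 3.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 2001):
    theta, m, v = adam_step(theta, theta.copy(), m, v, t, lr=0.05)
print(theta)   # ≈ 0
```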

Solving Statistical Mechanics using Variational Autoregressive Networks

This work proposes a general framework for solving statistical mechanics of systems with finite size using autoregressive neural networks, which computes variational free energy, estimates physical quantities such as entropy, magnetizations and correlations, and generates uncorrelated samples all at once.
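To illustrate the objective being minimized (a sketch under simplifying assumptions: a factorized distribution stands in for the autoregressive network, and the Ising-chain energy is just an example), the variational free energy is estimated from model samples as the mean of E(x) + log q(x)/beta:

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 6, 1.0

# A deliberately trivial product distribution stands in for the autoregressive network q_theta;
# each spin is +1 with probability sigmoid(theta_i).
theta = rng.normal(size=n)
p_up = 1.0 / (1.0 + np.exp(-theta))

def sample(batch):
    s = np.where(rng.random((batch, n)) < p_up, 1, -1)
    logq = np.where(s == 1, np.log(p_up), np.log(1 - p_up)).sum(axis=1)
    return s, logq

def energy(s):
    # Ferromagnetic Ising chain: E(s) = -sum_i s_i s_{i+1}
    return -(s[:, :-1] * s[:, 1:]).sum(axis=1)

s, logq = sample(10000)
F_var = np.mean(energy(s) + logq / beta)   # variational free energy, an upper bound on F
print(F_var)
```

In the paper, q is a trained autoregressive network and this estimator is driven down by gradient descent; the upper-bound property holds for any normalized q.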

MADE: Masked Autoencoder for Distribution Estimation

This work introduces a simple modification for autoencoder neural networks that yields powerful generative models and proves that this approach is competitive with state-of-the-art tractable distribution estimators.
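A small numpy sketch of the masking trick (layer sizes and the label-sampling scheme here are illustrative, not the paper's exact configuration): each hidden unit gets a connectivity label, and the two masks are built so that output d can only ever depend on inputs with index smaller than d.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 4, 8

# Connectivity labels: inputs are labeled 1..n_in, hidden units get labels in 1..n_in-1.
m_in = np.arange(1, n_in + 1)
m_hid = rng.integers(1, n_in, size=n_hidden)

# Input->hidden mask: hidden unit k may see input d iff m_hid[k] >= m_in[d].
M1 = (m_hid[:, None] >= m_in[None, :]).astype(float)        # shape (n_hidden, n_in)
# Hidden->output mask: output d may see hidden unit k iff m_in[d] > m_hid[k].
M2 = (m_in[:, None] > m_hid[None, :]).astype(float)         # shape (n_in, n_hidden)

# Composed connectivity: entry (d, d') is zero whenever d' >= d, so output d
# never sees the current or later inputs and the outputs form valid conditionals.
print(M2 @ M1)
```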

Solving statistical mechanics on sparse graphs with feedback-set variational autoregressive networks.

The method extracts a small feedback vertex set (FVS) from the sparse graph, converts the sparse system into a much smaller system of many-body, dense interactions with an effective energy on every FVS configuration, and then learns a variational distribution parameterized by neural networks to approximate the original Boltzmann distribution.

Reducing the Dimensionality of Data with Neural Networks

This work describes an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.

Tensor Networks for Dimensionality Reduction and Large-scale Optimization: Part 1 Low-Rank Tensor Decompositions

A focus is on the Tucker and tensor train (TT) decompositions and their extensions, and on demonstrating the ability of tensor networks to provide linearly or even super-linearly (e.g., logarithmically) scalable solutions, as illustrated in detail in Part 2 of this monograph.
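A compact numpy sketch of the TT decomposition mentioned here, computed by sweeping an SVD over the modes (the standard TT-SVD scheme; the fixed max_rank truncation and function name are illustrative):

```python
import numpy as np

def tt_svd(T, max_rank=8):
    """Decompose a dense tensor T into TT cores of shape (r_{k-1}, n_k, r_k)."""
    dims = T.shape
    cores, r = [], 1
    mat = T.reshape(r * dims[0], -1)
    for k in range(len(dims) - 1):
        U, S, Vh = np.linalg.svd(mat, full_matrices=False)
        rank = min(max_rank, len(S))
        cores.append(U[:, :rank].reshape(r, dims[k], rank))
        mat = (S[:rank, None] * Vh[:rank]).reshape(rank * dims[k + 1], -1)
        r = rank
    cores.append(mat.reshape(r, dims[-1], 1))
    return cores

# Reconstruct and check on a small random tensor.
T = np.random.rand(3, 4, 5, 6)
cores = tt_svd(T, max_rank=20)        # rank large enough that nothing is truncated
full = cores[0]
for G in cores[1:]:
    full = np.tensordot(full, G, axes=([-1], [0]))
print(np.linalg.norm(full.reshape(T.shape) - T))   # ~0 when no truncation occurs
```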