Tensor networks for unsupervised machine learning

@article{Liu2021TensorNF,
  title={Tensor networks for unsupervised machine learning},
  author={Jing Liu and Sujie Li and Jiang Zhang and Pan Zhang},
  journal={Physical Review E},
  year={2021},
  volume={107},
  number={1},
  pages={L012103}
}
Modeling the joint distribution of high-dimensional data is a central task in unsupervised machine learning. In recent years, much interest has been attracted to developing learning models based on tensor networks, which offer a principled understanding of expressive power through entanglement properties and serve as a bridge connecting classical and quantum computation. Despite this great potential, however, existing tensor network models for unsupervised machine…

Generative modeling with projected entangled-pair states

Techniques from many-body physics have always played a major role in the development of generative machine learning, and can be traced back to the parallels between the respective problems one has to deal with in both fields.

Generalization and Overfitting in Matrix Product State Machine Learning Architectures

It is speculated that the generalization properties of MPS depend on the properties of the data: with one-dimensional data (for which the MPS ansatz is the most suitable), MPS is prone to overfitting, while with more complex data that cannot be parameterized by MPS exactly, overfitting may be much less significant.

Deep tensor networks with matrix product operators

Deep tensor networks are introduced: exponentially wide neural networks based on the tensor-network representation of the weight matrices. Random-crop training improves the robustness of uniform tensor-network models to changes in image size and aspect ratio.

Graphical calculus for Tensor Network Contractions

  • S. Raj
  • Computer Science, Physics
  • 2022
This dissertation investigates how effective the existing procedures are at enhancing tensor network contractions and proposes new strategies based on their observations, which are evaluated using a variety of circuits, including the Sycamore circuits used by Google to demonstrate quantum supremacy in 2019.

Permutation Search of Tensor Network Structures via Local Sampling

Theoretically, the counting and metric properties of the search spaces of TN-PS are proved, and a novel meta-heuristic algorithm is proposed in which the search proceeds by random sampling in a neighborhood established by the authors' theory, recurrently updating the neighborhood until convergence.

Grokking phase transitions in learning local rules with gradient descent

A tensor-network map is introduced that connects the proposed grokking setup with standard (perceptron) statistical learning theory; it is shown that grokking is a consequence of the locality of the teacher model, and the critical exponent and the grokking-time distributions are determined numerically.

A Practical Guide to the Numerical Implementation of Tensor Networks I: Contractions, Decompositions, and Gauge Freedom

  • G. Evenbly
  • Computer Science
    Frontiers in Applied Mathematics and Statistics
  • 2022
An introduction to the contraction of tensor networks, to optimal tensor decompositions, and to the manipulation of gauge degrees of freedom in tensor networks is presented.
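As a minimal illustration of the two core operations the guide covers, the sketch below contracts two tensors over a shared bond and then splits the result back apart with a truncated SVD. The shapes, the bond dimension `chi`, and NumPy itself (standing in for a dedicated tensor-network library) are illustrative assumptions, not details from the paper:

```python
import numpy as np

# Contract two 3-index tensors A[i,j,k] and B[k,l,m] over the shared bond k.
A = np.random.rand(2, 3, 4)
B = np.random.rand(4, 5, 6)
C = np.einsum('ijk,klm->ijlm', A, B)  # shape (2, 3, 5, 6)

# Decompose C across the (ij)|(lm) bipartition with a truncated SVD,
# keeping chi singular values; the truncation error is bounded by the
# discarded singular values.
chi = 4
mat = C.reshape(2 * 3, 5 * 6)
U, S, Vh = np.linalg.svd(mat, full_matrices=False)
left = (U[:, :chi] * S[:chi]).reshape(2, 3, chi)
right = Vh[:chi, :].reshape(chi, 5, 6)

approx = np.einsum('ijk,klm->ijlm', left, right)
err = np.linalg.norm(C - approx)
```

Here `chi` equals the original bond dimension, so the split is exact up to floating-point error; choosing a smaller `chi` trades accuracy for a cheaper network.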

Control flow in active inference systems

It is shown here that when systems are described as executing active inference driven by the free-energy principle (and hence can be considered Bayesian prediction-error minimizers), their control flow systems can always be represented as tensor networks (TNs).

References

Showing 1–10 of 48 references

Unsupervised Generative Modeling Using Matrix Product States

This work proposes a generative model using matrix product states, which is a tensor network originally proposed for describing (particularly one-dimensional) entangled quantum states, and enjoys efficient learning analogous to the density matrix renormalization group method.
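To make the Born-machine idea concrete, here is a minimal sketch that evaluates the Born-rule probability p(x) ∝ |ψ(x)|² of a bit string under a random open-boundary MPS. The function and variable names are illustrative, and the brute-force normalization over all configurations is for demonstration only (the paper's point is that sampling and learning avoid this):

```python
import itertools
import numpy as np

def mps_prob(mps, bits):
    """Unnormalized Born-machine probability |psi(x)|^2 of bit string x.
    Each site tensor has shape (left_bond, 2, right_bond)."""
    # Fix the physical index at each site, then multiply the resulting
    # bond matrices from left to right.
    v = mps[0][:, bits[0], :]           # (1, D) boundary row
    for tensor, b in zip(mps[1:], bits[1:]):
        v = v @ tensor[:, b, :]         # contract the shared bond
    return v[0, 0] ** 2                 # (1, 1) boundary scalar, squared

# Random 4-site MPS with bond dimension 3 (boundary bonds of size 1).
rng = np.random.default_rng(0)
D = 3
mps = [rng.normal(size=(1, 2, D)),
       rng.normal(size=(D, 2, D)),
       rng.normal(size=(D, 2, D)),
       rng.normal(size=(D, 2, 1))]

# Probabilities over all 2^4 configurations; normalize explicitly here.
probs = np.array([mps_prob(mps, b)
                  for b in itertools.product([0, 1], repeat=4)])
probs /= probs.sum()
```

In practice the normalization (and exact sampling) is obtained by contracting the MPS with itself rather than by enumerating configurations, which is what makes the model tractable for long chains.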

Tree Tensor Networks for Generative Modeling

It is shown that the TTN is superior to MPSs for generative modeling in keeping the correlation of pixels in natural images, as well as giving better log-likelihood scores in standard data sets of handwritten digits.

Machine learning by unitary tensor network of hierarchical tree structure

This work trains two-dimensional hierarchical TNs to solve image recognition problems, using a training algorithm derived from the multi-scale entanglement renormalization ansatz, and introduces mathematical connections among quantum many-body physics, quantum information theory, and machine learning.

Information Perspective to Probabilistic Modeling: Boltzmann Machines versus Born Machines

It is found that RBMs with local sparse connections exhibit high learning efficiency, which supports the application of tensor-network states to machine learning problems; the classical mutual information of the standard MNIST dataset and the quantum Rényi entropy of the corresponding matrix product state (MPS) representations are estimated.

Solving Statistical Mechanics using Variational Autoregressive Networks

This work proposes a general framework for solving statistical mechanics of systems with finite size using autoregressive neural networks, which computes variational free energy, estimates physical quantities such as entropy, magnetizations and correlations, and generates uncorrelated samples all at once.

Deep autoregressive models for the efficient variational simulation of many-body quantum systems

This work proposes a specialized neural-network architecture that supports efficient and exact sampling, completely circumventing the need for Markov-chain sampling, and demonstrates the ability to obtain accurate results on larger system sizes than those currently accessible to neural-network quantum states.

The Neural Autoregressive Distribution Estimator

A new approach for modeling the distribution of high-dimensional vectors of discrete variables inspired by the restricted Boltzmann machine, which outperforms other multivariate binary distribution estimators on several datasets and performs similarly to a large (but intractable) RBM.
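The autoregressive factorization behind this estimator, p(x) = Π_i p(x_i | x_<i), can be sketched in a few lines. This toy version keeps NADE's key trait of an incrementally updated shared hidden layer, but the parameter names are hypothetical and the sizes are tiny; it shows why the model is tractable: the 2^D configuration probabilities sum to one by construction, with no partition function to estimate:

```python
import itertools
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nade_log_prob(x, W, V, b, c):
    """Log-probability of a binary vector x under a NADE-style model:
    each conditional p(x_i | x_<i) is a logistic unit reading a shared
    hidden layer that is updated incrementally as bits are folded in."""
    D, H = W.shape
    a = c.copy()                            # hidden pre-activation
    logp = 0.0
    for i in range(D):
        h = sigmoid(a)                      # hidden state given x_<i
        p_i = sigmoid(V[i] @ h + b[i])      # P(x_i = 1 | x_<i)
        logp += x[i] * np.log(p_i) + (1 - x[i]) * np.log(1 - p_i)
        a += W[i] * x[i]                    # fold x_i into the hidden state
    return logp

rng = np.random.default_rng(1)
D, H = 5, 8
W = rng.normal(scale=0.1, size=(D, H))
V = rng.normal(scale=0.1, size=(D, H))
b = rng.normal(scale=0.1, size=D)
c = rng.normal(scale=0.1, size=H)

# The probabilities of all 2^5 binary vectors sum to exactly one.
total = sum(np.exp(nade_log_prob(np.array(x), W, V, b, c))
            for x in itertools.product([0, 1], repeat=5))
```

The incremental update of `a` is what keeps evaluation at O(D·H) rather than recomputing the hidden layer from scratch for every conditional.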

On the quantitative analysis of deep belief networks

It is shown that Annealed Importance Sampling (AIS) can be used to efficiently estimate the partition function of an RBM, and a novel AIS scheme for comparing RBM's with different architectures is presented.
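As a toy illustration of the AIS idea, the sketch below anneals from the uniform distribution (β = 0) to a target Boltzmann distribution (β = 1) and averages the importance weights to estimate log Z. The target here is a small open Ising chain with a known partition function rather than an RBM, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Target: open Ising chain, E(x) = -J * sum_i x_i x_{i+1}, x_i in {-1, +1}.
J = 1.0
def energy(x):
    return -J * np.sum(x[..., :-1] * x[..., 1:], axis=-1)

n, n_chains, n_steps = 6, 500, 50
betas = np.linspace(0.0, 1.0, n_steps + 1)

# Start chains from the exact base distribution (uniform at beta = 0).
x = rng.choice([-1, 1], size=(n_chains, n))
logw = np.zeros(n_chains)
for b0, b1 in zip(betas[:-1], betas[1:]):
    logw += -(b1 - b0) * energy(x)       # AIS importance-weight increment
    # One single-spin Metropolis sweep at inverse temperature b1.
    for site in range(n):
        xp = x.copy()
        xp[:, site] *= -1
        accept = rng.random(n_chains) < np.exp(-b1 * (energy(xp) - energy(x)))
        x[accept] = xp[accept]

# log Z(beta=1) = log|base space| + log of the mean AIS weight.
logZ_est = n * np.log(2) + np.log(np.mean(np.exp(logw)))

# Exact value for the open chain: Z = 2^n * cosh(J)^(n-1).
logZ_exact = n * np.log(2) + (n - 1) * np.log(np.cosh(J))
```

The weight average is an unbiased estimator of Z even with imperfect Metropolis mixing, which is the property the paper exploits to estimate RBM partition functions.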

Solving statistical mechanics on sparse graphs with feedback-set variational autoregressive networks.

The method extracts a small feedback vertex set from the sparse graph, converts the sparse system to a much smaller system with many-body and dense interactions with an effective energy on every configuration of the FVS, then learns a variational distribution parametrized using neural networks to approximate the original Boltzmann distribution.

Solving the quantum many-body problem with artificial neural networks

A variational representation of quantum states based on artificial neural networks with a variable number of hidden neurons is introduced, together with a reinforcement-learning scheme capable of both finding the ground state and describing the unitary time evolution of complex interacting quantum systems.