Corpus ID: 211096562

A Tensor Network Approach to Finite Markov Decision Processes

@article{Gillman2020ATN,
  title={A Tensor Network Approach to Finite Markov Decision Processes},
  author={Edward Gillman and Dominic C. Rose and Juan P. Garrahan},
  journal={ArXiv},
  year={2020},
  volume={abs/2002.05185}
}
Tensor network (TN) techniques - often used in the context of quantum many-body physics - have shown promise as a tool for tackling machine learning (ML) problems. The application of TNs to ML, however, has mostly focused on supervised and unsupervised learning. Yet, with their direct connection to hidden Markov chains, TNs are also naturally suited to Markov decision processes (MDPs) which provide the foundation for reinforcement learning (RL). Here we introduce a general TN formulation of… 
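The connection the abstract draws between TNs and Markov chains can be made concrete with a toy contraction. Below is a minimal sketch, assuming only textbook Markov-chain definitions; all names and shapes are illustrative and are not taken from the paper:

```python
# The joint distribution of a Markov chain factorises into a chain of
# transition tensors, so marginals are tensor-network contractions.
import numpy as np

rng = np.random.default_rng(0)
n, T = 3, 4  # number of states, number of transitions

P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)   # row-stochastic: P[s, s'] = p(s'|s)
p0 = np.full(n, 1.0 / n)            # uniform initial distribution

# Brute force: build the full joint tensor p(s0, s1, ..., sT), then marginalise.
joint = p0.copy()
for _ in range(T):
    joint = np.einsum('...i,ij->...ij', joint, P)
marginal_bruteforce = joint.reshape(-1, n).sum(axis=0)  # p(sT)

# Tensor-network contraction: sweep a boundary vector through the chain,
# never materialising the exponentially large joint tensor.
v = p0
for _ in range(T):
    v = v @ P
marginal_tn = v

assert np.allclose(marginal_bruteforce, marginal_tn)
print(marginal_tn)
```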

Citations

Quantum Tensor Networks for Variational Reinforcement Learning

A novel quantum tensor network approach to variational reinforcement learning is proposed, which approximates the policy with a matrix product state (MPS) and alleviates the curse of dimensionality when searching a huge state-action space.
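As a rough sketch of the MPS-policy idea summarised above (a hypothetical construction, not the cited paper's implementation; the bond dimension, binary encoding, and softmax normalisation are our illustrative choices):

```python
# Parameterise a policy over a composite state-action index with a matrix
# product state, so the table pi(a|s) is never stored explicitly.
import numpy as np

rng = np.random.default_rng(1)
n_sites, d, D = 6, 2, 4   # sites, local (binary) dimension, bond dimension

# MPS cores of shape (D, d, D); the boundaries are plain vectors.
cores = [rng.normal(size=(D, d, D)) * 0.5 for _ in range(n_sites)]
left = rng.normal(size=D)
right = rng.normal(size=D)

def mps_amplitude(bits):
    """Contract the MPS for one configuration of the n_sites indices."""
    v = left
    for core, b in zip(cores, bits):
        v = v @ core[:, b, :]
    return v @ right

# Interpret the first 3 sites as the state s, the last 3 as the action a.
def policy(s_bits):
    logits = np.array([
        mps_amplitude(list(s_bits) + [(a >> i) & 1 for i in range(3)])
        for a in range(2 ** 3)
    ])
    e = np.exp(logits - logits.max())   # softmax over the 8 actions
    return e / e.sum()

print(policy((0, 1, 1)))  # distribution over actions for one state
```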

A reinforcement learning approach to rare trajectory sampling

A general approach is presented for adaptively constructing a dynamics that efficiently samples atypical events, exploiting the methods of reinforcement learning (RL), the set of machine learning techniques aimed at finding the optimal behaviour that maximises a reward associated with the dynamics.

References

Showing 1-10 of 44 references

From Probabilistic Graphical Models to Generalized Tensor Networks for Supervised Learning

This work explores the connection between tensor networks and probabilistic graphical models, and shows that it motivates the definition of generalized tensor networks, in which information from a tensor can be copied and reused in other parts of the network.
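The "copy" operation referred to above corresponds to a delta tensor that duplicates an index. A minimal illustration (the dimension is arbitrary; this is our toy example, not the paper's construction):

```python
# A delta (copy) tensor is nonzero only where all its indices coincide,
# so contracting one leg with a vector leaves copies of it on the others.
import numpy as np

d = 3
delta = np.zeros((d, d, d))
for i in range(d):
    delta[i, i, i] = 1.0  # copy tensor

x = np.array([0.2, 0.5, 0.3])
copied = np.einsum('ijk,i->jk', delta, x)
assert np.allclose(copied, np.diag(x))  # x duplicated onto the diagonal
```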

Expressive power of tensor-network factorizations for probabilistic modeling, with applications from hidden Markov models to quantum machine learning

This work provides a rigorous analysis of the expressive power of various tensor-network factorizations of discrete multivariate probability distributions, and introduces locally purified states (LPS), a new factorization inspired by techniques for the simulation of quantum systems, with provably better expressive power than all other representations considered.
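One of the factorizations compared in that analysis, the hidden Markov model, is exactly a tensor train with nonnegative cores. A small sanity-check sketch, assuming only textbook HMM definitions (parameters are random, purely for illustration):

```python
# The HMM forward algorithm is a left-to-right contraction of a
# nonnegative tensor train over the observed indices.
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
H, V, T = 3, 2, 4   # hidden states, visible symbols, sequence length

A = rng.random((H, H)); A /= A.sum(axis=1, keepdims=True)   # transitions
B = rng.random((H, V)); B /= B.sum(axis=1, keepdims=True)   # emissions
pi = rng.random(H); pi /= pi.sum()                          # initial

def hmm_prob(obs):
    """Forward algorithm = contracting the nonnegative tensor train."""
    v = pi * B[:, obs[0]]
    for o in obs[1:]:
        v = (v @ A) * B[:, o]
    return v.sum()

# Sanity check: probabilities over all length-T sequences sum to 1.
total = sum(hmm_prob(seq) for seq in product(range(V), repeat=T))
assert np.isclose(total, 1.0)
```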

Unsupervised Generative Modeling Using Matrix Product States

This work proposes a generative model using matrix product states, a tensor network originally proposed for describing (particularly one-dimensional) entangled quantum states; the model enjoys efficient learning analogous to the density matrix renormalization group method.
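A minimal sketch of such an MPS "Born machine" (brute-force normalisation is for illustration only; practical implementations exploit canonical forms rather than enumerating configurations, and all shapes here are our choices):

```python
# Probabilities are squared MPS amplitudes, normalised by the contraction
# of the network with itself.
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
n, d, D = 5, 2, 3
cores = [rng.normal(size=(D, d, D)) for _ in range(n)]
lb, rb = rng.normal(size=D), rng.normal(size=D)

def amplitude(x):
    v = lb
    for core, xi in zip(cores, x):
        v = v @ core[:, xi, :]
    return v @ rb

# Born rule: p(x) = |psi(x)|^2 / Z
Z = sum(amplitude(x) ** 2 for x in product(range(d), repeat=n))
p = {x: amplitude(x) ** 2 / Z for x in product(range(d), repeat=n)}
assert np.isclose(sum(p.values()), 1.0)
```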

Compressing deep neural networks by matrix product operators

This work greatly simplifies the representations used in deep learning, and opens a possible route toward a framework of modern neural networks that might be simpler and cheaper, yet more efficient.
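The core operation is a tensor-train-style factorisation of a weight matrix. An illustrative two-core sketch using a truncated SVD across a single cut (this is not the paper's full algorithm; sizes and the bond dimension are our choices):

```python
# Compress a dense weight matrix into a two-core matrix product operator
# by reshaping it into a 4-index tensor and truncating an SVD.
import numpy as np

rng = np.random.default_rng(4)
W = rng.normal(size=(16, 16))            # dense layer weight, 16 = 4 * 4

# Reshape W[(i1 i2), (j1 j2)] -> T[i1, j1, i2, j2] and cut between sites.
T = W.reshape(4, 4, 4, 4).transpose(0, 2, 1, 3)
M = T.reshape(16, 16)
U, s, Vh = np.linalg.svd(M, full_matrices=False)

chi = 8                                              # truncated bond dimension
core1 = (U[:, :chi] * s[:chi]).reshape(4, 4, chi)    # legs (i1, j1, bond)
core2 = Vh[:chi].reshape(chi, 4, 4)                  # legs (bond, i2, j2)

# Reassemble the (approximate) weight matrix from the MPO cores.
T_approx = np.einsum('ija,akl->ijkl', core1, core2)  # (i1, j1, i2, j2)
W_approx = T_approx.transpose(0, 2, 1, 3).reshape(16, 16)
print(np.linalg.norm(W - W_approx) / np.linalg.norm(W))  # relative error
```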

Categorical Tensor Network States

This work presents a new and general method to factor an n-body quantum state into a tensor network of clearly defined building blocks, and uses the solution to expose a large, previously unknown class of quantum states that can be sampled efficiently and exactly.

Neural-Network Approach to Dissipative Quantum Many-Body Dynamics.

This work represents mixed many-body quantum states with neural networks in the form of restricted Boltzmann machines, and derives a variational Monte Carlo algorithm for their time evolution and stationary states based on machine-learning techniques.

Quantum Entanglement in Deep Learning Architectures.

The results show that contemporary deep learning architectures, in the form of deep convolutional and recurrent networks, can efficiently represent highly entangled quantum systems and can support volume-law entanglement scaling, polynomially more efficiently than presently employed RBMs.

Learning Thermodynamics with Boltzmann Machines

A Boltzmann machine is developed that is capable of modeling thermodynamic observables for physical systems in thermal equilibrium, faithfully reproducing the observables of the underlying physical system.
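For scale, here is a restricted Boltzmann machine over binary units small enough that its visible marginal can be evaluated exactly. A minimal sketch with random parameters (purely illustrative, not the paper's trained model):

```python
# RBM with binary units: the hidden layer can be summed out analytically,
# giving the visible marginal up to the partition function Z.
import numpy as np
from itertools import product

rng = np.random.default_rng(5)
n_v, n_h = 4, 3
W = rng.normal(scale=0.5, size=(n_v, n_h))
b = rng.normal(scale=0.1, size=n_v)   # visible biases
c = rng.normal(scale=0.1, size=n_h)   # hidden biases

def unnorm_p(v):
    """p(v) up to Z: sum_h exp(-E(v,h))
       = exp(b.v) * prod_j (1 + exp(c_j + (v W)_j))."""
    v = np.asarray(v)
    return np.exp(b @ v) * np.prod(1.0 + np.exp(c + v @ W))

vs = list(product([0, 1], repeat=n_v))
Z = sum(unnorm_p(v) for v in vs)
probs = {v: unnorm_p(v) / Z for v in vs}
assert np.isclose(sum(probs.values()), 1.0)
```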

Supervised Learning with Tensor Networks

It is demonstrated how algorithms for optimizing tensor networks can be adapted to supervised learning tasks by using matrix product states (tensor trains) to parameterize non-linear kernel learning models.
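A hedged sketch of that construction: a local feature map lifts each input component, and an MPS carrying one extra "label" leg contracts the feature vectors into class scores. Weights are random here (in the paper they are optimised with a DMRG-like sweep), and all shapes are our illustrative choices:

```python
# MPS classifier: contract local feature vectors through the chain and
# read class scores off a dedicated label leg in the middle.
import numpy as np

rng = np.random.default_rng(6)
n, d, D, n_classes = 6, 2, 4, 3
cores = [rng.normal(scale=0.5, size=(D, d, D)) for _ in range(n)]
label_tensor = rng.normal(size=(D, n_classes, D))  # carries the label leg
lb, rb = rng.normal(size=D), rng.normal(size=D)

def feature(x):
    """Local feature map from [0, 1] to a length-2 vector."""
    return np.array([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)])

def scores(x_vec):
    v = lb
    for core, x in zip(cores[:3], x_vec[:3]):
        v = v @ np.einsum('abc,b->ac', core, feature(x))
    v = np.einsum('a,alc->lc', v, label_tensor)        # expose label leg
    for core, x in zip(cores[3:], x_vec[3:]):
        v = v @ np.einsum('abc,b->ac', core, feature(x))
    return v @ rb                                      # one score per class

x = rng.random(n)
print(scores(x), '-> predicted class', np.argmax(scores(x)))
```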

Learning relevant features of data with multi-scale tensor networks

Inspired by coarse-graining approaches used in physics, it is shown how algorithms based on layered tree tensor networks can be adapted to learning relevant features of data, scaling linearly with both the dimension of the input and the training set size.
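A toy version of one such coarse-graining layer: adjacent feature vectors are merged pairwise by three-leg tensors, halving the number of sites per layer. All shapes and the random layer tensors are our illustrative choices, not the paper's trained isometries:

```python
# One tree-tensor-network layer contracts each adjacent pair of feature
# vectors into a single coarser feature vector.
import numpy as np

rng = np.random.default_rng(7)
d = 2                                  # local feature dimension
sites = [rng.random(d) for _ in range(8)]

def coarse_grain(vectors, d_out):
    """One tree layer: contract each adjacent pair with a 3-leg tensor."""
    out = []
    for a, b in zip(vectors[0::2], vectors[1::2]):
        w = rng.normal(size=(d_out, len(a), len(b)))   # layer tensor
        out.append(np.einsum('oij,i,j->o', w, a, b))
    return out

layer1 = coarse_grain(sites, 4)    # 8 sites -> 4
layer2 = coarse_grain(layer1, 4)   # 4 sites -> 2
top = coarse_grain(layer2, 4)[0]   # 2 sites -> 1 coarse feature vector
print(top.shape)                   # (4,)
```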