Corpus ID: 221554599

Solving the k-sparse Eigenvalue Problem with Reinforcement Learning

@article{Zhou2020SolvingTK,
  title={Solving the k-sparse Eigenvalue Problem with Reinforcement Learning},
  author={Li Zhou and Lihao Yan and Mark A. Caprio and Weiguo Gao and Chao Yang},
  journal={ArXiv},
  year={2020},
  volume={abs/2009.04414}
}
We examine the possibility of using a reinforcement learning (RL) algorithm to solve large-scale eigenvalue problems in which the desired eigenvector can be approximated by a sparse vector with at most $k$ nonzero elements, where $k$ is relatively small compared to the dimension of the matrix to be partially diagonalized. This type of problem arises in applications in which the desired eigenvector exhibits localization properties and in large-scale eigenvalue computations in which the amount…
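The $k$-sparse constraint described in the abstract can be made concrete with a simple baseline that is *not* the paper's RL method: a truncated power iteration that hard-thresholds the iterate to its $k$ largest-magnitude entries at every step. The function name and the test matrix below are illustrative assumptions, sketched for a symmetric matrix whose dominant eigenvector is localized on a small block.

```python
import numpy as np

def ksparse_eig(H, k, iters=200, seed=0):
    """Approximate the dominant k-sparse eigenpair of a symmetric matrix H
    by truncated power iteration: after each matrix-vector product, keep
    only the k largest-magnitude entries and renormalize. This is a simple
    baseline sketch, not the RL algorithm of the paper."""
    rng = np.random.default_rng(seed)
    n = H.shape[0]
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = H @ x
        # Hard threshold: zero out all but the k largest-magnitude entries.
        idx = np.argpartition(np.abs(y), -k)[-k:]
        z = np.zeros(n)
        z[idx] = y[idx]
        x = z / np.linalg.norm(z)
    lam = x @ H @ x  # Rayleigh quotient on the sparse iterate
    return lam, x

# Hypothetical example: weak background coupling plus a strong 3x3 block,
# so the dominant eigenvector is localized on the first k = 3 indices.
n, k = 50, 3
H = 0.01 * np.ones((n, n))
H[:k, :k] += 10.0 * np.eye(k) + 5.0
lam, x = ksparse_eig(H, k)
```

Restricted to the block, the leading eigenvalue is $15.01 + 2 \cdot 5.01 = 25.03$ with a uniform eigenvector, and the iteration recovers a vector supported on exactly those $k$ indices. Hard thresholding of this kind can stall on a poor support in general; the paper's premise is precisely that smarter (RL-driven) support selection is needed.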
3 Citations

Improved Algorithms for Low Rank Approximation from Sparsity
Although this algorithm runs in exponential time, as it must under standard complexity-theoretic assumptions, it is shown that there are polynomial-time algorithms using poly(s, k, log n, ε) memory that output rank-k approximations supported on an O(sk/ε) × O(sk/ε) submatrix.

Reinforcement Learning Configuration Interaction.
This work explores the possibility of utilizing reinforcement learning approaches to solve the selected CI (sCI) problem by mapping the configuration interaction problem onto a sequential decision-making process; the method learns on the fly which determinants to include and which to ignore, yielding a compressed wave function at near-FCI accuracy.

Reinforcement Learning Configuration Interaction
A reinforcement learning algorithm is developed for the selected configuration interaction problem. We explore how reinforcement learning can obtain compact wave functions at near full configuration…
