• Corpus ID: 238226747

Learning the Markov Decision Process in the Sparse Gaussian Elimination

@article{Chen2021LearningTM,
  title={Learning the Markov Decision Process in the Sparse Gaussian Elimination},
  author={Yingshi Chen},
  journal={ArXiv},
  year={2021},
  volume={abs/2109.14929}
}
  • Yingshi Chen
  • Published 30 September 2021
  • Computer Science, Mathematics
  • ArXiv
We propose a learning-based approach for sparse Gaussian Elimination. There are many hard combinatorial optimization problems in a modern sparse solver. These NP-hard problems can be handled in the framework of the Markov Decision Process, in particular with the Q-learning technique. We propose Q-learning algorithms for the main modules of a sparse solver: minimum degree ordering, task scheduling, and adaptive pivoting. Finally, we recast the sparse solver into the framework of Q-learning. Our study… 
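The abstract treats solver decisions (elimination order, task schedule, pivot choice) as actions in an MDP learned with Q-learning. As a minimal sketch of the underlying technique — not the paper's actual algorithms — here is tabular Q-learning on a toy chain MDP; the environment and all names are illustrative:

```python
import random

random.seed(0)

def q_learning(n_states, n_actions, step, episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2, r, done = step(s, a)
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

# Toy chain MDP with states 0..3: action 1 moves right (reward 1 at the
# terminal state 3), action 0 stays in place with no reward.
def step(s, a):
    if a == 1:
        s2 = s + 1
        return s2, (1.0 if s2 == 3 else 0.0), s2 == 3
    return s, 0.0, False

Q = q_learning(4, 2, step)
```

After training, the learned values prefer moving toward the goal, e.g. `Q[2][1] > Q[2][0]`.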

Citations

Fast Block Linear System Solver Using Q-Learning Scheduling for Unified Dynamic Power System Simulations
TLDR
A fast block direct solver for the unified dynamic simulation of power systems that uses a novel Q-learning based task-tree scheduling technique in the framework of the Markov Decision Process.

References

SHOWING 1-10 OF 67 REFERENCES
Geometric deep reinforcement learning for dynamic DAG scheduling
TLDR
This paper proposes a reinforcement learning approach to a realistic scheduling problem and applies it to an algorithm commonly executed in the high-performance computing community, the Cholesky factorization, showing that the approach is competitive with state-of-the-art heuristics used in high-performance computing runtime systems.
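The scheduling problem this reference studies — dispatching the tasks of a factorization DAG onto workers — is often approached with greedy list scheduling as a baseline. A minimal sketch under simplifying assumptions (identical workers, no communication costs; all names are illustrative):

```python
def list_schedule(tasks, deps, n_workers):
    """Greedy list scheduling of a task DAG onto identical workers.

    tasks: dict task -> duration; deps: dict task -> set of prerequisites.
    Returns (schedule, makespan) where schedule maps task -> (start, finish).
    """
    indeg = {t: len(deps.get(t, ())) for t in tasks}
    succs = {t: [] for t in tasks}
    for t, ps in deps.items():
        for p in ps:
            succs[p].append(t)
    ready = [t for t in tasks if indeg[t] == 0]
    free = [0.0] * n_workers                 # time each worker becomes free
    finish, sched = {}, {}
    while ready:
        t = ready.pop(0)                     # FIFO among ready tasks
        earliest = max((finish[p] for p in deps.get(t, ())), default=0.0)
        w = min(range(n_workers), key=lambda i: free[i])
        start = max(earliest, free[w])
        free[w] = start + tasks[t]
        finish[t] = free[w]
        sched[t] = (start, free[w])
        for s in succs[t]:                   # release newly ready successors
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return sched, max(finish.values())

# C depends on A and B; with 2 workers, A and B run in parallel, then C.
sched, makespan = list_schedule({'A': 2, 'B': 3, 'C': 1}, {'C': {'A', 'B'}}, 2)
# makespan → 4.0
```

An RL scheduler like the one in the reference learns a dispatch policy instead of using the fixed FIFO/earliest-worker rule above.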
Algorithms for sparse Gaussian elimination with partial pivoting
TLDR
The conclusion is that partial pivoting codes perform well and that they should be considered for sparse problems whenever pivoting for numerical stability is required.
Replacing Pivoting in Distributed Gaussian Elimination with Randomized Techniques
TLDR
This work proposes replacing pivoting with recursive butterfly transforms (RBTs) and iterative refinement and shows that the proposed solver was able to outperform GEPP when distributed on GPU-accelerated nodes.
SuperLU_DIST: A scalable distributed-memory sparse direct solver for unsymmetric linear systems
TLDR
The main algorithmic features in the software package SuperLU_DIST, a distributed-memory sparse direct solver for large sets of linear equations, are presented, with an innovative static pivoting strategy proposed earlier by the authors.
Solving Markov Decision Processes via Simulation
TLDR
This chapter presents an overview of simulation-based techniques useful for solving Markov decision processes (MDPs), and presents a step-by-step description of a selected group of algorithms for infinite-horizon discounted reward MDPs.
Elimination structures for unsymmetric sparse LU factors
The elimination tree is central to the study of Cholesky factorization of sparse symmetric positive definite matrices. In this paper, the elimination tree is generalized to a structure appropriate…
Q-learning
TLDR
This paper presents and proves in detail a convergence theorem for Q-learning based on that outlined in Watkins (1989), showing that Q-learning converges to the optimum action-values with probability 1 so long as all actions are repeatedly sampled in all states and the action-values are represented discretely.
Improved Symbolic and Numerical Factorization Algorithms for Unsymmetric Sparse Matrices
  • Anshul Gupta
  • Mathematics, Computer Science
    SIAM J. Matrix Anal. Appl.
  • 2002
TLDR
Two algorithms for the symbolic and numerical factorization phases in the direct solution of sparse unsymmetric systems of linear equations have been implemented in WSMP and have enabled WSMP to significantly outperform other similar solvers.
Performance of Greedy Ordering Heuristics for Sparse Cholesky Factorization
TLDR
Two new heuristics for ordering sparse matrices for Cholesky factorization are developed: modified minimum deficiency (MMDF) and modified multiple minimum degree (MMMD); the former uses a metric similar to deficiency, while the latter uses a degree-like metric.
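The greedy ordering heuristics this reference evaluates all follow the same template: repeatedly eliminate the vertex that minimizes some metric (degree, deficiency, or a variant). A minimal sketch of plain minimum degree ordering — without the supernode and quotient-graph refinements real codes use; names are illustrative:

```python
def minimum_degree_order(adj):
    """Greedy minimum degree ordering of an undirected graph.

    adj: dict mapping vertex -> set of neighbours. Eliminating a vertex
    joins its remaining neighbours into a clique (modelling the fill-in).
    """
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a copy
    order = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))      # vertex of minimum degree
        nbrs = adj.pop(v)
        for u in nbrs:
            adj[u].discard(v)
            adj[u] |= nbrs - {u}                     # clique the neighbours
        order.append(v)
    return order

# 4-vertex path 0-1-2-3: degree-1 endpoints get eliminated before interior vertices.
print(minimum_degree_order({0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}))
# → [0, 1, 2, 3]
```

MMDF and MMMD would swap the `len(adj[u])` metric for their deficiency-like and degree-like variants, respectively.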
A Communication-Avoiding 3D LU Factorization Algorithm for Sparse Matrices
TLDR
A new algorithm to improve the strong scalability of right-looking sparse LU factorization on distributed-memory systems; it uses a three-dimensional MPI process grid, aggressively exploits elimination-tree parallelism, and trades increased memory for reduced per-process communication.