Corpus ID: 198229534

Exploiting variable precision in GMRES

@article{Gratton2019ExploitingVP,
  title={Exploiting variable precision in GMRES},
  author={Serge Gratton and Ehouarn Simon and David Titley-P{\'e}loquin and Philippe L. Toint},
  journal={ArXiv},
  year={2019},
  volume={abs/1907.10550}
}
We describe how variable precision floating point arithmetic can be used in the iterative solver GMRES. We show how the precision of the inner products carried out in the algorithm can be reduced as the iterations proceed, without affecting the convergence rate or final accuracy achieved by the iterates. Our analysis explicitly takes into account the resulting loss of orthogonality in the Arnoldi vectors. We also show how inexact matrix-vector products can be incorporated into this setting. 
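The precision-schedule idea in the abstract can be illustrated with a small, self-contained sketch (an illustration only, not the authors' algorithm or error analysis): modified Gram-Schmidt orthogonalization of Krylov-style vectors in which the inner products at later steps are rounded to fewer significant bits. The `chop`, `dot`, and `mgs` helpers are hypothetical names introduced for this example; low precision is simulated by rounding results to t significant bits.

```python
import math
import random

def chop(x, t):
    """Round x to t significant bits, mimicking storage in precision-t arithmetic."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)                      # x = m * 2**e with 0.5 <= |m| < 1
    return math.ldexp(round(m * 2 ** t) / 2 ** t, e)

def dot(u, v, t):
    """Inner product whose result is kept to only t significant bits."""
    return chop(sum(ui * vi for ui, vi in zip(u, v)), t)

def mgs(vectors, bits):
    """Modified Gram-Schmidt where step k uses bits[k]-bit inner products."""
    basis = []
    for k, v in enumerate(vectors):
        w = list(v)
        for q in basis:
            h = dot(w, q, bits[k])            # reduced-precision inner product
            w = [wi - h * qi for wi, qi in zip(w, q)]
        nrm = math.sqrt(sum(wi * wi for wi in w))
        basis.append([wi / nrm for wi in w])
    return basis

def max_offdiag(basis):
    """Worst loss of orthogonality: max |<q_i, q_j>| over i != j."""
    return max(abs(sum(a * b for a, b in zip(basis[i], basis[j])))
               for i in range(len(basis)) for j in range(i))

random.seed(0)
vecs = [[random.gauss(0.0, 1.0) for _ in range(8)] for _ in range(4)]
full = mgs(vecs, [53, 53, 53, 53])            # double-precision inner products
mixed = mgs(vecs, [53, 40, 30, 20])           # precision decays with the step
```

Running the sketch, the fully double-precision basis is orthogonal to about machine precision, while the variable-precision basis loses orthogonality only at roughly the level set by the coarsest (20-bit) inner products, consistent with the claim that precision can be reduced without destroying the basis.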
Accelerating Restarted GMRES With Mixed Precision Arithmetic
TLDR
The generalized minimum residual method (GMRES) is a commonly used iterative Krylov solver for sparse, non-symmetric systems of linear equations. Theoretical results are provided that link the convergence of finite-precision GMRES with classical Gram-Schmidt with reorthogonalization to its infinite-precision counterpart.
Improving the Performance of the GMRES Method using Mixed-Precision Techniques
TLDR
It is found that GMRES only needs double precision when computing the residual and updating the approximate solution to achieve double-precision accuracy, although it must restart after each improvement of single-precision accuracy.
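A minimal pure-Python sketch of the mixed-precision pattern this summary describes (assumptions: "single precision" is simulated by rounding to 24 significant bits, and the low-precision solve is mimicked by rounding the computed correction; this is not the paper's implementation): the residual and the solution update are carried in double precision, and the solver still reaches double-precision accuracy after a few refinement sweeps.

```python
import math

def chop(x, t=24):
    """Round x to t significant bits (24 bits mimics IEEE single precision)."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)
    return math.ldexp(round(m * 2 ** t) / 2 ** t, e)

def solve(A, b):
    """Gaussian elimination with partial pivoting, in full double precision."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def refine(A, b, sweeps=3):
    """Iterative refinement: residual and update in double, correction in 'single'."""
    n = len(b)
    x = [0.0] * n
    for _ in range(sweeps):
        r = [bi - sum(aij * xj for aij, xj in zip(row, x))   # double-precision residual
             for row, bi in zip(A, b)]
        d = [chop(di) for di in solve(A, r)]                 # correction kept to 24 bits
        x = [xi + di for xi, di in zip(x, d)]                # double-precision update
    return x
```

After one sweep the error sits at the single-precision level; a couple of further cheap refinements push it down to double precision, which is the behavior the summary describes.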
A survey of numerical linear algebra methods utilizing mixed-precision arithmetic
TLDR
This work provides a comprehensive survey of mixed-precision numerical linear algebra routines, including the underlying concepts, theoretical background, and experimental results for both dense and sparse linear algebra problems.
A Study of Mixed Precision Strategies for GMRES on GPUs
TLDR
This work presents strategies to determine when mixed precision GMRES will be effective and to choose parameters for a mixed precision iterative refinement solver to achieve better performance, and demonstrates the promise of mixed precision approaches.
A Survey of Numerical Methods Utilizing Mixed Precision Arithmetic
TLDR
This survey focuses on how mixed- and multi-precision technology can help improve the performance of these methods, and presents highlights of applications that significantly outperform traditional fixed-precision methods.
Experimental Evaluation of Multiprecision Strategies for GMRES on GPUs
TLDR
This work presents strategies to determine when multi-precision GMRES will be effective and to choose parameters for a multi-precision iterative refinement solver to achieve better performance, using an implementation based on the Trilinos library that employs Kokkos Kernels for performance portability of linear algebra kernels.
GMRES algorithms over 35 years
  • Qinmeng Zou
  • Computer Science, Mathematics
  • ArXiv
  • 2021
TLDR
This paper is about GMRES algorithms for the solution of nonsingular linear systems and focuses on acceleration strategies and parallel algorithms that are useful for solving challenging systems.
Mixed Precision Low Rank Approximations and Their Application to Block Low Rank LU Factorization
We introduce a novel approach to exploit mixed precision arithmetic for low rank approximations. Our approach is based on the observation that singular vectors associated with small singular values …
Mixed-precision explicit stabilized Runge-Kutta methods for single- and multi-scale differential equations
  • M. Croci, Giacomo Rosilho de Souza
  • Computer Science, Mathematics
  • ArXiv
  • 2021
TLDR
This work designs mixed-precision Runge–Kutta–Chebyshev (RKC) methods, where high precision is used for accuracy and low precision for stability, and proves that while these methods are essentially as cheap as their fully low-precision equivalents, they retain the convergence order of their high-precision counterparts.
Compressed Basis GMRES on High Performance GPUs
TLDR
A new communication-reduction strategy is described for the (Krylov) GMRES solver that advocates decoupling the storage format of the orthogonal basis from the arithmetic precision employed during the operations with that basis.

References

SHOWING 1-10 OF 29 REFERENCES
Investigating half precision arithmetic to accelerate dense linear system solvers
TLDR
This work shows for the first time how the use of FP16 arithmetic can significantly accelerate, as well as make more energy efficient, FP32 or FP64-precision Ax = b solvers.
Harnessing GPU Tensor Cores for Fast FP16 Arithmetic to Speed up Mixed-Precision Iterative Refinement Solvers
TLDR
An investigation is presented showing that other high-performance computing (HPC) applications can also harness this power of floating-point arithmetic, including how using half-precision Tensor Cores (FP16-TC) for the arithmetic can provide up to a 4× speedup.
Minimizing convex quadratic with variable precision Krylov methods
TLDR
Iterative algorithms for the solution of convex quadratic optimization problems are investigated, which exploit inaccurate matrix-vector products and have significant potential in the steadily more important context of multi-precision computations.
Simulating Low Precision Floating-Point Arithmetic
The half-precision (fp16) floating-point format, defined in the 2008 revision of the IEEE standard for floating-point arithmetic, and a more recently proposed half-precision format bfloat16, are in...
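The simulation idea can be sketched in a few lines of stdlib Python (a toy version only: the software this reference describes also handles rounding modes and subnormals, which are omitted here): round a double to t significand bits and overflow past a maximum exponent, with t=11, emax=15 mimicking fp16 and t=8, emax=127 mimicking bfloat16.

```python
import math

def simulate(x, t, emax):
    """Round a double to a toy format with t significand bits and max exponent emax.

    Subnormals are not modeled, and a value that rounds up past the format's
    largest finite number is not flushed to infinity in this sketch.
    """
    if x == 0.0 or math.isinf(x) or math.isnan(x):
        return x
    m, e = math.frexp(x)                      # x = m * 2**e with 0.5 <= |m| < 1
    if e - 1 > emax:                          # magnitude beyond the format's range
        return math.inf if x > 0 else -math.inf
    return math.ldexp(round(m * 2 ** t) / 2 ** t, e)   # round-to-nearest-even

fp16 = lambda x: simulate(x, 11, 15)          # IEEE binary16 parameters
bf16 = lambda x: simulate(x, 8, 127)          # bfloat16 parameters
```

For example, `fp16(0.1)` returns 0.0999755859375 and `bf16(0.1)` returns 0.10009765625, matching the nearest representable values in those formats.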
Accelerating the Solution of Linear Systems by Iterative Refinement in Three Precisions
TLDR
The results suggest that on architectures for which half precision is efficiently implemented, it will be possible to solve certain linear systems up to twice as fast, and to greater accuracy, than with a standard solver that uses LU factorization in single precision.
Inexact Matrix-Vector Products in Krylov Methods for Solving Linear Systems: A Relaxation Strategy
TLDR
This paper experimentally shows that Krylov methods for solving linear systems can still perform very well in the presence of carefully monitored inexact matrix-vector products.
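A toy illustration of the general point (not the paper's relaxation criterion for Krylov methods, which is more refined): a stationary Richardson iteration in which each matrix-vector product is perturbed by an error sized relative to the current residual norm still converges to the exact solution, merely at a degraded linear rate. All names below are hypothetical.

```python
import math
import random

def richardson(A, b, matvec_noise=0.0, iters=200, omega=0.2):
    """Richardson iteration x <- x + omega*(b - A x), where each matrix-vector
    product is perturbed by an error of size matvec_noise * ||r||."""
    random.seed(1)
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        Ax = [sum(aij * xj for aij, xj in zip(row, x)) for row in A]
        r = [bi - axi for bi, axi in zip(b, Ax)]
        rnorm = math.sqrt(sum(ri * ri for ri in r))
        # perturb the effective product: the absolute error shrinks along with
        # the residual, so the inexactness never swamps the remaining work
        r = [ri + matvec_noise * rnorm * random.uniform(-1.0, 1.0) for ri in r]
        x = [xi + omega * ri for xi, ri in zip(x, r)]
    return x
```

With a well-conditioned symmetric system, even a 10% relative perturbation of every product leaves the iteration convergent to the true solution.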
Some observations on weighted GMRES
TLDR
It is found that weighted GMRES may outperform unweighted GMRES for some problems, but more often this method is not competitive with other Krylov subspace methods like GMRES with deflated restarting or BiCGSTAB, in particular when a preconditioner is used.
Convergence in Backward Error of Relaxed GMRES
This work is a follow-up to the experimental study presented in [A. Bouras and V. Frayssé, SIAM J. Matrix Anal. Appl., 26 (2005), pp. 660-678]. It is based on and extends some theoretical results …
Inexact Krylov Subspace Methods for Linear Systems
TLDR
It is argued that the sensitivity towards perturbations is mainly determined by the underlying way the Krylov subspace is constructed and does not depend on the optimality properties of the particular method.
Numerical behaviour of the modified Gram-Schmidt GMRES implementation
In [6] the Generalized Minimal Residual Method (GMRES), which constructs the Arnoldi basis and then solves the transformed least squares problem, was studied. It was proved that GMRES with the …