ART: adaptive residual-time restarting for Krylov subspace matrix exponential evaluations

@article{Botchev2020ARTAR,
  title={ART: adaptive residual-time restarting for Krylov subspace matrix exponential evaluations},
  author={Mike A. Botchev and Leonid A. Knizhnerman},
  journal={ArXiv},
  year={2020},
  volume={abs/1812.10165}
}

Citations

An accurate restarting for shift-and-invert Krylov subspaces computing matrix exponential actions of nonsymmetric matrices
TLDR
An accurate residual-time (AccuRT) restarting for computing matrix exponential actions of nonsymmetric matrices by the shift-and-invert (SAI) Krylov subspace method is proposed, and improved accuracy and efficiency are demonstrated.
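For orientation (our notation, not quoted from the paper): a "matrix exponential action" here means the vector
$$ y(t) \;=\; \exp(-tA)\,v, $$
i.e. the solution of the initial value problem $y'(t) = -A\,y(t)$, $y(0) = v$, for a large sparse matrix $A$ and a given vector $v$; the Krylov methods discussed throughout this page approximate this vector without ever forming $\exp(-tA)$.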
A residual concept for Krylov subspace evaluation of the $\varphi$ matrix function
TLDR
Numerical tests demonstrate the efficiency of the proposed algorithm for solving large-scale evolution problems resulting from spatially discretized time-dependent PDEs, in particular diffusion and convection–diffusion problems.
A residual concept for Krylov subspace evaluation of the $\varphi$ matrix function
TLDR
An efficient Krylov subspace algorithm is proposed for computing actions of the $\varphi$ matrix function for large matrices, based on a reliable residual-based stopping criterion and a new efficient restarting procedure.
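As background (standard definitions in our notation, not taken from either paper): the $\varphi$ function referred to in these two entries is
$$ \varphi(z) \;=\; \frac{e^{z}-1}{z}, \qquad \varphi(0) = 1, $$
and its matrix action arises because the solution of $y'(t) = -A\,y(t) + g$, $y(0) = v$, can be written as $y(t) = v + t\,\varphi(-tA)\,(g - Av)$, so that a single Krylov evaluation of $\varphi$ yields the exact solution of this constant-source linear problem at time $t$.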
Coarse grid corrections in Krylov subspace evaluations of the matrix exponential
TLDR
A coarse grid correction (CGC) approach is proposed to enhance the efficiency of matrix exponential and φ matrix function evaluations for iterative methods that compute the matrix-vector products with these functions.
A conjugate-gradient-type rational Krylov subspace method for ill-posed problems
TLDR
It is shown that using the same idea for the shift-and-invert rational Krylov subspace yields an order-optimal regularisation scheme.
A conjugate-gradient-type rational Krylov subspace method for ill-posed problems (arXiv preprint, 8 August 2019)
Conjugate gradients on the normal equation (CGNE) is a popular method to regularise linear inverse problems. The idea of the method can be summarised as minimising the residuum over a suitable
A study of defect-based error estimates for the Krylov approximation of φ-functions
TLDR
A posteriori error bounds and estimates based on the notion of the defect (residual) of the Krylov approximation are considered, including a new error bound which compares favorably to existing error bounds in specific cases.
Exponential time integrators for unsteady advection-diffusion problems on refined meshes
  • M. Botchev
  • Computer Science
    Lecture Notes in Computational Science and Engineering
  • 2021
TLDR
It is shown that exponential time integrators can be an efficient, yet conceptually simple, option in this case; the comparison includes the two-stage Rosenbrock method ROS2, which has been a popular alternative to splitting methods for solving advection-diffusion problems.
Fast Multiscale Diffusion on Graphs
TLDR
This work tightens a bound on the approximation error of truncated Chebyshev polynomial approximations of the exponential, significantly improving a priori estimates of the polynomial order needed for a prescribed error, and exploits properties of these approximations to factorize the computation of the action of the diffusion operator over multiple scales, drastically reducing its computational cost.
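To make the technique concrete, here is a minimal self-contained sketch (our code, not the paper's; names such as `chebyshev_expm_action`, `lmax`, and `deg` are illustrative) of applying a truncated Chebyshev approximation of the exponential to compute a graph diffusion $\exp(-tL)x$ for a Laplacian $L$:

```python
# Minimal sketch (assumed names, not the paper's code): approximate exp(-t*L) @ x
# for a symmetric graph Laplacian L with eigenvalues in [0, lmax], using a
# truncated Chebyshev expansion of the scalar function exp(-t*lambda).
import numpy as np

def chebyshev_expm_action(L, x, t, lmax, deg=30):
    # Chebyshev coefficients of f(s) = exp(-t*lambda), lambda = (lmax/2)*(s+1),
    # i.e. the spectrum [0, lmax] mapped onto the Chebyshev interval [-1, 1].
    f = lambda s: np.exp(-t * (lmax / 2.0) * (s + 1.0))
    c = np.polynomial.chebyshev.chebinterpolate(f, deg)
    B = (2.0 / lmax) * L - np.eye(L.shape[0])   # operator with spectrum in [-1, 1]
    T_prev, T_curr = x, B @ x                   # T_0(B) x and T_1(B) x
    y = c[0] * T_prev + c[1] * T_curr
    for k in range(2, deg + 1):
        T_prev, T_curr = T_curr, 2.0 * (B @ T_curr) - T_prev  # three-term recurrence
        y = y + c[k] * T_curr
    return y

# Tiny check on a path-graph Laplacian (largest eigenvalue < 4)
n = 6
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
L = np.diag(A.sum(axis=1)) - A
x = np.random.default_rng(0).standard_normal(n)
w, U = np.linalg.eigh(L)
exact = U @ (np.exp(-0.5 * w) * (U.T @ x))
print(np.linalg.norm(chebyshev_expm_action(L, x, t=0.5, lmax=4.0) - exact))
```

The truncation degree `deg` is exactly the quantity that a sharper a priori error bound allows one to choose less conservatively.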
Diffusion-Wasserstein Distances for Attributed Graphs
This thesis is about the definition and study of the Diffusion-Wasserstein distances between attributed graphs. An attributed graph is a collection of points with individual descriptions (features)

References

Showing 1-10 of 49 references
Deflated Restarting for Matrix Functions
We investigate an acceleration technique for restarted Krylov subspace methods for computing the action of a function of a large sparse matrix on a vector. Its effect is to ultimately deflate a
A Restarted Krylov Subspace Method for the Evaluation of Matrix Functions
TLDR
The Arnoldi algorithm for approximating a function of a matrix times a vector can be restarted in a manner analogous to restarted Krylov subspace methods for solving linear systems of equations, and the restarted method inherits the superlinear convergence property of its unrestarted counterpart for entire functions.
Residual, Restarting, and Richardson Iteration for the Matrix Exponential
TLDR
It is shown how the residual can be computed efficiently within several iterative methods for the matrix exponential, and how this completely resolves the question of reliable stopping criteria for these methods.
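As a pointer to what the residual means here (a standard derivation in our notation, not a quotation from the paper): if $y_m(t) = \beta\, V_m \exp(-tH_m)\, e_1$, $\beta = \|v\|$, is the Arnoldi approximation to $y(t) = \exp(-tA)v$, then its residual with respect to the ODE $y' = -Ay$ is
$$ r_m(t) \;:=\; -A\,y_m(t) - y_m'(t) \;=\; -\beta\, h_{m+1,m}\,\bigl(e_m^{T}\exp(-tH_m)e_1\bigr)\, v_{m+1}, $$
so that $\|r_m(t)\| = \beta\, h_{m+1,m}\,\bigl|e_m^{T}\exp(-tH_m)e_1\bigr|$ is cheaply computable from the small projected matrix $H_m$, which is what makes it usable both as a stopping criterion and as a trigger for restarting in time.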
Analysis of some Krylov subspace approximations to the matrix exponential operator
  • Y. Saad
  • Computer Science, Mathematics
  • 1992
In this note a theoretical analysis of some Krylov subspace approximations to the matrix exponential operation $\exp(A)v$ is presented, and a priori and a posteriori error estimates are established.
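The approximation analyzed in this reference is the one underlying essentially all of the methods above; the following is a minimal sketch (our code, dense matrices for simplicity) of the Arnoldi evaluation $\exp(A)v \approx \beta\, V_m \exp(H_m)\, e_1$ with $\beta = \|v\|$:

```python
# Minimal sketch (our code, not from the paper): Arnoldi approximation
#   exp(A) v  ~=  beta * V_m @ expm(H_m) @ e1,   beta = ||v||.
import numpy as np
from scipy.linalg import expm

def arnoldi_expm_action(A, v, m):
    n = v.size
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                # "happy breakdown": result is exact
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m)
    e1[0] = 1.0
    return beta * V[:, :m] @ (expm(H[:m, :m]) @ e1)

# Example: 1D Laplacian (negative definite), compared with a dense reference
n = 100
A = -(2.0 * np.eye(n) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
v = np.random.default_rng(1).standard_normal(n)
print(np.linalg.norm(arnoldi_expm_action(A, v, m=30) - expm(A) @ v))
```

For entire functions such as the exponential the error typically decays superlinearly in the subspace dimension $m$, which is what such a priori estimates quantify.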
Efficient and Stable Arnoldi Restarts for Matrix Functions Based on Quadrature
TLDR
An integral representation for the error of the iterates in the Arnoldi method is utilized which allows for an efficient quadrature-based restarting algorithm suitable for a large class of functions, including the so-called Stieltjes functions and the exponential function.
On Restart and Error Estimation for Krylov Approximation of w=f(A)v
TLDR
This paper shows how to apply restarts in the general case of approximating w = f(A)v; the resulting method approximates the exponential operator much more efficiently than the standard Krylov algorithm and is especially useful for functions which cannot be factored into a product of functions.
Using Nonorthogonal Lanczos Vectors in the Computation of Matrix Functions
TLDR
Although the Lanczos vectors produced in finite precision arithmetic are not orthogonal, it is shown why they can still be used effectively for purposes such as solving linear systems and computing the matrix exponential.
On Krylov Subspace Approximations to the Matrix Exponential Operator
TLDR
A new class of time integration methods is proposed for large systems of nonlinear differential equations; these methods use Krylov approximations to the exponential function of the Jacobian instead of solving linear or nonlinear systems of equations at every time step.
Preconditioning Lanczos Approximations to the Matrix Exponential
TLDR
It is argued that for these applications the convergence behavior of the Lanczos method can be unsatisfactory, and a modified method is proposed that resolves this by a simple preconditioned transformation, at the cost of an inner-outer iteration.
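The preconditioned transformation in question is commonly realized as the shift-and-invert (SAI) transformation that also appears in the first citation above; a sketch in our notation (an assumption about the exact form, not a quotation): for a shift $\gamma > 0$, the Krylov subspace is built for $(I + \gamma A)^{-1}$ instead of $A$, and with $V_m$, $H_m$ produced by Arnoldi (or Lanczos, for symmetric $A$) applied to this inverse, the exponential action is recovered as
$$ \exp(-tA)\,v \;\approx\; \beta\, V_m \exp\!\bigl(-t\,\gamma^{-1}(H_m^{-1} - I)\bigr)\, e_1, \qquad \beta = \|v\|. $$
Each Krylov step then requires solving a linear system with $I + \gamma A$, which is the inner iteration referred to above.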