Randomized algorithms for generalized singular value decomposition with application to sensitivity analysis

Arvind K. Saibaba, Joseph L. Hart, Bart G. van Bloemen Waanders
Numerical Linear Algebra with Applications
The generalized singular value decomposition (GSVD) is a valuable tool that has many applications in computational science. However, computing the GSVD for large-scale problems is challenging. Motivated by applications in hyper-differential sensitivity analysis (HDSA), we propose new randomized algorithms for computing the GSVD which use randomized subspace iteration and weighted QR factorization. Detailed error analysis is given which provides insight into the accuracy of the algorithms and…
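As a rough illustration of the randomized subspace iteration ingredient, a generic numpy sketch (not the paper's weighted-QR GSVD algorithm; the function name and parameters are placeholders):

```python
import numpy as np

def randomized_subspace_iteration(A, rank, n_power=2, oversample=10, rng=None):
    """Generic randomized subspace iteration: returns an orthonormal
    basis Q whose range approximates the dominant column space of A."""
    rng = np.random.default_rng(rng)
    n = A.shape[1]
    k = min(rank + oversample, n)
    Y = A @ rng.standard_normal((n, k))   # Gaussian sketch of the range
    Q, _ = np.linalg.qr(Y)
    for _ in range(n_power):              # power iterations with
        Q, _ = np.linalg.qr(A.T @ Q)      # re-orthonormalization at each
        Q, _ = np.linalg.qr(A @ Q)        # step for numerical stability
    return Q
```

The basis Q can then be used to form a low-rank approximation Q(QᵀA), which is the step the paper's error analysis bounds.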

Randomized GCUR decompositions

An efficient randomized algorithm for computing a generalized CUR decomposition, which provides low-rank approximations of both matrices simultaneously in terms of some of their rows and columns and provides advantages over the standard CUR approximation for some applications.
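For orientation, here is a minimal sketch of the standard single-matrix CUR approximation that GCUR generalizes, using leverage scores from the top-k singular vectors; the function name and the deterministic selection rule are illustrative assumptions, not the randomized GCUR algorithm itself:

```python
import numpy as np

def cur_decomposition(A, k):
    """Standard CUR: approximate A by C @ U @ R, where C and R are k
    actual columns and rows of A chosen by leverage scores, and
    U = C^+ A R^+ is the small connecting matrix."""
    U_svd, _, Vt = np.linalg.svd(A, full_matrices=False)
    col_scores = np.sum(Vt[:k] ** 2, axis=0)        # column leverage scores
    row_scores = np.sum(U_svd[:, :k] ** 2, axis=1)  # row leverage scores
    cols = np.argsort(-col_scores)[:k]
    rows = np.argsort(-row_scores)[:k]
    C, R = A[:, cols], A[rows, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)   # U = C^+ A R^+
    return C, U, R
```

Because C and R are actual columns and rows, the factors inherit properties of the data (sparsity, nonnegativity, interpretability), which is the usual motivation for CUR over an SVD-based approximation.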

Hyper-differential sensitivity analysis with respect to model discrepancy: mathematics and computation

A general representation of the discrepancy is introduced and a proposed framework is presented which combines the PDE discretization, post-optimality sensitivity operator, adjoint-based derivatives, and a randomized generalized singular value decomposition to enable scalable computation.

A Higher-Order Generalized Singular Value Decomposition for Rank Deficient Matrices

A modification of the HO-GSVD is proposed that allows its application to datasets with m_i < n or rank(A_i) < n, such as are encountered in bioinformatics, neuroscience, control theory, or classification problems.

Enabling hyper-differential sensitivity analysis for ill-posed inverse problems

Inverse problems constrained by partial differential equations (PDEs) play a critical role in model development and calibration. In many applications, there are multiple uncertain parameters in a…

Hyper-differential sensitivity analysis with respect to model discrepancy: Calibration and optimal solution updating

This article introduces a novel approach which uses limited high-fidelity data to calibrate the model discrepancy in a Bayesian framework and propagate it through the optimization problem, providing both an improvement in the optimal solution and a characterization of uncertainty due to the limited accessibility of high-fidelity data.

Enabling and interpreting hyper-differential sensitivity analysis for Bayesian inverse problems

This article proposes using hyper-differential sensitivity analysis (HDSA) to assess the sensitivity of the maximum a posteriori probability (MAP) and the Laplace approximation of the posterior covariance to changes in the auxiliary parameters.

Statistical Properties of the Probabilistic Numeric Linear Solver BayesCG

This work analyses the calibration of BayesCG under the Krylov prior, a probabilistic numeric extension of the Conjugate Gradient method for solving systems of linear equations with symmetric positive definite coefficient matrix, and proposes two test statistics that are necessary but not sufficient for calibration: the Z-statistic and the new S-statistic.

BayesCG As An Uncertainty Aware Version of CG

This work’s CG-based implementation of BayesCG under a structure-exploiting prior distribution represents an "uncertainty-aware" version of CG that consists of CG iterates and posterior covariances that can be propagated to subsequent computations.

A new perspective on parameter study of optimization problems

We provide a new perspective on the study of parameterized optimization problems. Our approach combines methods for post-optimal sensitivity analysis and ordinary differential equations to quantify…

Active Slices for Sliced Stein Discrepancy

This work provides theoretical results stating that the requirement of using optimal slicing directions in the kernelized version of SSD can be relaxed and proposes a fast algorithm for finding such slicing directions based on ideas of active sub-space construction and spectral decomposition.

Randomized Generalized Singular Value Decomposition

This paper uses random projections to capture most of the action of the matrices and proposes randomized algorithms for computing a low-rank approximation of the generalized singular value decomposition of two matrices.

Tikhonov Regularization and Randomized GSVD

A regularization method combining Tikhonov regularization in general form with the truncated GSVD is proposed, which achieves good accuracy with less computation time and memory than classical regularization methods.
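The general-form Tikhonov problem in question is min_x ‖Ax − b‖² + λ²‖Lx‖². A dense normal-equations reference solver (a small-problem sketch under that assumption, not the truncated-GSVD method of the paper) might look like:

```python
import numpy as np

def tikhonov_general(A, b, L, lam):
    """Solve min_x ||A x - b||^2 + lam^2 ||L x||^2 via the normal
    equations (A^T A + lam^2 L^T L) x = A^T b. Suitable only for small
    dense problems; GSVD-based solvers avoid forming A^T A explicitly."""
    lhs = A.T @ A + lam ** 2 * (L.T @ L)
    return np.linalg.solve(lhs, A.T @ b)
```

As λ → 0 this recovers the (possibly unstable) least-squares solution; increasing λ trades data fit for smoothness as measured by L.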

Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions

This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation, and presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions.
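The core range-finder-plus-small-SVD pattern from this framework can be sketched as follows (a minimal version without power iterations; the function name and defaults are illustrative):

```python
import numpy as np

def randomized_svd(A, rank, oversample=10, rng=None):
    """Basic randomized SVD: sketch the range of A with a Gaussian test
    matrix, orthonormalize, then take a small deterministic SVD of the
    projected matrix Q^T A."""
    rng = np.random.default_rng(rng)
    n = A.shape[1]
    Omega = rng.standard_normal((n, min(rank + oversample, n)))
    Q, _ = np.linalg.qr(A @ Omega)              # approximate range basis
    U_hat, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_hat)[:, :rank], s[:rank], Vt[:rank]
```

The expensive step is a handful of matrix-matrix products with A, which is why this template scales to matrices where a full SVD is out of reach.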

Randomized algorithms for large-scale inverse problems with general Tikhonov regularizations

This paper investigates randomized algorithms for solving large-scale linear inverse problems with general Tikhonov regularizations and applies randomized algorithms to reduce large-scale systems of standard form to much smaller-scale systems and seeks their regularized solutions in combination with some popular choice rules for regularization parameters.

Stability Analysis of QR factorization in an Oblique Inner Product

This paper considers the stability of the QR factorization in an oblique inner product and analyzes two algorithms that are based on a factorization of A, converting the problem to the Euclidean case using the Cholesky decomposition and the eigenvalue decomposition, respectively.
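The Cholesky-based route can be sketched as follows (a minimal dense version, assuming B is symmetric positive definite; this illustrates the construction, not the stability analysis):

```python
import numpy as np

def oblique_qr(A, B):
    """QR factorization in the oblique inner product <x, y>_B = x^T B y:
    returns Q with Q^T B Q = I and upper-triangular R with A = Q R."""
    L = np.linalg.cholesky(B)        # B = L L^T, L lower triangular
    Qt, R = np.linalg.qr(L.T @ A)    # Euclidean QR of L^T A
    Q = np.linalg.solve(L.T, Qt)     # Q = L^{-T} Qt, so Q^T B Q = I
    return Q, R
```

The conversion works because QᵀBQ = Qtᵀ L⁻¹ (L Lᵀ) L⁻ᵀ Qt = QtᵀQt = I; the stability question the paper studies is how errors in the Cholesky factor and triangular solves propagate into the B-orthogonality of Q.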

Total variation regularization of the 3-D gravity inverse problem using a randomized generalized singular value decomposition

We present a fast algorithm for the total variation regularization of the $3$-D gravity inverse problem. Through imposition of the total variation regularization, subsurface structures presenting…

Randomized algorithms for generalized Hermitian eigenvalue problems with application to computing Karhunen–Loève expansion

The error analysis shows that the randomized algorithm is most accurate when the generalized singular values of B⁻¹A decay rapidly, and the performance of the algorithm on computing an approximation to the Karhunen–Loève expansion is demonstrated.
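A hedged sketch of this style of randomized solver for the generalized Hermitian eigenvalue problem Av = λBv (not the paper's exact algorithm; Rayleigh-Ritz with a B-orthonormal basis is one standard choice, and the function name is a placeholder):

```python
import numpy as np

def randomized_gen_eigh(A, B, rank, oversample=10, rng=None):
    """Approximate the dominant eigenpairs of A v = lam B v, with A
    symmetric and B symmetric positive definite: sample the range of
    B^{-1} A, B-orthonormalize the basis, then apply Rayleigh-Ritz."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    k = min(rank + oversample, n)
    Y = np.linalg.solve(B, A @ rng.standard_normal((n, k)))
    Q0, _ = np.linalg.qr(Y)                # Euclidean orthonormalization
    C = np.linalg.cholesky(Q0.T @ B @ Q0)  # then B-orthonormalize:
    Q = np.linalg.solve(C, Q0.T).T         # Q = Q0 C^{-T}, Q^T B Q = I
    lam, S = np.linalg.eigh(Q.T @ A @ Q)   # Rayleigh-Ritz on the subspace
    order = np.argsort(lam)[::-1][:rank]
    return lam[order], (Q @ S)[:, order]
```

Consistent with the cited error analysis, this kind of sketch is accurate precisely when the spectrum of B⁻¹A decays rapidly, since the random sample then captures the dominant invariant subspace.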

Towards a Generalized Singular Value Decomposition

We suggest a form for, and give a constructive derivation of, the generalized singular value decomposition of any two matrices having the same number of columns. We outline its desirable…

Randomized Subspace Iteration: Analysis of Canonical Angles and Unitarily Invariant Norms

A. Saibaba, SIAM J. Matrix Anal. Appl., 2019
Three different kinds of bounds for the low-rank approximation in any unitarily invariant norm (including the Schatten-p norm) are derived, which generalizes the bounds for Spectral and Frobenius norms found in the literature.

Rank-Deficient and Discrete Ill-Posed Problems: Numerical Aspects of Linear Inversion

Contents: Preface; Symbols and Acronyms; 1. Setting the Stage (Problems with Ill-Conditioned Matrices, Ill-Posed and Inverse Problems, Prelude to Regularization, Four Test Problems); 2. Decompositions and Other…