Parallel Two-Stage Reduction of a Regular Matrix Pair to Hessenberg-Triangular Form

@inproceedings{Adlerborn2000ParallelTR,
  title={Parallel Two-Stage Reduction of a Regular Matrix Pair to Hessenberg-Triangular Form},
  author={Bj{\"o}rn Adlerborn and Krister Dackland and Bo K{\aa}gstr{\"o}m},
  booktitle={PARA},
  year={2000}
}
A parallel two-stage algorithm for reduction of a regular matrix pair (A,B) to Hessenberg-triangular form (H, T) is presented. Stage one reduces the matrix pair to a block upper Hessenberg-triangular form (Hr, T), where Hr is upper r-Hessenberg with r > 1 subdiagonals and T is upper triangular. In stage two, the desired upper Hessenberg-triangular form is computed using two-sided Givens rotations. Performance results for the ScaLAPACK-style implementations show that the parallel algorithms can… 
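For context, the classical unblocked (one-stage) reduction gives a concrete picture of the target form: B is first made upper triangular by a QR factorization, after which interleaved Givens rotations zero A below its first subdiagonal while keeping B triangular. The NumPy sketch below illustrates only that serial elementwise scheme, not the blocked two-stage parallel algorithm summarized above; the function name hess_tri_reduce is ours.

```python
import numpy as np

def givens(a, b):
    """Return (c, s) such that [[c, s], [-s, c]] @ [a, b] = [r, 0]."""
    r = np.hypot(a, b)
    return (1.0, 0.0) if r == 0.0 else (a / r, b / r)

def hess_tri_reduce(A, B):
    """Unblocked reduction of (A, B) to (H, T) with H upper Hessenberg and
    T upper triangular, via orthogonal Q, Z: H = Q.T @ A @ Z, T = Q.T @ B @ Z."""
    A = np.array(A, dtype=float)
    B = np.array(B, dtype=float)
    n = A.shape[0]
    # Make B upper triangular first: B = Q R, then work with (Q.T @ A, R).
    Q, B = np.linalg.qr(B)
    A = Q.T @ A
    Z = np.eye(n)
    for j in range(n - 2):                    # columns of A to clean up
        for i in range(n - 1, j + 1, -1):     # zero A[i, j] from the bottom up
            # Left rotation on rows (i-1, i) annihilates A[i, j] ...
            c, s = givens(A[i - 1, j], A[i, j])
            G = np.array([[c, s], [-s, c]])
            A[[i - 1, i], :] = G @ A[[i - 1, i], :]
            B[[i - 1, i], :] = G @ B[[i - 1, i], :]
            Q[:, [i - 1, i]] = Q[:, [i - 1, i]] @ G.T
            A[i, j] = 0.0
            # ... and fills in B[i, i-1]; a right rotation on columns
            # (i-1, i) restores the triangular form of B.
            c, s = givens(B[i, i], B[i, i - 1])
            W = np.array([[c, s], [-s, c]])
            A[:, [i - 1, i]] = A[:, [i - 1, i]] @ W
            B[:, [i - 1, i]] = B[:, [i - 1, i]] @ W
            Z[:, [i - 1, i]] = Z[:, [i - 1, i]] @ W
            B[i, i - 1] = 0.0
    return A, B, Q, Z

# Small check: the factors reproduce the original pair and have the right shape.
rng = np.random.default_rng(0)
A0 = rng.standard_normal((6, 6))
B0 = rng.standard_normal((6, 6))
H, T, Q, Z = hess_tri_reduce(A0, B0)
assert np.allclose(Q @ H @ Z.T, A0) and np.allclose(Q @ T @ Z.T, B0)
assert np.allclose(np.tril(H, -2), 0) and np.allclose(np.tril(T, -1), 0)
```

The two-stage approach summarized in the abstract replaces most of these rotations in stage one by blocked orthogonal factorizations, which is what allows level-3 BLAS operations and a ScaLAPACK-style parallel implementation; only the narrower stage-two reduction still relies on Givens rotations.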

Parallel Reduction of a Block Hessenberg-Triangular Matrix Pair to Hessenberg-Triangular Form — Algorithm Design and Performance Results

TLDR
Performance results for the ScaLAPACK-style implementation show that the parallel algorithm can be used to solve large-scale problems effectively.

Parallel and Blocked Algorithms for Reduction of a Regular Matrix Pair to Hessenberg-Triangular and Generalized Schur Forms

TLDR
Algorithm and implementation issues regarding the single-/double-shift QZ algorithm are discussed, and multishift strategies to enhance the performance in blocked as well as in parallel variants of the QZ method are described.

A parallel Schur method for solving continuous-time algebraic Riccati equations

TLDR
It is shown that the Schur method, based on computing the stable invariant subspace of a Hamiltonian matrix, can be parallelized in an efficient and scalable way.

Parallel Solvers for Sylvester-Type Matrix Equations with Applications in Condition Estimation, Part I

Parallel ScaLAPACK-style algorithms for solving eight common standard and generalized Sylvester-type matrix equations and various sign and transposed variants are presented. All algorithms are

RECSY and SCASY Library Software: Recursive Blocked and Parallel Algorithms for Sylvester-Type Matrix Equations with Some Applications

In this contribution, we review state-of-the-art high-performance computing software for solving common standard and generalized continuous-time and discrete-time Sylvester-type matrix equations. The

Contributions to Parallel Algorithms for Sylvester-type Matrix Equations and Periodic Eigenvalue Reordering in Cyclic Matrix Products

TLDR
A direct method for periodic eigenvalue reordering in the periodic real Schur form is presented, extending earlier work on the standard and the generalized eigenvalue problems.

Efficient Reduction from Block Hessenberg Form to Hessenberg Form Using Shared Memory

TLDR
A new cache-efficient algorithm for reduction from block Hessenberg form to Hessenberg form is presented; one level of look-ahead combined with a dynamic load-balancing scheme reduces idle time and allows the use of coarse-grained tasks.

Distributed One-Stage Hessenberg-Triangular Reduction with Wavefront Scheduling

TLDR
A novel parallel formulation of Hessenberg-triangular reduction of a regular matrix pair on distributed memory computers is presented, based on a sequential cache-blocked algorithm ...

I/O Efficient Algorithms for Matrix Computations

TLDR
It is shown that techniques such as rescheduling of computational steps, appropriate choice of the blocking parameters, and incorporation of more matrix-matrix operations can be used to improve the I/O and seek complexities of matrix computations.

Algorithms and Library Software for Periodic and Parallel Eigenvalue Reordering and Sylvester-Type Matrix Equations with Condition Estimation

This Thesis contains contributions in two different but closely related subfields of Scientific and Parallel Computing which arise in the context of various eigenvalue problems: periodic and parallel ...

References

SHOWING 1-10 OF 16 REFERENCES

Reduction of a Regular Matrix Pair (A, B) to Block Hessenberg Triangular Form

TLDR
It is shown how an elementwise algorithm can be reorganized in terms of blocked factorizations and higher level BLAS operations, and several ways to annihilate elements are compared.

A ScaLAPACK-Style Algorithm for Reducing a Regular Matrix Pair to Block Hessenberg-Triangular Form

TLDR
It is shown how a sequential elementwise algorithm can be reorganized in terms of blocked factorizations and matrix-matrix operations to form a parallel algorithm for reduction of a regular matrix pair to block Hessenberg-triangular form.

Blocked algorithms and software for reduction of a regular matrix pair to generalized Schur form

TLDR
A two-stage blocked algorithm for reduction of a regular matrix pair (A, B) to upper Hessenberg-triangular form is presented, along with a blocked variant of the single-diagonal double-shift QZ method for computing the generalized Schur form of (A, B), which outperforms the current LAPACK routines by a factor of 2-5 for sufficiently large problems.

A note on the efficient solution of matrix pencil systems

TLDR
Algorithms for solving matrix pencil systems of linear equations, of the form (A+γB)x=c+γd, are developed and analysed and numerical results are presented which demonstrate the advantages of the new techniques.
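This is where the Hessenberg-triangular form pays off: if (A, B) has been reduced to (H, T) with H upper Hessenberg and T upper triangular, then H + γT is upper Hessenberg for every γ, so after one O(n³) reduction each shifted system costs only O(n²). The sketch below is our own minimal illustration of that Hessenberg solve (Givens elimination of the subdiagonal followed by back substitution), using a synthetic pair in place of a reduced one; the name hessenberg_solve is not from the cited note.

```python
import numpy as np

def hessenberg_solve(M, b):
    """Solve M @ x = b for upper Hessenberg M in O(n^2):
    Givens rotations remove the subdiagonal, then back-substitute."""
    M, b = M.astype(float).copy(), b.astype(float).copy()
    n = M.shape[0]
    for k in range(n - 1):
        r = np.hypot(M[k, k], M[k + 1, k])
        if r == 0.0:
            continue
        c, s = M[k, k] / r, M[k + 1, k] / r
        G = np.array([[c, s], [-s, c]])
        M[[k, k + 1], k:] = G @ M[[k, k + 1], k:]
        b[[k, k + 1]] = G @ b[[k, k + 1]]
    x = np.zeros(n)
    for k in range(n - 1, -1, -1):
        x[k] = (b[k] - M[k, k + 1:] @ x[k + 1:]) / M[k, k]
    return x

# Synthetic Hessenberg-triangular pair standing in for a reduced (H, T).
rng = np.random.default_rng(1)
n = 8
H = np.triu(rng.standard_normal((n, n)), -1)   # upper Hessenberg
T = np.triu(rng.standard_normal((n, n)))       # upper triangular
c, d = rng.standard_normal(n), rng.standard_normal(n)

for gamma in (0.1, 1.0, 10.0):
    # H + gamma*T is still upper Hessenberg, so each solve is O(n^2).
    x = hessenberg_solve(H + gamma * T, c + gamma * d)
    assert np.allclose((H + gamma * T) @ x, c + gamma * d)
```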

An Algorithm for Generalized Matrix Eigenvalue Problems.

A new method, called the $QZ$ algorithm, is presented for the solution of the matrix eigenvalue problem $Ax = \lambda Bx$ with general square matrices A and B. Particular attention is paid to the
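As a present-day point of reference (not part of the cited paper), the QZ algorithm underlies the generalized eigensolvers in LAPACK and SciPy; the snippet below assumes scipy.linalg.qz with complex output, so both generalized Schur factors are triangular and the eigenvalues of Ax = λBx are ratios of their diagonal entries.

```python
import numpy as np
from scipy.linalg import qz

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((5, 5))

# Generalized Schur (QZ) decomposition: A = Q @ AA @ Z^H, B = Q @ BB @ Z^H,
# with AA and BB upper triangular when complex output is requested.
AA, BB, Q, Z = qz(A, B, output='complex')
assert np.allclose(Q @ AA @ Z.conj().T, A)
assert np.allclose(Q @ BB @ Z.conj().T, B)

# Generalized eigenvalues of A x = lambda B x are ratios of the diagonals.
lams = np.diag(AA) / np.diag(BB)
for lam in lams:
    # Each eigenvalue makes A - lambda*B (numerically) singular.
    assert np.linalg.svd(A - lam * B, compute_uv=False)[-1] < 1e-8
```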

A Hierarchical Approach for Performance Analysis of ScaLAPACK-Based Routines Using the Distributed Linear Algebra Machine

TLDR
An hierarchical approach for design of performance models for parallel algorithms in linear algebra based on a parallel machine model and the hierarchical structure of the ScaLAPACK library is presented.

LAPACK Users' Guide, Third Edition

ScaLAPACK Users' Guide

Matrix computations

A Storage-Efficient $WY$ Representation for Products of Householder Transformations

TLDR
This note describes a storage-efficient way to implement the WY representation of Q and shows how the matrix Q can be expressed in the form $Q = I + YTY^{T}$ where $Y \in R^{m \times r}$ and $T \in R^{r \times r}$ with $T$ upper triangular, which requires less storage.
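To make the compact form concrete, the following sketch (our own, following the standard recurrence rather than any particular library routine) accumulates reflectors H_j = I - tau_j v_j v_j^T into Y = [v_1, ..., v_r] and an upper triangular T such that H_1 ... H_r = I + Y T Y^T, then checks the identity against the explicit product; the name compact_wy is ours.

```python
import numpy as np

def compact_wy(V, taus):
    """Given Householder vectors V[:, j] and scalars taus[j] defining
    H_j = I - taus[j] * v_j v_j^T, build upper triangular T with
    H_1 H_2 ... H_r = I + V @ T @ V.T  (compact WY form)."""
    m, r = V.shape
    T = np.zeros((r, r))
    for j in range(r):
        v, tau = V[:, j], taus[j]
        # New column: z_j = -tau_j * T_{j-1} @ (Y_{j-1}^T v_j), diagonal -tau_j.
        T[:j, j] = -tau * (T[:j, :j] @ (V[:, :j].T @ v))
        T[j, j] = -tau
    return T

# Demo with random reflectors: tau = 2 / (v . v) makes each H_j orthogonal.
rng = np.random.default_rng(3)
m, r = 8, 3
V = rng.standard_normal((m, r))
taus = np.array([2.0 / (V[:, j] @ V[:, j]) for j in range(r)])

T = compact_wy(V, taus)
Q_compact = np.eye(m) + V @ T @ V.T

Q_explicit = np.eye(m)
for j in range(r):
    v = V[:, j]
    Q_explicit = Q_explicit @ (np.eye(m) - taus[j] * np.outer(v, v))

assert np.allclose(Q_compact, Q_explicit)                 # same product of reflectors
assert np.allclose(Q_compact.T @ Q_compact, np.eye(m))    # and it is orthogonal
```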