Time and space efficient generators for quasiseparable matrices

@article{Pernet2017TimeAS,
  title={Time and space efficient generators for quasiseparable matrices},
  author={Cl{\'e}ment Pernet and Arne Storjohann},
  journal={J. Symb. Comput.},
  year={2017},
  volume={85},
  pages={224-246}
}


A Two-pronged Progress in Structured Dense Matrix Vector Multiplication

This work unifies, generalizes, and simplifies existing state-of-the-art results in structured matrix-vector multiplication, and shows how applications in areas such as multipoint evaluations of multivariate polynomials can be reduced to problems involving low recurrence width matrices.

LDU factorization

This work proposes a generalization of the LEU algorithm to the case of a commutative domain and its field of quotients, and decomposes a matrix over the commutative domain into a product of three matrices, in which the matrices L and U belong to the commutative domain and the elements of the weighted truncated permutation matrix D are inverses of products of certain pairs of minors.
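
A rough, self-contained illustration of the kind of decomposition described above (a minimal sketch over the integers, not the LEU algorithm of the paper itself; all function and variable names are mine): when every leading principal minor d_k of A is nonzero, A factors as L·D·U where the entries of L and U are minors of A, hence stay in the commutative domain, and D is diagonal with entries 1/(d_{k-1}·d_k), i.e. inverses of products of pairs of minors.

```python
from fractions import Fraction
from itertools import permutations

def det(M):
    """Exact determinant by Leibniz expansion (adequate for tiny matrices)."""
    n = len(M)
    total = Fraction(0)
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = Fraction(1)
        for i in range(n):
            prod *= M[i][perm[i]]
        total += sign * prod
    return total

def minor(A, rows, cols):
    """Determinant of the submatrix of A with the given rows and columns."""
    return det([[A[i][j] for j in cols] for i in rows])

def ldu_minors(A):
    """A = L*D*U with L, U built from minors of A (so they stay in the domain)
    and D diagonal with entries 1/(d_k * d_{k+1}), where d_k are the leading
    principal minors (d_0 = 1).  Assumes all leading principal minors are nonzero."""
    n = len(A)
    d = [Fraction(1)] + [minor(A, range(k + 1), range(k + 1)) for k in range(n)]
    L = [[minor(A, list(range(j)) + [i], range(j + 1)) if i >= j else Fraction(0)
          for j in range(n)] for i in range(n)]
    U = [[minor(A, range(i + 1), list(range(i)) + [j]) if j >= i else Fraction(0)
          for j in range(n)] for i in range(n)]
    D = [[Fraction(1) / (d[i] * d[i + 1]) if i == j else Fraction(0)
          for j in range(n)] for i in range(n)]
    return L, D, U

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

if __name__ == "__main__":
    A = [[Fraction(x) for x in row] for row in [[2, 3, 1], [4, 7, 5], [6, 8, 9]]]
    L, D, U = ldu_minors(A)
    assert matmul(matmul(L, D), U) == A   # exact reconstruction
```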

Sparse matrices in computer algebra when using distributed memory: theory and applications [preprint]

J. Dongarra draws attention to several difficult challenges of managing computations on a cluster with distributed memory for algorithms on sparse matrices, and considers the class of block-recursive matrix algorithms, the most famous of which are standard and Strassen's block matrix multiplication and Schur's and Strassen's block-matrix inversion.
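
To make the block-recursive style concrete, here is a minimal sketch (my own illustrative code, not taken from the cited works) of the standard block matrix multiplication mentioned above, written as the 2x2 recursion that such algorithms map onto the nodes of a distributed-memory cluster:

```python
import numpy as np

def block_mul(A, B, cutoff=64):
    """Standard block-recursive matrix multiplication: split both operands into
    2x2 blocks and recurse; below `cutoff` fall back to the library product.
    For brevity, assumes square matrices whose size is a power of two."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    C = np.empty_like(A)
    C[:h, :h] = block_mul(A11, B11, cutoff) + block_mul(A12, B21, cutoff)
    C[:h, h:] = block_mul(A11, B12, cutoff) + block_mul(A12, B22, cutoff)
    C[h:, :h] = block_mul(A21, B11, cutoff) + block_mul(A22, B21, cutoff)
    C[h:, h:] = block_mul(A21, B12, cutoff) + block_mul(A22, B22, cutoff)
    return C

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A, B = rng.standard_normal((256, 256)), rng.standard_normal((256, 256))
    assert np.allclose(block_mul(A, B), A @ B)
```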

Improving the Complexity of Block Low-Rank Factorizations with Fast Matrix Arithmetic

A new BLR factorization algorithm is devised that, by recasting the operations so as to work on intermediate matrices of larger size, can exploit more efficiently fast matrix arithmetic.

Exploiting Fast Matrix Arithmetic in Block Low-Rank Factorizations

A new BLR factorization algorithm is devised that, by recasting the operations so as to work on intermediate matrices of larger size, can exploit more efficiently fast matrix arithmetic.
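
For context, the BLR format referred to in the two entries above tiles the matrix, keeps diagonal tiles dense, and stores each off-diagonal tile in low-rank form. Below is a minimal sketch of that storage scheme and the corresponding tile-by-tile matrix-vector product (truncated-SVD compression; my own illustrative code, not the factorization algorithms of these papers):

```python
import numpy as np

def blr_compress(A, block=64, tol=1e-8):
    """Tile A; keep diagonal tiles dense and store each off-diagonal tile as a
    truncated SVD factorization X @ Y whose rank is chosen from `tol`."""
    n = A.shape[0]
    tiles = {}
    for i in range(0, n, block):
        for j in range(0, n, block):
            T = A[i:i + block, j:j + block]
            if i == j:
                tiles[(i, j)] = ("dense", T.copy())
            else:
                U, s, Vt = np.linalg.svd(T, full_matrices=False)
                r = int(np.sum(s > tol * s[0])) if s.size and s[0] > 0 else 0
                tiles[(i, j)] = ("lowrank", U[:, :r] * s[:r], Vt[:r, :])
    return tiles

def blr_matvec(tiles, x, n, block=64):
    """Multiply a BLR-compressed matrix by a vector, tile by tile."""
    y = np.zeros(n)
    for (i, j), t in tiles.items():
        xj = x[j:j + block]
        y[i:i + block] += t[1] @ xj if t[0] == "dense" else t[1] @ (t[2] @ xj)
    return y

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 256
    u, v = rng.standard_normal((n, 3)), rng.standard_normal((n, 3))
    A = np.diag(rng.standard_normal(n)) + u @ v.T   # off-diagonal tiles have rank <= 3
    x = rng.standard_normal(n)
    assert np.allclose(blr_matvec(blr_compress(A), x, n), A @ x)
```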

Recursive Matrix Algorithms in Commutative Domain for Cluster with Distributed Memory

This class of algorithms allows efficient parallel programs to be obtained on clusters with distributed memory, and the scalability of these programs is demonstrated experimentally.

Calculation Managing on a Cluster with Distributed Memory

A scheme with multidispatching for the management of such parallel computing processes on a cluster with distributed memory is suggested, and the results of experiments on the JSC RAS cluster MVS-10P are presented.

Recursive Matrix Algorithms, Distributed Dynamic Control, Scaling, Stability

G. Malaschonok, Computer Science and Information Technologies (CSIT), 2019
The report is devoted to the concept of creating block-recursive matrix algorithms for computing on a super-computer with distributed memory and dynamic decentralized control.

Twin-width V: linear minors, modular counting, and matrix multiplication

The notions of parity and linear minors of a matrix, obtained by iteratively replacing consecutive rows or consecutive columns with a linear combination of them, are introduced, and an ad hoc algorithm to efficiently multiply two matrices of bounded twin-width is presented.

References


Computing with Quasiseparable Matrices

This paper shows the connection between the notion of quasiseparability and the rank profile matrix invariant presented in [Dumas & al. ISSAC'15], and proposes an algorithm computing the quasiseparable orders (r_L, r_U) in time O(n^2 s^(ω-2)), where s = max(r_L, r_U) and ω is the exponent of matrix multiplication.
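
A minimal sketch of what these quasiseparable orders measure, assuming the standard definition as the maximal ranks of the submatrices lying strictly below and strictly above the main diagonal (a naive numerical-rank version for illustration only; the paper works over an exact field and attains the stated complexity):

```python
import numpy as np

def quasiseparable_orders(A, tol=1e-10):
    """Lower and upper quasiseparable orders of A:
       r_L = max_k rank(A[k:, :k]),   r_U = max_k rank(A[:k, k:]).
    Naive version: one rank computation per off-diagonal submatrix."""
    n = A.shape[0]
    rL = max((np.linalg.matrix_rank(A[k:, :k], tol=tol) for k in range(1, n)), default=0)
    rU = max((np.linalg.matrix_rank(A[:k, k:], tol=tol) for k in range(1, n)), default=0)
    return rL, rU

if __name__ == "__main__":
    # A tridiagonal matrix is (1, 1)-quasiseparable.
    n = 8
    T = (np.diag(np.arange(1.0, n + 1))
         + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))
    print(quasiseparable_orders(T))   # expected: (1, 1)
```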

Algorithms to solve hierarchically semi-separable systems

A survey of the main results is given, including a proof of the formulas for LU factorization that were given in the thesis of Lyon, the derivation of an explicit algorithm for ULV factorization and the related Moore-Penrose inversion, a complexity analysis, and a short account of the connection between the HSS and the SSS (sequentially semiseparable) representations.

Fast algorithms for hierarchically semiseparable matrices

This paper generalizes the hierarchically semiseparable (HSS) matrix representations and proposes some fast algorithms for HSS matrices that are useful in developing fast‐structured numerical methods for large discretized PDEs, integral equations, eigenvalue problems, etc.

A bibliography on semiseparable matrices

Currently there is a growing interest in semiseparable matrices and generalized semiseparable matrices. To give an appreciation of the historical evolution of this concept, we present in this paper a bibliography of the relevant literature.

Rank-profile revealing Gaussian elimination and the CUP matrix decomposition

A Givens-Weight Representation for Rank Structured Matrices

In this paper we introduce a Givens-weight representation for rank structured matrices, where the rank structure is defined by certain low-rank submatrices starting from the bottom left or upper right corner of the matrix.

Fast computation of the rank profile matrix and the generalized Bruhat decomposition

Some Fast Algorithms for Sequentially Semiseparable Representations

An extended sequentially semiseparable (SSS) representation derived from time-varying system theory is used to capture the low rank of the off-diagonal blocks of a matrix, both for the purposes of efficient computation and for sufficient descriptive richness to allow backward stability in the computations.
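
For illustration, here is a minimal sketch of the SSS generator form and of the linear-time matrix-vector product it enables; the generator names D, U, W, V, P, R, Q follow the usual SSS convention, but the code is my own illustration rather than the algorithms of the cited paper:

```python
import numpy as np

def sss_matvec(D, U, W, V, P, R, Q, x):
    """Multiply an SSS matrix by a block vector x in time linear in the number
    of blocks.  Block (i, j) of the matrix is
        D[i]                                   if i == j,
        U[i] @ W[i+1] @ ... @ W[j-1] @ V[j].T  if i < j,
        P[i] @ R[i-1] @ ... @ R[j+1] @ Q[j].T  if i > j.
    The upper part is accumulated by a backward recurrence, the lower part by a
    forward recurrence."""
    nb = len(D)
    hs, gs = [None] * nb, [None] * nb
    hs[nb - 1] = np.zeros(V[0].shape[1])
    for i in range(nb - 2, -1, -1):
        hs[i] = V[i + 1].T @ x[i + 1] + W[i + 1] @ hs[i + 1]
    gs[0] = np.zeros(Q[0].shape[1])
    for i in range(1, nb):
        gs[i] = Q[i - 1].T @ x[i - 1] + R[i - 1] @ gs[i - 1]
    return [D[i] @ x[i] + U[i] @ hs[i] + P[i] @ gs[i] for i in range(nb)]

def sss_dense(D, U, W, V, P, R, Q):
    """Assemble the dense matrix from its SSS generators (for checking only)."""
    nb = len(D)
    rows = []
    for i in range(nb):
        row = []
        for j in range(nb):
            if i == j:
                B = D[i]
            elif i < j:
                B = U[i]
                for k in range(i + 1, j):
                    B = B @ W[k]
                B = B @ V[j].T
            else:
                B = P[i]
                for k in range(i - 1, j, -1):
                    B = B @ R[k]
                B = B @ Q[j].T
            row.append(B)
        rows.append(np.hstack(row))
    return np.vstack(rows)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    nb, m, ru, rl = 5, 3, 2, 2
    D = [rng.standard_normal((m, m)) for _ in range(nb)]
    U = [rng.standard_normal((m, ru)) for _ in range(nb)]
    V = [rng.standard_normal((m, ru)) for _ in range(nb)]
    W = [rng.standard_normal((ru, ru)) for _ in range(nb)]
    P = [rng.standard_normal((m, rl)) for _ in range(nb)]
    Q = [rng.standard_normal((m, rl)) for _ in range(nb)]
    R = [rng.standard_normal((rl, rl)) for _ in range(nb)]
    x = [rng.standard_normal(m) for _ in range(nb)]
    y = np.concatenate(sss_matvec(D, U, W, V, P, R, Q, x))
    assert np.allclose(y, sss_dense(D, U, W, V, P, R, Q) @ np.concatenate(x))
```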

Displacement ranks of matrices and linear equations