On the complexity of polynomial matrix computations

@inproceedings{Giorgi2003OnTC,
  title={On the complexity of polynomial matrix computations},
  author={Pascal Giorgi and Claude-Pierre Jeannerod and Gilles Villard},
  booktitle={ISSAC '03},
  year={2003}
}
We study the link between the complexity of polynomial matrix multiplication and the complexity of solving other basic linear algebra problems on polynomial matrices. By polynomial matrices we mean n × n matrices in K[x] of degree bounded by d, with K a commutative field. Under the straight-line program model we show that multiplication is reducible to the problem of computing the coefficient of degree d of the determinant. Conversely, we… 
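
For illustration only, here is a minimal Python sketch (not part of the paper, and not the reductions it studies) of the basic object in question: multiplying two n × n polynomial matrices of degree at most d, representing each entry by its coefficient vector and using convolution for the entry-wise products.

```python
import numpy as np

def polymatmul(A, B):
    """Product of two polynomial matrices stored as coefficient tensors.

    A has shape (n, n, dA + 1) and B has shape (n, n, dB + 1): entry (i, j)
    is the coefficient vector of a polynomial of degree at most dA (resp. dB).
    The base field is taken as rationals/floats purely for illustration.
    """
    n, _, dA1 = A.shape
    _, _, dB1 = B.shape
    C = np.zeros((n, n, dA1 + dB1 - 1))
    for i in range(n):
        for j in range(n):
            for k in range(n):
                # entry-wise polynomial product via convolution of coefficients
                C[i, j] += np.convolve(A[i, k], B[k, j])
    return C

# 2 x 2 example of degree 1: (I + x*N) * (I - x*N) = I when N*N = 0
N = np.array([[0.0, 1.0], [0.0, 0.0]])
A = np.stack([np.eye(2), N], axis=2)    # I + x*N
B = np.stack([np.eye(2), -N], axis=2)   # I - x*N
print(polymatmul(A, B))                 # constant term I, higher coefficients zero
```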

Computing column bases of polynomial matrices

This paper presents a deterministic algorithm for the computation of a column basis of an m × n input matrix of univariate polynomials, and shows that the average column degree is bounded by the commonly used matrix degree, which is also the maximum column degree of the input matrix.

Computing Hermite forms of polynomial matrices

This paper presents a new algorithm for computing the Hermite form of a polynomial matrix that is both softly linear in the degree d and softly cubic in the dimension n, and is randomized of the Las Vegas type.
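
As a reminder of what the output looks like, the following sympy sketch (my own illustration; hermite_2x2 is a hypothetical helper, not the paper's algorithm) puts a nonsingular 2 × 2 matrix over Q[x] into Hermite form with a single extended-gcd row transformation. It carries none of the softly-linear/softly-cubic cost guarantees of the paper.

```python
import sympy as sp

x = sp.symbols('x')

def hermite_2x2(F):
    """Hermite form of a nonsingular 2x2 matrix over Q[x] (didactic sketch).

    One extended-gcd row transformation triangularizes F, then the diagonal
    is made monic and the off-diagonal entry is reduced.
    """
    f, h = F[0, 0], F[1, 0]
    s, t, g = sp.gcdex(f, h, x)                     # s*f + t*h = g = gcd(f, h)
    U = sp.Matrix([[s, t],
                   [-sp.quo(h, g, x), sp.quo(f, g, x)]])   # unimodular: det(U) = 1
    H = (U * F).applyfunc(sp.expand)                # upper triangular: H[1, 0] == 0
    for i in range(2):                              # make diagonal entries monic
        H[i, :] = (H[i, :] / sp.LC(H[i, i], x)).applyfunc(sp.expand)
    q, _ = sp.div(H[0, 1], H[1, 1], x)              # deg H[0,1] < deg H[1,1]
    H[0, :] = (H[0, :] - q * H[1, :]).applyfunc(sp.expand)
    return H

F = sp.Matrix([[x**2 - 1, x],
               [x - 1,    1]])
print(hermite_2x2(F))    # Matrix([[x - 1, 0], [0, 1]])
```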

Unimodular completion of polynomial matrices

The algorithm computes a unimodular completion for a right cofactor of a column basis of F or, equivalently, computes a completion that preserves the generalized determinant.

Computing minimal nullspace bases

A deterministic algorithm for the computation of a minimal nullspace basis of an input matrix of univariate polynomials over a field K with m ≤ n.
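
As a hedged illustration of the object being computed: sympy can produce a (generally non-minimal) polynomial nullspace basis by working over the fraction field Q(x) and clearing denominators. The paper's contribution is computing a basis of minimal degree, deterministically and within the stated cost bound; the sketch below only shows the object, not that algorithm.

```python
import sympy as sp

x = sp.symbols('x')

# A 2 x 3 polynomial matrix F over Q[x]; we want polynomial vectors v with F*v = 0.
F = sp.Matrix([[1, x, x**2],
               [x, x**2, x**3 + 1]])

basis = []
for v in F.nullspace():                                  # basis over the fraction field Q(x)
    den = sp.lcm([sp.fraction(sp.cancel(e))[1] for e in v])
    basis.append((den * v).applyfunc(sp.expand))         # clear denominators

for v in basis:
    assert (F * v).applyfunc(sp.simplify) == sp.zeros(2, 1)
print(basis)    # [Matrix([[-x], [1], [0]])]
```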

Exact computations on polynomial and integer matrices

One may consider that the algebraic complexity of basic linear algebra over an abstract field K is well known. Indeed, if ω is the exponent of matrix multiplication over K, then for instance

Computing the rank and a small nullspace basis of a polynomial matrix

A rank and nullspace algorithm using about the same number of operations as for multiplying two matrices of dimension n and degree d; the soft-O notation O~ indicates some missing logarithmic factors.
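
For intuition only, the rank of a polynomial matrix can be checked Monte Carlo style by evaluating at a random point: the rank can only drop at the finitely many points where certain minors vanish, so a random evaluation gives the generic rank with high probability. This quick sanity check (my own sketch) is not the algorithm of the paper.

```python
import random
import sympy as sp

x = sp.symbols('x')
M = sp.Matrix([[x, x**2, x + 1],
               [1, x,    1    ],
               [x, x**2, x + 1]])   # third row equals the first, so the rank is 2

x0 = random.randint(1, 10**6)
print(M.subs(x, x0).rank())   # 2 with high probability (Monte Carlo check)
print(M.rank())               # exact symbolic rank, for comparison
```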

Efficient computation of order bases

The algorithm extends earlier work of Storjohann, whose method can be used to find a subset of an order basis that is within a specified degree bound δ using O~(MM(n, δ)) field operations for δ ≥ ⌈mσ/n⌉, where the input matrix is m × n and of order σ.

Fast Computation of Shifted Popov Forms of Polynomial Matrices via Systems of Modular Polynomial Equations

We give a Las Vegas algorithm which computes the shifted Popov form of an m × m nonsingular polynomial matrix of degree d in expected O~(m^ω d) field operations, where ω is the exponent of matrix multiplication.

The M4RIE library for dense linear algebra over small fields with even characteristic

A specialisation of precomputation tables to F2^e, called Newton-John tables in this work, to avoid scalar multiplications in Gaussian elimination and matrix multiplication, and an efficient implementation of Karatsuba-style multiplication for matrices over extension fields of F2.
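
To make the Karatsuba idea concrete: writing a matrix over F4 = F2[a]/(a² + a + 1) as A0 + a·A1 with A0, A1 over F2, the product needs only three F2 matrix products instead of four. The numpy sketch below is my own illustration of that identity, not M4RIE's bit-sliced implementation.

```python
import numpy as np

def mul_gf2(A, B):
    """Matrix product over F2 (entries 0/1 stored in integer numpy arrays)."""
    return (A @ B) % 2

def mul_gf4(A0, A1, B0, B1):
    """Karatsuba-style product of F4 matrices written as A0 + a*A1, B0 + a*B1."""
    P0 = mul_gf2(A0, B0)
    P1 = mul_gf2(A1, B1)
    P2 = mul_gf2((A0 + A1) % 2, (B0 + B1) % 2)
    C0 = (P0 + P1) % 2          # constant part, using a^2 = a + 1
    C1 = (P2 + P0) % 2          # coefficient of a
    return C0, C1

rng = np.random.default_rng(1)
A0, A1, B0, B1 = (rng.integers(0, 2, (4, 4)) for _ in range(4))
C0, C1 = mul_gf4(A0, A1, B0, B1)

# schoolbook check with four F2 products
D0 = (mul_gf2(A0, B0) + mul_gf2(A1, B1)) % 2
D1 = (mul_gf2(A0, B1) + mul_gf2(A1, B0) + mul_gf2(A1, B1)) % 2
assert np.array_equal(C0, D0) and np.array_equal(C1, D1)
```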
...

References


Fast computation of linear generators for matrix sequences and application to the block Wiedemann algorithm

In this paper we describe how the half-gcd algorithm can be adapted in order to speed up the sequential stage of Coppersmith's block Wiedemann algorithm for solving large sparse linear systems over
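
For orientation: in the scalar (non-blocked) Wiedemann setting, the sequential stage amounts to finding the minimal linear generator of a sequence, classically done with the Berlekamp-Massey algorithm; the paper above speeds up the matrix-sequence version of this step with a half-gcd approach. A minimal scalar sketch over a prime field GF(p), written for this note:

```python
def berlekamp_massey(s, p):
    """Minimal linear generator of the scalar sequence s over GF(p).

    Returns (c, L) with c[0] = 1 and sum_i c[i]*s[n-i] == 0 (mod p) for all n >= L.
    """
    C, B = [1], [1]          # current and previous generators
    L, m, b = 0, 1, 1        # current length, shift, last nonzero discrepancy
    for n in range(len(s)):
        # discrepancy of C at position n
        d = s[n] % p
        for i in range(1, L + 1):
            d = (d + C[i] * s[n - i]) % p
        if d == 0:
            m += 1
            continue
        coef = d * pow(b, p - 2, p) % p          # d / b in GF(p)
        if len(B) + m > len(C):                  # pad C so C -= coef * x^m * B fits
            C = C + [0] * (len(B) + m - len(C))
        T = C[:] if 2 * L <= n else None         # keep the old C if the length grows
        for i in range(len(B)):
            C[i + m] = (C[i + m] - coef * B[i]) % p
        if T is not None:
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return C[:L + 1], L

# Fibonacci modulo 7: the generator 1 - x - x^2 is recovered as [1, 6, 6]
fib = [0, 1]
for _ in range(12):
    fib.append((fib[-1] + fib[-2]) % 7)
print(berlekamp_massey(fib, 7))   # ([1, 6, 6], 2)
```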

On fast multiplication of polynomials over arbitrary algebras

This paper generalizes the well-known Schönhage-Strassen algorithm for multiplying large integers to an algorithm for multiplying polynomials with coefficients from an arbitrary, not necessarily commutative, not necessarily associative, algebra A, and obtains a method not requiring division that is valid for any algebra.

Computing Popov and Hermite forms of polynomial matrices

These results are obtained by applying, in the matrix case, the techniques used for the scalar gcd of polynomials to the Hermite normal form.

Gaussian elimination is not optimal

Below we will give an algorithm which computes the coefficients of the product of two square matrices A and B of order n from the coefficients of A and B with less than 4.7·n^(log2 7) arithmetical operations.
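
The recursion behind that bound, in a short Python sketch (didactic only; it assumes the dimension is a power of two and uses a trivial cutoff, unlike tuned implementations):

```python
import numpy as np

def strassen(A, B):
    """Strassen's recursive matrix product: 7 multiplications per level."""
    n = A.shape[0]
    if n <= 2:                     # cutoff: classical product
        return A @ B
    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    C = np.empty_like(A)
    C[:k, :k] = M1 + M4 - M5 + M7
    C[:k, k:] = M3 + M5
    C[k:, :k] = M2 + M4
    C[k:, k:] = M1 - M2 + M3 + M6
    return C

rng = np.random.default_rng(0)
A = rng.integers(0, 10, (8, 8))
B = rng.integers(0, 10, (8, 8))
assert np.array_equal(strassen(A, B), A @ B)
```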

Matrix-free linear system solving and applications to symbolic computation

The Black Box Berlekamp algorithm, for the factorization of high-degree polynomials over finite fields, is built on a matrix-free linear system solver based upon the block Wiedemann algorithm; it is also proved that a random Toeplitz matrix is non-singular with probability 1 − 1/q.

Solving homogeneous linear equations over GF (2) via block Wiedemann algorithm

A method of solving large sparse systems of homogeneous linear equations over GF(2), the field with two elements, is proposed; an algorithm due to Wiedemann is modified, and the resulting method is competitive with structured Gaussian elimination in terms of time and has much lower space requirements.

Black box linear algebra with the linbox library

This dissertation discusses preconditioners based on Beneš networks to localize the linear independence of a black box matrix, and introduces a technique that uses determinantal divisors to find preconditioners that ensure the cyclicity of nonzero eigenvalues.

High-order lifting and integrality certification

Triangular Factorization and Inversion by Fast Matrix Multiplication

The fast matrix multiplication algorithm by Strassen is used to obtain the triangular factorization of a permutation of any nonsingular matrix of order n, and hence its inverse, in O(n^(log2 7)) operations.