On the complexity of polynomial matrix computations

@inproceedings{Giorgi2003OnTC,
  title={On the complexity of polynomial matrix computations},
  author={Pascal Giorgi and Claude-Pierre Jeannerod and Gilles Villard},
  booktitle={ISSAC '03},
  year={2003}
}
We study the link between the complexity of polynomial matrix multiplication and the complexity of solving other basic linear algebra problems on polynomial matrices. By polynomial matrices we mean n × n matrices in K[x] of degree bounded by d, with K a commutative field. Under the straight-line program model we show that multiplication is reducible to the problem of computing the coefficient of degree d of the determinant. Conversely, we…
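To make the setting concrete, here is a minimal Python sketch (not from the paper) of the objects the abstract works with: n × n matrices over K[x] of degree at most d, with K taken to be a small prime field GF(p) purely for illustration. All names and parameter choices (poly_mul, polymat_mul, p, n, d) are illustrative assumptions.

    import random

    p = 97          # a small prime, so K = GF(97) -- an illustrative choice
    n, d = 3, 4     # matrix dimension and degree bound

    def poly_mul(a, b):
        """Schoolbook product of two polynomials given as coefficient lists over GF(p)."""
        c = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                c[i + j] = (c[i + j] + ai * bj) % p
        return c

    def poly_add(a, b):
        """Sum of two coefficient lists over GF(p)."""
        m = max(len(a), len(b))
        a = a + [0] * (m - len(a))
        b = b + [0] * (m - len(b))
        return [(x + y) % p for x, y in zip(a, b)]

    def polymat_mul(A, B):
        """Schoolbook product of two square polynomial matrices.

        This uses O(n^3) products of degree-d polynomials; relating the true
        cost of this operation to problems such as extracting the degree-d
        coefficient of the determinant is what the paper studies.
        """
        size = len(A)
        C = [[[0] for _ in range(size)] for _ in range(size)]
        for i in range(size):
            for j in range(size):
                for k in range(size):
                    C[i][j] = poly_add(C[i][j], poly_mul(A[i][k], B[k][j]))
        return C

    # Random n x n matrices of degree at most d over GF(p); entries of the
    # product have degree at most 2d.
    A = [[[random.randrange(p) for _ in range(d + 1)] for _ in range(n)] for _ in range(n)]
    B = [[[random.randrange(p) for _ in range(d + 1)] for _ in range(n)] for _ in range(n)]
    C = polymat_mul(A, B)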
Computing column bases of polynomial matrices
TLDR
This paper presents a deterministic algorithm for the computation of a column basis of an m × n input matrix of univariate polynomials, and shows that the average column degree is bounded by the commonly used matrix degree, which is also the maximum column degree of the input matrix.
Computing hermite forms of polynomial matrices
TLDR
This paper presents a new algorithm for computing the Hermite form of a polynomial matrix that is both softly linear in the degree d and softly cubic in the dimension n, and is randomized of the Las Vegas type.
Unimodular completion of polynomial matrices
TLDR
The algorithm computes a unimodular completion for a right cofactor of a column basis of F or, equivalently, computes a completion that preserves the generalized determinant.
Computing minimal nullspace bases
TLDR
A deterministic algorithm is presented for the computation of a minimal nullspace basis of an m × n input matrix of univariate polynomials over a field K, with m ≤ n.
Exact computations on polynomial and integer matrices
One may consider that the algebraic complexity of basic linear algebra over an abstract field K is well known. Indeed, if ω is the exponent of matrix multiplication over K, then for instance…
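The snippet above is cut off, but the standard facts it alludes to can be recalled as follows; this is a hedged summary of classical results, not a quotation from the reference:

    % If two n x n matrices over an abstract field K can be multiplied in
    % O(n^omega) operations, then, for instance, the determinant and the
    % inverse of a nonsingular matrix can be computed within the same
    % asymptotic bound (up to constant or logarithmic factors, depending
    % on the variant of the reduction).
    \mathrm{MM}(n) = O(n^{\omega})
      \;\Longrightarrow\;
      \det A \ \text{and}\ A^{-1} \ \text{computable in}\ O(n^{\omega}) \ \text{operations in } K.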
Computing the rank and a small nullspace basis of a polynomial matrix
TLDR
A rank and nullspace algorithm is given that uses about the same number of operations as for multiplying two matrices of dimension n and degree d; the soft-O notation O~ indicates some missing logarithmic factors.
Lattice Compression of Polynomial Matrices
  • Chao Li
  • Computer Science, Mathematics
  • 2007
TLDR
This thesis proves that there is a positive probability that L(A) = L(AB) with k(s+1) = Θ(log md), and designs a competitive probabilistic lattice compression algorithm of the Las Vegas type that has a positive probability of success on any input and requires O~(nm^(θ−1)B(d)) field operations.
Recent progress in linear algebra and lattice basis reduction
  • G. Villard
  • Computer Science, Mathematics
  • ISSAC '11
  • 2011
TLDR
This talk introduces basic tools for understanding how to generalize the Lehmer and Knuth-Schönhage gcd algorithms for basis reduction, and considers bases given by square matrices over K[x] or Z, with the notion of reduced form and LLL reduction.
…

References

SHOWING 1-10 OF 30 REFERENCES
Fast computation of linear generators for matrix sequences and application to the block Wiedemann algorithm
In this paper we describe how the half-gcd algorithm can be adapted in order to speed up the sequential stage of Coppersmith's block Wiedemann algorithm for solving large sparse linear systems over…
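As a much simplified illustration of what a "linear generator" of a sequence is, the scalar analogue of this sequential stage is the classical Berlekamp-Massey algorithm, which finds a minimal linear recurrence for a scalar sequence; the matrix/block version accelerated via half-gcd generalizes this. The sketch below works over a prime field GF(p) and uses illustrative names; it is the textbook scalar algorithm, not the paper's matrix-sequence method.

    def berlekamp_massey(s, p):
        """Connection polynomial C = [1, c1, ..., cL] of the sequence s over GF(p).

        With enough terms of s, for every i >= L:
            s[i] + c1*s[i-1] + ... + cL*s[i-L] == 0 (mod p).
        p must be prime (inverses are taken via Fermat's little theorem).
        """
        C, B = [1], [1]        # current and previous connection polynomials
        L, m, b = 0, 1, 1
        for i in range(len(s)):
            # Discrepancy between the sequence and the current recurrence.
            delta = s[i] % p
            for j in range(1, L + 1):
                delta = (delta + C[j] * s[i - j]) % p
            if delta == 0:
                m += 1
                continue
            coef = delta * pow(b, p - 2, p) % p
            if 2 * L <= i:
                T = C[:]                                   # copy before updating
                C = C + [0] * max(0, len(B) + m - len(C))  # make room for x^m * B(x)
                for j, bj in enumerate(B):                 # C(x) -= coef * x^m * B(x)
                    C[j + m] = (C[j + m] - coef * bj) % p
                L, B, b, m = i + 1 - L, T, delta, 1
            else:
                C = C + [0] * max(0, len(B) + m - len(C))
                for j, bj in enumerate(B):                 # C(x) -= coef * x^m * B(x)
                    C[j + m] = (C[j + m] - coef * bj) % p
                m += 1
        return C

    # Example: the Fibonacci sequence mod 101 satisfies s[i] = s[i-1] + s[i-2],
    # i.e. connection polynomial 1 - x - x^2, encoded as [1, 100, 100] mod 101.
    seq = [0, 1]
    for _ in range(10):
        seq.append((seq[-1] + seq[-2]) % 101)
    print(berlekamp_massey(seq, 101))   # -> [1, 100, 100]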
On fast multiplication of polynomials over arbitrary algebras
TLDR
This paper generalizes the well-known Schönhage-Strassen algorithm for multiplying large integers to an algorithm for multiplying polynomials with coefficients from an arbitrary, not necessarily commutative, not necessarily associative, algebra A, and obtains a method not requiring division that is valid for any algebra.
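The generalized Schönhage-Strassen method itself is FFT-based and beyond a short sketch; as a simpler illustration of the "no division" requirement, the Karatsuba scheme below also multiplies polynomials using only additions, subtractions and multiplications of coefficients, and so runs over any coefficient ring (commutativity is not needed since left and right factors are never swapped). This is an illustrative substitute, not the algorithm of the paper; coefficients are Python ints here, and the literal 0 stands for the ring's zero.

    def karatsuba_poly_mul(a, b):
        """Product of polynomials a, b given as coefficient lists (lowest degree first).

        Only +, -, * are applied to coefficients, so no division is required.
        """
        if not a or not b:
            return []
        n = max(len(a), len(b))
        if n == 1:
            return [a[0] * b[0]]
        if n % 2:                      # pad to a common even length
            n += 1
        a = a + [0] * (n - len(a))
        b = b + [0] * (n - len(b))
        h = n // 2
        a0, a1 = a[:h], a[h:]          # a = a0 + x^h * a1
        b0, b1 = b[:h], b[h:]
        p0 = karatsuba_poly_mul(a0, b0)                       # a0*b0
        p2 = karatsuba_poly_mul(a1, b1)                       # a1*b1
        pm = karatsuba_poly_mul([x + y for x, y in zip(a0, a1)],
                                [x + y for x, y in zip(b0, b1)])
        mid = [0] * (2 * h - 1)        # (a0+a1)(b0+b1) - a0*b0 - a1*b1 = a0*b1 + a1*b0
        for i, v in enumerate(pm):
            mid[i] = mid[i] + v
        for i, v in enumerate(p0):
            mid[i] = mid[i] - v
        for i, v in enumerate(p2):
            mid[i] = mid[i] - v
        res = [0] * (2 * n - 1)        # a*b = p0 + x^h * mid + x^(2h) * p2
        for i, v in enumerate(p0):
            res[i] = res[i] + v
        for i, v in enumerate(mid):
            res[i + h] = res[i + h] + v
        for i, v in enumerate(p2):
            res[i + 2 * h] = res[i + 2 * h] + v
        # Drop padding-induced trailing zeros (assumes the ring's zero compares
        # equal to 0; for exotic rings, skip this trimming).
        while len(res) > 1 and res[-1] == 0:
            res.pop()
        return res

    # (1 + 2x + 3x^2)(1 + x + x^2) = 1 + 3x + 6x^2 + 5x^3 + 3x^4
    print(karatsuba_poly_mul([1, 2, 3], [1, 1, 1]))   # -> [1, 3, 6, 5, 3]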
Computing Popov and Hermite forms of polynomial matrices
TLDR
These results are obtained by applying, in the matrix case, the techniques used in the scalar case of the gcd of polynomials to the Hermite normal form.
Gaussian elimination is not optimal
Below we will give an algorithm which computes the coefficients of the product of two square matrices A and B of order n from the coefficients of A and B with less than 4.7·n^(log₂ 7) arithmetical operations.
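For reference, the seven products behind the n^(log₂ 7) bound are the standard Strassen formulas; one level of the scheme, written here on 2 × 2 matrices of numbers purely as an illustration, looks as follows (applying it recursively to n/2 × n/2 blocks yields the stated operation count):

    def strassen_2x2(A, B):
        """One level of Strassen's scheme: 7 multiplications instead of 8."""
        (a11, a12), (a21, a22) = A
        (b11, b12), (b21, b22) = B
        m1 = (a11 + a22) * (b11 + b22)
        m2 = (a21 + a22) * b11
        m3 = a11 * (b12 - b22)
        m4 = a22 * (b21 - b11)
        m5 = (a11 + a12) * b22
        m6 = (a21 - a11) * (b11 + b12)
        m7 = (a12 - a22) * (b21 + b22)
        return [[m1 + m4 - m5 + m7, m3 + m5],
                [m2 + m4,           m1 - m2 + m3 + m6]]

    # Sanity check against the schoolbook product.
    A, B = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
    assert strassen_2x2(A, B) == [[1*5 + 2*7, 1*6 + 2*8],
                                  [3*5 + 4*7, 3*6 + 4*8]]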
Matrix-free linear system solving and applications to symbolic computation
TLDR
The Black Box Berlekamp algorithm, a matrix-free approach based upon the block Wiedemann linear system solver, is applied to the factorization of high-degree polynomials over finite fields, and it is proved that a random Toeplitz matrix is non-singular with probability 1 − 1/q.
Solving homogeneous linear equations over GF (2) via block Wiedemann algorithm
TLDR
A method of solving large sparse systems of homogeneous linear equations over GF(2), the field with two elements, is proposed; it modifies an algorithm due to Wiedemann and is competitive with structured Gaussian elimination in terms of time while having much lower space requirements.
Black box linear algebra with the linbox library
TLDR
This dissertation discusses preconditioners based on Beneš networks to localize the linear independence of a black box matrix, and introduces a technique that uses determinantal divisors to find preconditioners that ensure the cyclicity of nonzero eigenvalues.
High-order lifting and integrality certification
A uniform approach for the fast computation of Matrix-type Padé approximants
TLDR
A recurrence relation is presented for the computation of a basis for the corresponding linear solution space of these approximants, which generalizes previous work by Van Barel and Bultheel and, in a more general form, by Beckermann.
Subquadratic Computation of Vector Generating Polynomials and Improvement of the Block Wiedemann Algorithm
TLDR
A new algorithm for computing linear generators (vector generating polynomials) for matrix sequences, running in subquadratic time, is presented; it applies in particular to the sequential stage of Coppersmith's block Wiedemann algorithm.
…