Computing Popov and Hermite forms of polynomial matrices

  • G. Villard
  • Mathematics
    International Symposium on Symbolic and Algebraic Computation
  • 1 October 1996
For a polynomial matrix P(z) of degree d in ℳn,n(K[z]), where K is a commutative field, a reduction to the Hermite normal form can be computed in O(ndM(n) + M(nd)) arithmetic operations, where M(n) is the time required to multiply two n × n matrices over K. Further, a reduction can be computed using O(log^{ν+1}(nd)) parallel arithmetic steps and O(L(nd)) processors if the same processor bound holds with time O(log^ν(nd)) for determining the lexicographically first maximal linearly independent subset…
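To make concrete what a reduction to Hermite form computes, here is a schoolbook sketch over Q[z] using elementary unimodular row operations in sympy. This is a naive cubic method written for illustration, not the fast algorithm of the paper; the function name and the example matrix are ours.

```python
import sympy as sp

z = sp.symbols('z')

def hermite_form(M):
    """Schoolbook reduction of a nonsingular matrix over Q[z] to row
    Hermite form (upper triangular, monic pivots, entries above each
    pivot of smaller degree) via unimodular row operations."""
    M = [[sp.expand(e) for e in row] for row in M.tolist()]
    n = len(M)
    for j in range(n):
        for i in range(j + 1, n):
            a, b = M[j][j], M[i][j]
            if b == 0:
                continue
            # s*a + t*b = g = gcd(a, b); the 2x2 transform below has
            # determinant 1, hence is unimodular over Q[z]
            s, t, g = sp.gcdex(a, b, z)
            u, v = sp.cancel(-b / g), sp.cancel(a / g)
            row_j = [sp.expand(s * p + t * q) for p, q in zip(M[j], M[i])]
            row_i = [sp.expand(u * p + v * q) for p, q in zip(M[j], M[i])]
            M[j], M[i] = row_j, row_i
        lc = sp.LC(M[j][j], z)               # make the pivot monic
        M[j] = [sp.expand(e / lc) for e in M[j]]
        for k in range(j):                   # reduce entries above the pivot
            q = sp.quo(M[k][j], M[j][j], z)
            M[k] = [sp.expand(p - q * r) for p, r in zip(M[k], M[j])]
    return sp.Matrix(M)
```

Each elimination step applies a determinant-1 transform built from the extended Euclidean algorithm, so the result differs from the input by a unimodular left factor, as in the paper's setting.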

Fast Computation of Shifted Popov Forms of Polynomial Matrices via Systems of Modular Polynomial Equations

We give a Las Vegas algorithm which computes the shifted Popov form of an m × m nonsingular polynomial matrix of degree d in expected Õ(m^ω d) field operations, where ω is the exponent of matrix multiplication.

Hermite form computation of matrices of differential polynomials

The Hermite form H of A is computed by reducing the problem to solving a linear system of equations over F(t), which requires a polynomial number of operations in F in terms of the input sizes: n, deg_D A, and deg_t A.

Normal forms for general polynomial matrices

Fast Parallel Algorithms for Matrix Reduction to Normal Forms

  • G. Villard
  • Mathematics, Computer Science
    Applicable Algebra in Engineering, Communication and Computing
  • 1997
Abstract. We investigate fast parallel algorithms to compute normal forms of matrices and the corresponding transformations. Given a matrix B in ℳn,n(K), where K is an arbitrary commutative field, we…

Computing Popov Forms of Polynomial Matrices

A Las Vegas algorithm that computes the Popov decomposition of matrices of full row rank is given and it is shown that the problem of transforming a row reduced matrix to Popov form is at least as hard as polynomial matrix multiplication.
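The usual stepping stone toward the Popov form is the weak Popov form, in which all rows have distinct pivot columns. Below is a minimal Mulders–Storjohann-style sketch in sympy, given for illustration only; it is not the Las Vegas algorithm of the paper, and the helper names are ours.

```python
import sympy as sp

x = sp.symbols('x')

def pivot(row):
    """Pivot of a row: rightmost entry of maximal degree.
    Returns (column index, degree), or (-1, -1) for a zero row."""
    best, deg = -1, -1
    for j, p in enumerate(row):
        d = int(sp.degree(p, x)) if p != 0 else -1
        if d >= 0 and d >= deg:
            best, deg = j, d
    return best, deg

def weak_popov(M):
    """Reduce M (over Q[x]) to weak Popov form: while two rows share a
    pivot column, subtract a monomial multiple of the lower-degree row
    from the higher-degree one to cancel its leading term."""
    M = [[sp.expand(e) for e in row] for row in M.tolist()]
    while True:
        seen, done = {}, True
        for i in range(len(M)):
            j, d = pivot(M[i])
            if j < 0:
                continue
            if j in seen:
                k, dk = seen[j]
                if d < dk:               # reduce the higher-degree row
                    i, d, k, dk = k, dk, i, d
                c = sp.LC(M[i][j], x) / sp.LC(M[k][j], x)
                mult = c * x**(d - dk)
                M[i] = [sp.expand(a - mult * b) for a, b in zip(M[i], M[k])]
                done = False
                break
            seen[j] = (i, d)
        if done:
            return sp.Matrix(M)
```

Each cancellation strictly decreases a pivot degree, so the loop terminates with all pivot columns distinct.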

Triangular Factorization of Polynomial Matrices

A simple algorithm to recover H when working over the polynomial ring F[x], F a field, is described, which requires O(n³d²) field operations when A ∈ F[x]^{n×n} is nonsingular with degrees of entries bounded by d, and the matrix U is recovered in the same time.

Algorithms for normal forms for matrices of polynomials and Ore polynomials

A fraction-free algorithm for row reduction of matrices of Ore polynomials is obtained by formulating row reduction as a linear algebra problem, and this algorithm is used as a basis for modular algorithms computing a row-reduced form, a weak Popov form, and the Popov form of a polynomial matrix.

Converting between the Popov and the Hermite form of matrices of differential operators using an FGLM-like algorithm

We consider matrices over a ring K[∂; σ, θ] of Ore polynomials over a skew field K. Since the Popov and Hermite normal forms are both Gröbner bases (for term-over-position and position-over-term…

On the complexity of polynomial matrix computations

Under the straight-line program model, it is shown that multiplication is reducible to the problem of computing the coefficient of degree d of the determinant, and algorithms for minimal approximant computation and column reduction based on polynomial matrix multiplication are proposed.

Computing Hermite forms of polynomial matrices

This paper presents a new algorithm for computing the Hermite form of a polynomial matrix that is both softly linear in the degree d and softly cubic in the dimension n, and is randomized of the Las Vegas type.

Greatest common divisor of several polynomials

  • S. Barnett
  • Mathematics
    Mathematical Proceedings of the Cambridge Philosophical Society
  • 1971
Abstract Given a polynomial a(λ) with degree n, and polynomials b1(λ), …, bm(λ) of degree not greater than n − 1, then the degree k of the greatest common divisor of the polynomials is equal to the…
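Barnett's rank criterion can be checked directly: with C the companion matrix of a monic a(λ) of degree n, the degree of gcd(a, b1, …, bm) equals n minus the rank of the stacked matrix [b1(C); …; bm(C)]. A small sympy sketch, with helper names of our own choosing:

```python
import sympy as sp

lam = sp.symbols('lambda')

def companion(a):
    """Companion matrix of a monic polynomial a(lambda) of degree n."""
    coeffs = sp.Poly(a, lam).all_coeffs()    # [1, a_{n-1}, ..., a_0]
    n = len(coeffs) - 1
    C = sp.zeros(n, n)
    for i in range(1, n):
        C[i, i - 1] = 1                      # subdiagonal of ones
    for i in range(n):
        C[i, n - 1] = -coeffs[n - i]         # last column holds -a_i
    return C

def poly_at_matrix(b, C):
    """Evaluate b(C) by Horner's rule."""
    n = C.shape[0]
    B = sp.zeros(n, n)
    for c in sp.Poly(b, lam).all_coeffs():
        B = B * C + c * sp.eye(n)
    return B

def gcd_degree(a, bs):
    """Barnett: deg gcd(a, b1, ..., bm) = n - rank([b1(C); ...; bm(C)])."""
    C = companion(a)
    stacked = sp.Matrix.vstack(*[poly_at_matrix(b, C) for b in bs])
    return C.shape[0] - stacked.rank()
```

For example, a = (λ−1)²(λ+2) with b1 = (λ−1)(λ+3) and b2 = (λ−1)(λ−5) share only the root λ = 1, so the rank test reports a gcd of degree 1.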

A matrix pencil based numerical method for the computation of the GCD of polynomials

The method determines the exact degree of the GCD, works satisfactorily with any number of polynomials, and successfully evaluates approximate solutions.

The minimal polynomials, characteristic subspaces, normal bases and the Frobenius form

Various algorithms connected with the computation of the minimal polynomial of a square n × n matrix over a field k are presented here, and the complexity obtained is better than that of the heretofore best known deterministic algorithm.

Asymptotically fast computation of Hermite normal forms of integer matrices

This paper presents a new algorithm for computing the Hermite normal form H of an A ∈ ℤ^{n×m} of rank m, together with a unimodular pre-multiplier matrix U such that UA = H. Our algorithm requires O(m^{θ−1}nM(m log‖A‖))…

Worst-Case Complexity Bounds on Algorithms for Computing the Canonical Structure of Finite Abelian Groups and the Hermite and Smith Normal Forms of an Integer Matrix

The upper bounds derived on the computational complexity of the algorithms above improve the upper bounds given by Kannan and Bachem in [SIAM J. Comput., 8 (1979), pp. 499–507].

Multipolynomial resultants and linear algebra

The problem of eliminating variables from a set of polynomial equations arises in many symbolic and numeric applications. The three main approaches are resultants, Gröbner bases and the Wu–Ritt…

Fast Parallel Computation of the Polynomial Remainder Sequence Via Bezout and Hankel Matrices

All the coefficients of the polynomials generated by the Euclidean scheme applied to u and v can be computed using O(log³ n) parallel arithmetic steps and n²/log n processors over any field of characteristic 0 supporting FFT (Fast Fourier Transform).
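For reference, the sequential version of the Euclidean scheme that the paper parallelizes is just the classical remainder sequence. A minimal sympy sketch (ours, for illustration):

```python
import sympy as sp

x = sp.symbols('x')

def remainder_sequence(u, v):
    """Classical Euclidean remainder sequence of u and v over Q[x]:
    r0 = u, r1 = v, r_{i+1} = r_{i-1} mod r_i, stopping when the
    remainder vanishes. The last element is a gcd of u and v."""
    seq = [sp.Poly(u, x), sp.Poly(v, x)]
    while not seq[-1].is_zero:
        seq.append(seq[-2].rem(seq[-1]))
    return seq[:-1]
```

The Bezout/Hankel-matrix formulation in the paper recovers all coefficients of this sequence at once instead of iterating.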

Processor efficient parallel solution of linear systems over an abstract field

Parallel randomized algorithms are presented that solve n-dimensional systems of linear equations and compute inverses of n × n non-singular matrices over a field in O((log n)²) time, where each time…

Solving homogeneous linear equations over GF (2) via block Wiedemann algorithm

A method of solving large sparse systems of homogeneous linear equations over GF(2), the field with two elements, is proposed, and an algorithm due to Wiedemann is modified, which is competitive with structured Gaussian elimination in terms of time and has much lower space requirements.
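The Wiedemann family of algorithms recovers minimal linear recurrences from bit sequences; the scalar core of that step is the Berlekamp–Massey algorithm over GF(2). A self-contained sketch of that subroutine only (not the block variant the paper develops):

```python
def berlekamp_massey_gf2(s):
    """Return (L, C): the linear complexity L of the bit sequence s
    and a connection polynomial C (coefficient list, C[0] = 1) such
    that for n >= L:  s[n] = XOR of C[i] & s[n-i] for i = 1..L."""
    C, B = [1], [1]   # current and previous connection polynomials
    L, m = 0, 1       # L: current linear complexity; m: shift for B
    for n in range(len(s)):
        # discrepancy between the predicted and the actual bit
        d = s[n]
        for i in range(1, L + 1):
            d ^= C[i] & s[n - i]
        if d == 0:
            m += 1
            continue
        T = C[:]                          # save C before modifying it
        if len(B) + m > len(C):           # C <- C + x^m * B  (over GF(2))
            C += [0] * (len(B) + m - len(C))
        for i, b in enumerate(B):
            C[i + m] ^= b
        if 2 * L <= n:                    # length change: update L and B
            L, B, m = n + 1 - L, T, 1
        else:
            m += 1
    return L, C
```

For instance, the Fibonacci-mod-2 sequence 1, 1, 0, 1, 1, 0, … satisfies s[n] = s[n−1] ⊕ s[n−2], so the routine reports linear complexity 2 with connection polynomial 1 + x + x².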