Shifted normal forms of polynomial matrices

  • Bernhard Beckermann, George Labahn, Gilles Villard
  • ISSAC '99

In this paper we study the problem of transforming, via invertible column operations, a matrix polynomial into a variety of shifted forms. Examples of forms covered in our framework include a column reduced form, a triangular form, a Hermite normal form or a Popov normal form, along with their shifted counterparts. By obtaining degree bounds for unimodular multipliers of shifted Popov forms we are able to embed the problem of computing a normal…

A computational view on normal forms of matrices of Ore polynomials

This thesis treats normal forms of matrices over rings of Ore polynomials as well as a modular algorithm for computing a Jacobson normal form which is guaranteed to succeed in characteristic zero, but under certain conditions also yields a result in positive characteristic.

Algorithms for normal forms for matrices of polynomials and ore polynomials

A fraction-free algorithm for row reduction of matrices of Ore polynomials is obtained by formulating row reduction as a linear algebra problem, and this algorithm is used as a basis for modular algorithms that compute a row-reduced form, a weak Popov form, and the Popov form of a polynomial matrix.
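
As background to the row-reduction step, here is a minimal sketch of the classical Mulders–Storjohann weak Popov iteration over ℚ[x], with polynomials represented as low-to-high coefficient lists. This illustrates only the target form; it is not the fraction-free or modular algorithm of the thesis.

```python
from fractions import Fraction

def deg(p):
    # degree of a coefficient list (low-to-high); -1 for the zero polynomial
    for i in range(len(p) - 1, -1, -1):
        if p[i] != 0:
            return i
    return -1

def pivot(row):
    # row pivot: rightmost entry of maximal degree; (None, -1) for a zero row
    d = max(deg(p) for p in row)
    if d < 0:
        return None, -1
    j = max(j for j, p in enumerate(row) if deg(p) == d)
    return j, d

def sub_shifted(p, q, c, k):
    # return p - c * x^k * q
    out = p[:] + [Fraction(0)] * max(0, deg(q) + k + 1 - len(p))
    for i in range(deg(q) + 1):
        out[i + k] -= c * q[i]
    return out

def weak_popov(M):
    # repeat: while two rows share a pivot column, cancel the
    # higher-degree pivot using the other row (a unimodular operation)
    rows = [[list(map(Fraction, p)) for p in r] for r in M]
    while True:
        info = [pivot(r) for r in rows]
        clash = None
        for i in range(len(rows)):
            for j in range(len(rows)):
                if (i != j and info[i][0] is not None
                        and info[i][0] == info[j][0]
                        and info[i][1] >= info[j][1]):
                    clash = (i, j)
                    break
            if clash:
                break
        if clash is None:
            return rows
        i, j = clash
        (pi, di), (pj, dj) = info[i], info[j]
        c = rows[i][pi][di] / rows[j][pj][dj]
        rows[i] = [sub_shifted(p, q, c, di - dj)
                   for p, q in zip(rows[i], rows[j])]
```

For instance, `weak_popov([[[0, 0, 1], [0, 1]], [[1, 0, 1], [1]]])` (the matrix [[x², x], [x²+1, 1]]) performs a single cancellation and leaves rows with distinct pivot columns.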

Normal forms for general polynomial matrices

Output-sensitive modular algorithms for polynomial matrix normal forms

Computing Matrix Canonical Forms of Ore Polynomials

This thesis presents algorithms to compute canonical forms of non-singular input matrix of Ore polynomials while controlling intermediate expression swell, and uses the recent advances in polynomial matrix computations to describe an algorithm that computes the transformation matrix U such that UA = P.

Computing Popov Form of Ore Polynomial Matrices

It is shown that the computation of the Popov form of Ore polynomial matrices can be formulated as a problem of computing the left nullspace of such matrices, and that recent fraction-free and modular algorithms for nullspace computation can be used in exact arithmetic setting where coefficient growth is a concern.

Algorithms for Linearly Recurrent Sequences of Truncated Polynomials

This paper focuses on sequences whose elements are vectors over the ring 𝔸 = 𝕂[x]/⟨x^d⟩ of truncated polynomials, and presents three methods for finding the ideal of their recurrence relations: a Berlekamp–Massey-like approach due to Kurakin, one via minimal approximant bases, and one based on bivariate Padé approximation.
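
For context, the classical Berlekamp–Massey algorithm over a prime field GF(p) — the base case that Kurakin's method generalizes to vectors over 𝕂[x]/⟨x^d⟩ — can be sketched as follows (a textbook version, not the paper's algorithm):

```python
def berlekamp_massey(s, p):
    # minimal connection polynomial C(x) = 1 + c1*x + ... + cL*x^L over GF(p)
    # such that s[n] + c1*s[n-1] + ... + cL*s[n-L] == 0 for all n >= L
    C, B = [1], [1]
    L, m, b = 0, 1, 1
    for n in range(len(s)):
        # discrepancy: how far the current C fails to predict s[n]
        d = s[n] % p
        for i in range(1, L + 1):
            d = (d + C[i] * s[n - i]) % p
        if d == 0:
            m += 1
            continue
        coef = d * pow(b, p - 2, p) % p   # d / b in GF(p)
        newC = C[:]
        if len(B) + m > len(newC):
            newC += [0] * (len(B) + m - len(newC))
        for i, bi in enumerate(B):        # C(x) -= coef * x^m * B(x)
            newC[i + m] = (newC[i + m] - coef * bi) % p
        if 2 * L <= n:
            B, b, L, m = C, d, n + 1 - L, 1
        else:
            m += 1
        C = newC
    return C, L
```

On the Fibonacci sequence mod 7 this recovers the recurrence s[n] = s[n-1] + s[n-2], i.e. C(x) = 1 - x - x² with coefficients [1, 6, 6] and L = 2.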

Triangular Factorization of Polynomial Matrices

A simple algorithm to recover H when working over the polynomial ring F[x], F a field, is described, which requires O(n³d²) field operations when A ∈ F[x]^{n×n} is nonsingular with degrees of entries bounded by d, and the matrix U is recovered in the same time.

Computing Popov Forms of Polynomial Matrices

A Las Vegas algorithm that computes the Popov decomposition of matrices of full row rank is given and it is shown that the problem of transforming a row reduced matrix to Popov form is at least as hard as polynomial matrix multiplication.

Computing Popov and Hermite forms of polynomial matrices

These results are obtained by applying, in the matrix case, the techniques used in the scalar case of the gcd of polynomials to the Hermite normal form.

Asymptotically fast computation of Hermite normal forms of integer matrices

This paper presents a new algorithm for computing the Hermite normal form H of an A ∈ ℤ^{n×m} of rank m, together with a unimodular pre-multiplier matrix U such that UA = H. Our algorithm requires O(m…nM(m…))…
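
A naive cubic-time version of this computation — tracking the unimodular multiplier U alongside the row operations, with none of the asymptotic improvements or coefficient-growth control of the paper — can be sketched as:

```python
def hnf(A):
    # row-style Hermite normal form of an integer matrix via Euclidean
    # row operations; returns (H, U) with U unimodular and U*A == H
    n, m = len(A), len(A[0])
    H = [row[:] for row in A]
    U = [[int(i == j) for j in range(n)] for i in range(n)]
    r = 0                                   # current pivot row
    for c in range(m):
        piv = next((i for i in range(r, n) if H[i][c] != 0), None)
        if piv is None:
            continue
        H[r], H[piv] = H[piv], H[r]
        U[r], U[piv] = U[piv], U[r]
        for i in range(r + 1, n):
            # Euclidean loop: shrink column entries until H[i][c] == 0
            while H[i][c] != 0:
                q = H[r][c] // H[i][c]
                H[r] = [a - q * b for a, b in zip(H[r], H[i])]
                U[r] = [a - q * b for a, b in zip(U[r], U[i])]
                H[r], H[i] = H[i], H[r]
                U[r], U[i] = U[i], U[r]
        if H[r][c] < 0:                     # normalize pivot sign
            H[r] = [-a for a in H[r]]
            U[r] = [-a for a in U[r]]
        for i in range(r):                  # reduce entries above the pivot
            q = H[i][c] // H[r][c]
            H[i] = [a - q * b for a, b in zip(H[i], H[r])]
            U[i] = [a - q * b for a, b in zip(U[i], U[r])]
        r += 1
    return H, U
```

For example, `hnf([[2, 4], [3, 5]])` yields H = [[1, 1], [0, 2]] together with a U of determinant ±1 satisfying UA = H.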

Preconditioning of rectangular polynomial matrices for efficient Hermite normal form computation

A Las Vegas probabilistic algorithm reduces the computation of Hermite normal forms of rectangular polynomial matrices, and allows for the efficient computation of one-sided GCDs of two matrix polynomials along with the solution of the matrix Diophantine equation associated to such a GCD.

Rational matrix structure

  • G. Verghese, T. Kailath
  • Mathematics
    1979 18th IEEE Conference on Decision and Control including the Symposium on Adaptive Processes
  • 1979
Recent work [1]–[9] has brought out the importance of a closer examination of the pole/zero and vector-space structure of rational matrices G(s). Results developed by several people are brought together…

Solving Systems of Linear Equations over Polynomials

  • R. Kannan
  • Mathematics, Computer Science
    Theor. Comput. Sci.
  • 1985

Minimal Bases of Rational Vector Spaces, with Applications to Multivariable Linear Systems

It is shown how minimal bases can be used to factor a transfer function matrix G in the form G = ND⁻¹, where N and D are polynomial matrices that display the controllability indices of G and its controller canonical realization.

A Uniform Approach for the Fast Computation of Matrix-Type Pade Approximants

A recurrence relation is presented for the computation of a basis for the corresponding linear solution space of these approximants, and these methods result in fast (and superfast) reliable algorithms for the inversion of striped Hankel, layered Hankel, and (rectangular) block-Hankel matrices.

Extended GCD and Hermite Normal Form Algorithms via Lattice Basis Reduction

An algorithm which uses lattice basis reduction to produce small integer multipliers x₁, …, xₘ for the equation s = gcd(s₁, …, sₘ) = x₁s₁ + ⋯ + xₘsₘ, where s₁, …, sₘ are given integers.
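
For comparison, plain iterated extended Euclid also produces multipliers satisfying this equation, though without the small-size guarantee that the lattice-reduction method of the paper provides; a minimal sketch:

```python
def ext_gcd_list(nums):
    # multipliers xs with sum(x_i * s_i) == gcd(s_1, ..., s_m);
    # iterated extended Euclid -- NOT the lattice-based method, so the
    # multipliers may be large
    def ext_gcd(a, b):
        if b == 0:
            return a, 1, 0
        g, x, y = ext_gcd(b, a % b)
        return g, y, x - (a // b) * y

    g, xs = nums[0], [1]
    for s in nums[1:]:
        g2, u, v = ext_gcd(g, s)      # g2 = u*g + v*s
        xs = [u * x for x in xs] + [v]
        g = g2
    return g, xs
```

For example, `ext_gcd_list([12, 18, 27])` returns gcd 3 together with multipliers certifying 3 as an integer combination of 12, 18, and 27.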

Fraction-free computation of matrix Padé systems

A fraction-free approach to the computation of matrix Padé systems is obtained by determining a modified Schur complement for the coefficient matrices of the linear systems of equations that are associated to matrix Padé approximation problems.