Fast Computation of Shifted Popov Forms of Polynomial Matrices via Systems of Modular Polynomial Equations

@article{Neiger2016FastCO,
  title={Fast Computation of Shifted Popov Forms of Polynomial Matrices via Systems of Modular Polynomial Equations},
  author={Vincent Neiger},
  journal={Proceedings of the ACM on International Symposium on Symbolic and Algebraic Computation},
  year={2016}
}
  • Vincent Neiger
  • Published 1 February 2016
  • Mathematics, Computer Science
  • Proceedings of the ACM on International Symposium on Symbolic and Algebraic Computation
We give a Las Vegas algorithm which computes the shifted Popov form of an m × m nonsingular polynomial matrix of degree d in expected ~O(m^ω d) field operations, where ω is the exponent of matrix multiplication and ~O(·) indicates that logarithmic factors are omitted. This is the first algorithm in ~O(m^ω d) for shifted row reduction with arbitrary shifts. Using partial linearization, we reduce the problem to the case d ≤ ⌈σ/m⌉ where σ is the generic determinant bound, with σ/m bounded from…
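As a concrete illustration of the object being computed, here is a minimal Python sketch (hypothetical helper names, naive coefficient-list polynomials) that merely checks the defining property of the s-Popov form under the row-wise pivot convention; it is not the paper's ~O(m^ω d) algorithm, which actually computes that form.

# A minimal sketch, not the paper's algorithm: polynomials are coefficient
# lists (lowest degree first); all names below are hypothetical.

NEG_INF = float("-inf")

def deg(p):
    """Degree of a coefficient list, with deg(0) = -infinity."""
    for i in range(len(p) - 1, -1, -1):
        if p[i] != 0:
            return i
    return NEG_INF

def s_pivot_index(row, s):
    """Index of the s-pivot: rightmost nonzero entry maximizing deg + shift."""
    best, best_val = None, NEG_INF
    for j, p in enumerate(row):
        v = deg(p) + s[j]
        if v != NEG_INF and v >= best_val:
            best, best_val = j, v
    return best

def is_s_popov(P, s):
    """Check that the square matrix P is in s-Popov form: every s-pivot lies
    on the diagonal, is monic, and its degree strictly exceeds that of the
    other entries in its column."""
    m = len(P)
    for i in range(m):
        if s_pivot_index(P[i], s) != i:
            return False
        d = deg(P[i][i])
        if P[i][i][d] != 1:  # pivot entries must be monic
            return False
        if any(deg(P[k][i]) >= d for k in range(m) if k != i):
            return False
    return True

# Example: [[x^2, 1], [x, x+1]] is in 0-Popov form; with the shift (0, 3)
# both s-pivots move to the second column, so the property is lost.
x2, one, x, xp1 = [0, 0, 1], [1], [0, 1], [1, 1]
print(is_s_popov([[x2, one], [x, xp1]], [0, 0]))  # True
print(is_s_popov([[x2, one], [x, xp1]], [0, 3]))  # False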


Computing Matrix Canonical Forms of Ore Polynomials
TLDR
This thesis presents algorithms to compute canonical forms of a non-singular input matrix of Ore polynomials while controlling intermediate expression swell, and uses recent advances in polynomial matrix computations to describe an algorithm that computes a transformation matrix U such that UA = P.
Computing Canonical Bases of Modules of Univariate Relations
TLDR
The triangular shape of M is exploited to generalize a divide-and-conquer approach which originates from fast minimal approximant basis algorithms and relies on high-order lifting to perform fast modular products of polynomial matrices of the form P F mod M.
Fast Matrix Multiplication and Symbolic Computation
TLDR
The unfinished history of decreasing the exponent towards its information lower bound of 2 is surveyed, important techniques discovered along the way and their links to other fields of computing are recalled, surprising sample applications to fast computation of inner products of two vectors and summation of integers are presented, and the curse of recursion is discussed.
Rank-Sensitive Computation of the Rank Profile of a Polynomial Matrix
TLDR
This work gives an algorithm which improves the minimal kernel basis algorithm of Zhou, Labahn, and Storjohann, and provides a second algorithm which computes the column rank profile of F with a rank-sensitive complexity of O~(r^(ω-2) n(m+d)) operations in K.
Fast computation of approximant bases in canonical form
Computing Popov and Hermite Forms of Rectangular Polynomial Matrices
TLDR
Deterministic fast algorithms are presented for computing normal forms of rectangular input matrices over the univariate polynomials.
Algorithms for simultaneous Hermite-Padé approximations
Fast Decoding of Codes in the Rank, Subspace, and Sum-Rank Metric
TLDR
A skew-analogue of the existing PM-Basis algorithm for matrices over ordinary polynomials is described; it captures the bulk of the work in multiplications in the skew polynomial ring, and the complexity benefit comes from existing algorithms that perform these multiplications faster than the classical quadratic complexity.
Bases of relations in one or several variables: fast algorithms and applications
In this thesis, we study algorithms for the problem of finding relations in one or several variables, which generalizes that of computing a solution to a system of linear modular equations over a…
...

References

SHOWING 1-10 OF 49 REFERENCES
Computing Popov and Hermite forms of polynomial matrices
TLDR
These results are obtained by applying to the Hermite normal form, in the matrix case, the techniques used in the scalar case for the gcd of polynomials.
Fast Computation of Minimal Interpolation Bases in Popov Form for Arbitrary Shifts
TLDR
To obtain the target cost for any shift, the properties of the output bases, and of those obtained during the course of the algorithm, are strengthened: all the bases are computed in shifted Popov form, whose size is always O(mσ), and a divide-and-conquer scheme is designed.
Normal forms for general polynomial matrices
Triangular x-basis decompositions and derandomization of linear algebra algorithms over K[x]
Asymptotically fast computation of Hermite normal forms of integer matrices
This paper presents a new algorithm for computing the Hermite normal form H of an A ∈ Z^(n×m) of rank m, together with a unimodular pre-multiplier matrix U such that UA = H. Our algorithm requires O~(m^(θ-1) n M(m log ‖A‖))…
Faster Algorithms for Multivariate Interpolation With Multiplicities and Simultaneous Polynomial Approximations
TLDR
This paper reduces this multivariate interpolation problem to a problem of simultaneous polynomial approximations, which is solved using fast structured linear algebra and improves the best known complexity bounds for the interpolation step of the list-decoding of Reed-Solomon codes, Parvaresh-Vardy codes, and folded Reed-Solomon codes.
A deterministic algorithm for inverting a polynomial matrix
Linear diophantine equations over polynomials and soft decoding of Reed-Solomon codes
  • M. Alekhnovich
  • Computer Science
    IEEE Transactions on Information Theory
  • 2005
TLDR
This paper gives another fast algorithm for the soft decoding of Reed-Solomon codes, different from the procedure proposed by Feng, which works in time (w/r)^(O(1)) n log^2 n, where r is the rate of the code and w is the maximal weight assigned to a vertical line.
On the complexity of polynomial matrix computations
TLDR
Under the straight-line program model, it is shown that multiplication is reducible to the problem of computing the coefficient of degree d of the determinant, and algorithms for minimal approximant computation and column reduction that are based on polynomial matrix multiplication are proposed.
Ideal forms of Coppersmith's theorem and Guruswami-Sudan list decoding
TLDR
A framework for solving polynomial equations with size constraints on solutions is developed, and powerful analogies from algebraic number theory allow us to identify the appropriate analogue of a lattice in each application and to provide efficient algorithms for finding a suitably short vector, thus allowing completely parallel proofs of the above theorems.
...