A fraction free Matrix Berlekamp/Massey algorithm

@article{Kaltofen2013AFF,
  title={A fraction free Matrix Berlekamp/Massey algorithm},
  author={Erich L. Kaltofen and George Yuhasz},
  journal={Linear Algebra and its Applications},
  year={2013},
  volume={439},
  pages={2515-2526}
}

Citations

Linear Algebra for Computing Gröbner Bases of Linear Recursive Multidimensional Sequences
TLDR
An FGLM-like algorithm is produced for finding the relations in the table, which allows linear algebra techniques to be used, including fast structured linear algebra, similarly to the Hankel interpretation of Berlekamp-Massey.
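For orientation (this is not the reference's algorithm, which targets multidimensional sequences), the Hankel interpretation of Berlekamp-Massey mentioned above can be sketched in a few lines of Python: the coefficients of a length-L linear recurrence satisfied by a sequence solve a Hankel linear system built from that sequence. The sequence and names below are illustrative only.

import numpy as np

# Recover the recurrence of a scalar sequence by solving the Hankel system
# H c = -(a_L, ..., a_{2L-1}), where H[i, j] = a[i + j].
a = np.array([1.0, 1.0, 2.0, 3.0, 5.0, 8.0])   # Fibonacci, recurrence length L = 2
L = 2
H = np.array([[a[i + j] for j in range(L)] for i in range(L)])
c = np.linalg.solve(H, -a[L:2 * L])
# a[j + 2] + c[1]*a[j + 1] + c[0]*a[j] = 0, so c should come out as (-1, -1).
print(c)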
Fast estimates of Hankel matrix condition numbers and numeric sparse interpolation
TLDR
Experiments demonstrate that using the Gohberg-Semencul formula for the inverse of a Hankel matrix to compute estimates of the structured condition numbers of all arising Hankel matrices, in quadratic arithmetic time overall, leads to a viable termination criterion for polynomials with about 20 non-zero terms and of degree about 100, even in the presence of noise of relative magnitude 10^-5.
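The termination idea behind this reference can be illustrated naively: for a sequence built from t exponential terms, the Hankel matrices of order greater than t are singular, so their condition numbers blow up. The sketch below, with a hypothetical 2-term sequence, computes dense SVD-based condition numbers with NumPy/SciPy; the reference's contribution is structured estimates via the Gohberg-Semencul formula in quadratic arithmetic time overall, which this sketch does not attempt.

import numpy as np
from scipy.linalg import hankel

# Hypothetical 2-term exponential sum: a_j = 2^j + 5^j.
a = np.array([2.0 ** j + 5.0 ** j for j in range(8)])

# Condition numbers of the k x k Hankel matrices H[i, j] = a[i + j]; once k
# exceeds the number of terms (here 2), H is singular and cond(H) explodes,
# giving a naive termination signal.
for k in range(1, 5):
    H = hankel(a[:k], a[k - 1:2 * k - 1])
    print(k, np.linalg.cond(H))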
Common Factors in Fraction-Free Matrix Decompositions
TLDR
It is shown that fraction-free Gauß–Bareiss reduction leads to triangular matrices having a non-trivial number of common row factors in the LU and QR matrix decompositions, using exact computations.
Sparse Polynomial Hermite Interpolation
TLDR
These algorithms generalize to multivariate polynomials, higher derivatives, and sparsity with respect to Chebyshev polynomial bases, and can correct errors in the points by oversampling at a limited number of good values.
Numerical Sparsity Determination and Early Termination
TLDR
An algorithm is given that can be used to compute the sparsity and estimate the minimal number of samples needed in numerical sparse interpolation; the early termination strategy of polynomial interpolation is incorporated into the algorithm.
On the matrix feedback shift register synthesis for matrix sequences
In this paper, a generalization of the linear feedback shift register synthesis problem is presented for synthesizing minimum-length matrix feedback shift registers (MFSRs for short) to generate a given matrix sequence.
Sparse Interpolation With Errors in Chebyshev Basis Beyond Redundant-Block Decoding
TLDR
Sparse interpolation algorithms are presented for recovering a polynomial with a bounded number of terms from evaluations at distinct values of the variable in the Chebyshev basis; the algorithms return a list of valid sparse interpolants.
Sparse Polynomial Interpolation and Testing
TLDR
Two methods are given for the interpolation of a sparse polynomial modelled by a straight-line program (SLP), a sequence of arithmetic instructions; one of the methods uses randomized Kronecker substitutions to more efficiently reconstruct a sparse interpolant f from multiple univariate images of considerably reduced degree.
...

References

Showing 1-10 of 29 references
On the Matrix Berlekamp/Massey Algorithm
TLDR
This work analyzes the Matrix Berlekamp/Massey algorithm and gives new proofs of correctness and complexity based on self-contained loop invariants, and it includes an explicit termination criterion for a given determinantal degree bound of the minimal matrix generator.
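For comparison with the matrix case analyzed in that reference, here is a minimal sketch of the classical scalar Berlekamp/Massey algorithm over a prime field GF(p). It is illustrative only (the names and test sequence are made up) and implements neither the matrix generalization nor the fraction-free variant discussed in this paper.

def berlekamp_massey(seq, p):
    """Shortest LFSR (connection polynomial C, constant term first, and its
    length L) generating the sequence `seq` modulo a prime p."""
    C, B = [1], [1]      # current / previous connection polynomials
    L, m, b = 0, 1, 1    # LFSR length, steps since last length change, last discrepancy
    for n, s in enumerate(seq):
        # discrepancy between s and the value predicted by the current LFSR
        d = (s + sum(C[i] * seq[n - i] for i in range(1, L + 1))) % p
        if d == 0:
            m += 1
            continue
        coef = d * pow(b, p - 2, p) % p           # d / b in GF(p)
        T = C[:]
        C = C + [0] * (len(B) + m - len(C))       # make room for x^m * B(x)
        for i, bi in enumerate(B):                # C(x) -= (d/b) * x^m * B(x)
            C[i + m] = (C[i + m] - coef * bi) % p
        if 2 * L <= n:                            # length change
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return C, L

# Example: a Fibonacci-like sequence mod 7 is generated by an LFSR of length 2.
print(berlekamp_massey([1, 1, 2, 3, 5, 1, 6, 0], 7))   # -> ([1, 6, 6], 2)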
Algorithms for computing the sparsest shifts of polynomials via the Berlekamp/Massey algorithm
TLDR
A fraction-free version of the Berlekamp/Massey algorithm is given which does not require rational numbers or functions, nor GCD operations on the arising numerators and denominators, and which is more efficient than the classical extended Euclidean algorithm.
A minimal realization algorithm for matrix sequences
We give an algorithm for solving the Padé approximation problem for matrix sequences over an arbitrary field. The algorithm is a multivariate version of one first proposed by Berlekamp and Massey.
Fraction-free computation of matrix Padé systems
TLDR
A fraction-free approach to the computation of matrix Padé systems is given, based on determining a modified Schur complement for the coefficient matrices of the linear systems of equations that are associated to matrix Padé approximation problems.
Solving homogeneous linear equations over GF (2) via block Wiedemann algorithm
TLDR
A method of solving large sparse systems of homogeneous linear equations over GF(2), the field with two elements, is proposed and an algorithm due to Wiedemann is modified, which is competitive with structured Gaussian elimination in terms of time and has much lower space requirements.
Algebraic coding theory
  • E. Berlekamp
  • McGraw-Hill series in systems science
  • 1968
This is the revised edition of Berlekamp's famous book, "Algebraic Coding Theory," originally published in 1968, wherein he introduced several algorithms which have subsequently dominated engineering practice in this field.
On Euclid's Algorithm and the Theory of Subresultants
TLDR
An elementary treatment of the theory of subresultants is presented, and the relationship of the subresultants of a given pair of polynomials to their polynomial remainder sequence as determined by Euclid's algorithm is examined.
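The coefficient growth that subresultant theory explains and controls is easy to observe by running the plain Euclidean remainder sequence over the rationals. The sketch below is illustrative only (it is not the subresultant PRS of the reference): on Knuth's classical example pair, whose gcd is 1, the intermediate remainders acquire large numerators and denominators, which the subresultant sequence avoids by working with integer polynomials and exact divisions.

from fractions import Fraction

def euclidean_prs(f, g):
    """Plain Euclidean polynomial remainder sequence over Q.
    Polynomials are dense coefficient lists, highest degree first."""
    def rem(a, b):
        a = [x for x in a]
        while len(a) >= len(b) and any(a):
            q = a[0] / b[0]
            for i in range(len(b)):
                a[i] -= q * b[i]
            a.pop(0)                      # leading coefficient is now zero
        while len(a) > 1 and a[0] == 0:   # strip remaining leading zeros
            a.pop(0)
        return a or [Fraction(0)]
    f = [Fraction(c) for c in f]
    g = [Fraction(c) for c in g]
    seq = [f, g]
    while any(seq[-1]) and len(seq[-1]) > 1:
        seq.append(rem(seq[-2], seq[-1]))
    return seq

# Knuth's classical example: gcd(f, g) = 1, yet the intermediate remainders
# over Q have rapidly growing numerators and denominators.
f = [1, 0, 1, 0, -3, -3, 8, 2, -5]   # x^8 + x^6 - 3x^4 - 3x^3 + 8x^2 + 2x - 5
g = [3, 0, 5, 0, -4, -9, 21]         # 3x^6 + 5x^4 - 4x^2 - 9x + 21
for r in euclidean_prs(f, g):
    print(r)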
Fraction-Free Computation of Matrix Rational Interpolants and Matrix GCDs
TLDR
A new set of algorithms is presented for the computation of matrix rational interpolants and one-sided matrix greatest common divisors, suitable for computation in exact arithmetic domains where growth of coefficients in intermediate computations is a central concern.
On Euclid's algorithm and the computation of polynomial greatest common divisors
TLDR
This paper examines the computation of polynomial greatest common divisors by various generalizations of Euclid's algorithm and shows that the modular algorithm is markedly superior.
Sylvester’s identity and multistep integer-preserving Gaussian elimination
A method is developed which permits integer-preserving elimination in systems of linear equations, AX = B, such that (a) the magnitudes of the coefficients in the transformed matrices are minimized,
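A minimal sketch of single-step integer-preserving (Bareiss) elimination based on Sylvester's identity, under the simplifying assumption that all leading principal pivots are nonzero (no pivoting): every intermediate entry stays an integer because the marked division is exact, and the final pivot of a square matrix is its determinant. The function name and example matrix are illustrative.

def bareiss_det(A):
    """One-step Bareiss (fraction-free) elimination for an integer matrix A,
    given as a list of rows.  Assumes all leading principal pivots are
    nonzero.  Returns det(A)."""
    M = [row[:] for row in A]
    n = len(M)
    prev = 1                            # pivot of the previous step
    for k in range(n - 1):
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # Sylvester's identity guarantees this division is exact
                M[i][j] = (M[i][j] * M[k][k] - M[i][k] * M[k][j]) // prev
            M[i][k] = 0
        prev = M[k][k]
    return M[n - 1][n - 1]              # last pivot = determinant

# Example: an integer 3 x 3 matrix with determinant -90.
print(bareiss_det([[3, 1, 4], [1, 5, 9], [2, 6, 5]]))   # -> -90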
...