We study the link between the complexity of polynomial matrix multiplication and the complexity of solving other basic linear algebra problems on polynomial matrices. By polynomial matrices we mean <i>n</i> × <i>n</i> matrices in <b>K</b>[<i>x</i>] of degree bounded by <i>d</i>, with <b>K</b> a commutative field. Under the straight-line program model we…
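To fix intuition for the objects under study, here is a toy schoolbook product of polynomial matrices with entries stored as coefficient arrays. This is our own illustration (the helper name `polymat_mul` is not from the paper), and it is deliberately the naive cubic loop that the paper's fast algorithms improve upon.

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Naive product of n x n polynomial matrices; each entry is a coefficient
# array, lowest degree first. Schoolbook O(n^3) entry products.
def polymat_mul(A, B):
    n = len(A)
    C = [[np.zeros(1) for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            acc = np.zeros(1)
            for k in range(n):
                acc = P.polyadd(acc, P.polymul(A[i][k], B[k][j]))
            C[i][j] = acc
    return C

# A = [[1 + 2x, x], [3, 1 + x^2]] times the 2 x 2 identity gives A back.
A = [[np.array([1.0, 2.0]), np.array([0.0, 1.0])],
     [np.array([3.0]), np.array([1.0, 0.0, 1.0])]]
I2 = [[np.array([1.0]), np.array([0.0])],
      [np.array([0.0]), np.array([1.0])]]
C = polymat_mul(A, I2)
```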

We present an inversion algorithm for nonsingular n × n matrices whose entries are degree d polynomials over a field. The algorithm is deterministic and, when n is a power of two, requires O˜(n<sup>3</sup>d) field operations for a generic input; the soft-O notation O˜ indicates some missing log(nd) factors. Up to such logarithmic factors, this asymptotic complexity…

Linear systems with structures such as Toeplitz, Vandermonde or Cauchy-likeness can be solved in O˜(α<sup>2</sup>n) operations, where n is the matrix size, α is its displacement rank, and O˜ denotes the omission of logarithmic factors. We show that for such matrices, this cost can be reduced to O˜(α<sup>ω−1</sup>n), where ω is a feasible exponent for matrix multiplication…
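To make the notion of displacement rank concrete, the sketch below (our own illustration; the names are not from the paper) builds a Toeplitz matrix T and checks that its Sylvester displacement ZT − TZ, with Z the down-shift matrix, has rank 2: α stays small while n grows, which is what the structured solvers exploit.

```python
import numpy as np

# Displacement rank of a Toeplitz matrix. T depends only on i - j, so
# shifting rows down (Z @ T) and columns left (T @ Z) cancels everywhere
# except the first row and last column, leaving a rank-2 displacement.
n = 6
t = np.arange(-(n - 1), n, dtype=float)      # the 2n-1 defining entries
T = np.array([[t[n - 1 + i - j] for j in range(n)] for i in range(n)])

Z = np.eye(n, k=-1)                          # down-shift matrix
D = Z @ T - T @ Z                            # Sylvester displacement of T
alpha = np.linalg.matrix_rank(D)
print(alpha)  # 2 for this (generic) Toeplitz matrix
```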

In this paper, we study the problem of computing an LSP-decomposition of a matrix over a field. This decomposition is an extension to arbitrary matrices of the well-known LUP-decomposition of full row-rank matrices. We present three different ways of computing an LSP-decomposition, each of which is both rank-sensitive and based on matrix multiplication. In each…
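For orientation, here is a minimal NumPy sketch of the classical LUP-decomposition (A = LUP with L unit lower triangular, U upper triangular and P a column permutation) that LSP generalizes. This is textbook cubic elimination for a nonsingular input, not the paper's rank-sensitive algorithms, and the function name `lup` is ours.

```python
import numpy as np

# Classical LUP-decomposition: A = L @ U @ P for a nonsingular square A.
# Column pivoting: at step k we swap a suitable column into position k,
# recording the swaps in perm, then eliminate below the diagonal.
def lup(A):
    A = A.astype(float).copy()
    n = A.shape[0]
    L = np.eye(n)
    perm = list(range(n))
    for k in range(n):
        j = k + int(np.argmax(np.abs(A[k, k:])))   # pivot column for row k
        A[:, [k, j]] = A[:, [j, k]]
        perm[k], perm[j] = perm[j], perm[k]
        for i in range(k + 1, n):
            L[i, k] = A[i, k] / A[k, k]
            A[i, k:] -= L[i, k] * A[k, k:]
    P = np.eye(n)[perm]       # A @ P^T = L @ U, hence A = L @ U @ P
    return L, np.triu(A), P

A = np.array([[0.0, 2.0, 1.0], [1.0, 1.0, 0.0], [2.0, 0.0, 1.0]])
L, U, P = lup(A)
assert np.allclose(L @ U @ P, A)
```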

Transforming a matrix over a field to echelon form, or decomposing the matrix as a product of structured matrices that reveal the rank profile, is a fundamental building block of computational exact linear algebra. This paper surveys the well-known variations of such decompositions and transformations that have been proposed in the literature. We present an…
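As a small exact-arithmetic illustration of the terms involved, the sketch below (our own, not from the survey) reduces a rational matrix to row echelon form by plain Gaussian elimination and reads off its column rank profile, i.e. the indices of the pivot columns.

```python
from fractions import Fraction

# Row echelon form over Q with exact arithmetic; returns the reduced rows
# and the column rank profile (positions of the pivot columns). This is
# the O(n^3) textbook method, not the fast decompositions in the survey.
def echelon_rank_profile(rows):
    A = [[Fraction(x) for x in row] for row in rows]
    m, n = len(A), len(A[0])
    pivots, r = [], 0
    for j in range(n):
        p = next((i for i in range(r, m) if A[i][j] != 0), None)
        if p is None:
            continue                      # no pivot in this column
        A[r], A[p] = A[p], A[r]           # bring pivot row up
        for i in range(r + 1, m):
            f = A[i][j] / A[r][j]
            A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        pivots.append(j)
        r += 1
    return A, pivots

E, prof = echelon_rank_profile([[1, 2, 3], [2, 4, 7], [1, 2, 4]])
print(prof)  # [0, 2]: rank 2, with pivots in columns 0 and 2
```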

Given two floating-point vectors x, y of dimension n and assuming rounding to nearest, we show that if no underflow or overflow occurs, any evaluation order for the inner product returns a floating-point number r such that |r − x<sup>T</sup>y| ≤ nu|x|<sup>T</sup>|y|, with u the unit roundoff. This result, which holds for any radix and with no restriction on n, can be seen as a…
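The bound is easy to check numerically. The sketch below (our illustration, not the paper's proof) compares one double-precision inner product against an exact rational evaluation, exploiting the fact that every double is an exact rational, and verifies |r − x^T y| ≤ nu|x|^T|y| with u = 2^−53.

```python
from fractions import Fraction
import numpy as np

# Check |r - x^T y| <= n*u*|x|^T*|y| for one binary64 evaluation order,
# with u = 2^-53 the unit roundoff. Fraction arithmetic on doubles is
# exact, so `exact` below is the true inner product.
rng = np.random.default_rng(0)
n = 1000
u = Fraction(1, 2**53)

x = [float(v) for v in rng.standard_normal(n)]
y = [float(v) for v in rng.standard_normal(n)]

r = float(np.dot(x, y))                  # rounded inner product
exact = sum(Fraction(a) * Fraction(b) for a, b in zip(x, y))
abs_sum = sum(abs(Fraction(a) * Fraction(b)) for a, b in zip(x, y))

err = abs(Fraction(r) - exact)
assert err <= n * u * abs_sum            # the stated bound holds here
```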

In this paper we show how to reduce the computation of correctly-rounded square roots of binary floating-point data to the fixed-point evaluation of some particular integer polynomials in two variables. By designing parallel and accurate evaluation schemes for such bivariate polynomials, we show further that this approach allows for high instruction-level…

In this article, we study square matrices perturbed by a parameter ε. An efficient algorithm computing the ε-expansion of the eigenvalues in formal Laurent-Puiseux series is provided, for which the computation of the characteristic polynomial is not required. We show how to reduce the initial matrix so that the Lidskii-Edelman-Ma…

This paper presents some work in progress on fast and accurate floating-point arithmetic software for ST200-based embedded systems. We show how to use some key architectural features to design codes that achieve correct rounding to nearest without sacrificing efficiency. This is illustrated with the square root function, whose implementation given…