François-Henry Rouet

The inverse of an irreducible sparse matrix is structurally full, so it is impractical to compute or store it. However, several applications require only a subset of the entries of the inverse. Given a factorization of the sparse matrix held in out-of-core storage, we show how to compute such a subset efficiently, by accessing…
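A minimal sketch of the underlying observation, not the paper's out-of-core algorithm (which exploits sparsity in the right-hand sides and the access pattern to the factors): entry (i, j) of the inverse is the i-th component of the solution of A x = e_j, so selected entries can be obtained from a sparse factorization without ever forming the full inverse. The function and matrix below are illustrative only.

```python
# Selected entries of A^{-1} from a sparse LU factorization, using
# (A^{-1})_{ij} = (A^{-1} e_j)_i. Illustration only.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def selected_inverse_entries(A, entries):
    """Return {(i, j): (A^{-1})_{ij}} for the requested (i, j) pairs."""
    lu = splu(A.tocsc())                  # sparse LU factorization of A
    result = {}
    for j in sorted({j for _, j in entries}):
        e_j = np.zeros(A.shape[0])
        e_j[j] = 1.0
        x = lu.solve(e_j)                 # column j of the inverse
        for i, jj in entries:
            if jj == j:
                result[(i, j)] = x[i]
    return result

A = sp.diags([[-1.0] * 4, [2.0] * 5, [-1.0] * 4], [-1, 0, 1], format="csc")
print(selected_inverse_entries(A, [(0, 0), (2, 4)]))
```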
We consider the solution of sparse linear systems with direct methods based on LU factorization. Unless the matrix is positive definite, numerical pivoting is usually needed to ensure stability, and it is costly to implement, especially in the sparse case. The Random Butterfly Transformations (RBT) technique provides an alternative to pivoting and is easily…
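A hedged sketch of a depth-one random butterfly transformation, assuming the standard butterfly form B = (1/sqrt(2)) [[R0, R1], [R0, -R1]] with R0 and R1 random diagonal matrices; the two-sided transform U^T A V is then factored without pivoting. The random scalings used here are simply kept away from zero and are not the particular distributions used in the published variants.

```python
# Depth-one Random Butterfly Transformation (RBT), illustration only.
import numpy as np

def random_butterfly(n, rng):
    """Depth-one n x n butterfly matrix (n must be even)."""
    r0 = rng.uniform(0.5, 1.5, n // 2)    # random scalings kept away from zero
    r1 = rng.uniform(0.5, 1.5, n // 2)
    top = np.hstack([np.diag(r0), np.diag(r1)])
    bot = np.hstack([np.diag(r0), -np.diag(r1)])
    return np.vstack([top, bot]) / np.sqrt(2.0)

rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((n, n))
U, V = random_butterfly(n, rng), random_butterfly(n, rng)
A_rbt = U.T @ A @ V                       # randomized two-sided transformation
# A_rbt can now be factored by LU without pivoting (with high probability).
```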
We present a distributed-memory library for computations with dense structured matrices. A matrix is considered structured if its off-diagonal blocks can be approximated by rank-deficient matrices of low numerical rank. Here, we use Hierarchically Semi-Separable (HSS) representations. Such matrices appear in many applications, for example finite-element…
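A minimal illustration of the low-rank idea behind such representations: a single off-diagonal block is replaced by a truncated SVD at its numerical rank. The library itself builds a full hierarchical (HSS) representation in a distributed-memory setting, which is not reproduced here; the kernel block below is only a toy example.

```python
# Compress one off-diagonal block to its numerical rank. Illustration only.
import numpy as np

def compress_block(B, tol=1e-8):
    """Return U, V such that B ~= U @ V, with rank set by a relative tolerance."""
    u, s, vt = np.linalg.svd(B, full_matrices=False)
    k = int(np.sum(s > tol * s[0]))       # numerical rank at tolerance tol
    return u[:, :k] * s[:k], vt[:k, :]

# A smooth kernel evaluated on two well-separated clusters yields a block of
# low numerical rank, as in many finite-element / integral-equation problems.
x = np.linspace(0.0, 1.0, 200)
y = np.linspace(5.0, 6.0, 200)
B = 1.0 / np.abs(x[:, None] - y[None, :])
U, V = compress_block(B)
print(U.shape[1], np.linalg.norm(B - U @ V) / np.linalg.norm(B))
```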
Hypergraph and graph partitioning tools are used to partition work for efficient parallelization of many sparse matrix computations. Most of the time, the objective function minimized by these tools relates to reducing the communication requirements, while the balancing constraints they satisfy relate to balancing the work or memory…
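A hedged illustration of the usual objective and constraint, not tied to any particular tool: for a row partition of a sparse matrix in the column-net hypergraph model, the objective is the connectivity-minus-one ("lambda - 1") cut, a proxy for communication volume, and the constraint is balance of per-part work, measured here in nonzeros.

```python
# Connectivity-minus-one cut and load imbalance for a row partition.
import numpy as np
import scipy.sparse as sp

def cut_and_balance(A, part):
    """Return (cut, imbalance) for a row partition `part` of sparse matrix A."""
    csc, csr = A.tocsc(), A.tocsr()
    cut = 0
    for j in range(A.shape[1]):
        rows = csc.indices[csc.indptr[j]:csc.indptr[j + 1]]
        lam = len(set(part[rows]))        # parts touched by column (net) j
        cut += max(lam - 1, 0)
    nnz_per_row = np.diff(csr.indptr)     # work of each row (vertex weight)
    loads = np.bincount(part, weights=nnz_per_row, minlength=part.max() + 1)
    imbalance = loads.max() / loads.mean() - 1.0
    return cut, imbalance

A = sp.random(100, 100, density=0.05, format="csr", random_state=0)
part = np.arange(100) // 50               # two parts: first/second half of rows
print(cut_and_balance(A, part))
```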
We focus on memory scalability issues in multifrontal solvers like MUMPS. We illustrate why commonly used mapping strategies (e.g., a proportional mapping) cannot achieve a high memory efficiency. We propose a class of "memory-aware" algorithms that aim to maximize performance under memory constraints. These algorithms provide both accurate memory…
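A sketch of the proportional mapping referred to above, for context only: the processes assigned to a tree node are split among its children in proportion to the children's subtree workloads. The memory-aware algorithms of the paper additionally constrain this split by a per-process memory budget, which is not modeled here.

```python
# Proportional mapping of processes over an elimination tree. Illustration only.
def proportional_mapping(tree, weights, node, procs):
    """tree: node -> list of children; weights: subtree workload per node;
    procs: list of process ranks assigned to `node`. Returns node -> procs."""
    mapping = {node: procs}
    children = tree.get(node, [])
    if not children:
        return mapping
    if len(procs) == 1:                   # small subtrees stay on one process
        for c in children:
            mapping.update(proportional_mapping(tree, weights, c, procs))
        return mapping
    total = sum(weights[c] for c in children)
    offsets, acc = [0], 0.0
    for c in children:                    # split procs proportionally to weight
        acc += weights[c]
        offsets.append(round(len(procs) * acc / total))
    for k, c in enumerate(children):
        lo = offsets[k]
        hi = min(max(offsets[k + 1], lo + 1), len(procs))
        child_procs = procs[lo:hi] or procs[:1]
        mapping.update(proportional_mapping(tree, weights, c, child_procs))
    return mapping

tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": []}
weights = {"root": 10, "a": 7, "b": 3, "a1": 4, "a2": 3}
print(proportional_mapping(tree, weights, "root", list(range(8))))
```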
We present a sparse linear system solver that is based on a multifrontal variant of Gaussian elimination and exploits low-rank approximation of the resulting dense frontal matrices. We use hierarchically semiseparable (HSS) matrices, which have low-rank off-diagonal blocks, to approximate the frontal matrices. For HSS matrix construction, a randomized…
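A hedged sketch of the randomized sampling idea used for compression: a block of low numerical rank is captured by multiplying it with a small random matrix and orthogonalizing the sample (a Halko–Martinsson–Tropp-style range finder). The paper uses such sampling inside a full HSS construction of the frontal matrices, which is not reproduced here.

```python
# Randomized range finder for a low-rank block. Illustration only.
import numpy as np

def randomized_range(B, rank, oversample=10, seed=None):
    """Orthonormal basis Q with B ~= Q @ (Q.T @ B)."""
    rng = np.random.default_rng(seed)
    omega = rng.standard_normal((B.shape[1], rank + oversample))
    Y = B @ omega                         # only matrix products are needed
    Q, _ = np.linalg.qr(Y)
    return Q

rng = np.random.default_rng(1)
B = rng.standard_normal((500, 30)) @ rng.standard_normal((30, 500))  # rank 30
Q = randomized_range(B, rank=30, seed=2)
print(np.linalg.norm(B - Q @ (Q.T @ B)) / np.linalg.norm(B))
```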
We present a structured parallel geometry-based multifrontal sparse solver using hierarchically semiseparable (HSS) representations and exploiting the inherent low-rank structures. Parallel strategies for nested dissection ordering (taking low rankness into account), symbolic factorization, and structured numerical factorization are shown. In particular, we…
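A minimal illustration of geometry-based nested dissection ordering, here on a 1-D mesh only (the solver described above works on 2-D/3-D grids in parallel and takes low-rank structure into account): a separator splits the domain, the two halves are ordered recursively, and the separator is ordered last.

```python
# Nested dissection ordering of a 1-D mesh. Illustration only.
def nested_dissection_1d(lo, hi):
    """Elimination order of mesh points lo..hi-1, separators ordered last."""
    n = hi - lo
    if n <= 2:
        return list(range(lo, hi))
    mid = lo + n // 2                     # one-point separator
    return (nested_dissection_1d(lo, mid)
            + nested_dissection_1d(mid + 1, hi)
            + [mid])

print(nested_dissection_1d(0, 15))
```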
PDSLin is a general-purpose algebraic parallel hybrid (direct/iterative) linear solver based on the Schur complement method. The most challenging step of the solver is the computation of a preconditioner based on an approximate global Schur complement. We investigate two combinatorial problems to enhance PDSLin's performance at this step. The first is a…
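A sketch of the Schur complement at the heart of such hybrid solvers, assuming a 2x2 block splitting of the system with interior unknowns first: S = A22 - A21 inv(A11) A12, computed here exactly with a sparse LU of A11 and dense intermediates. PDSLin's approximate, parallel construction of the preconditioner is not reproduced; the matrices below are toy data.

```python
# Schur complement of a 2x2 block system. Illustration only.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def schur_complement(A11, A12, A21, A22):
    lu = splu(A11.tocsc())
    X = lu.solve(A12.toarray())           # A11^{-1} A12, dense for simplicity
    return A22.toarray() - A21 @ X

n, m = 50, 5
A11 = sp.diags([[-1.0] * (n - 1), [4.0] * n, [-1.0] * (n - 1)], [-1, 0, 1], format="csc")
A12 = sp.random(n, m, density=0.1, random_state=0, format="csr")
A21 = sp.random(m, n, density=0.1, random_state=1, format="csr")
A22 = sp.identity(m, format="csr") * 4.0
S = schur_complement(A11, A12, A21, A22)
print(S.shape)
```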
To solve sparse linear systems, multifrontal methods rely on dense partial LU decompositions of so-called frontal matrices; we consider a parallel, asynchronous setting in which several frontal matrices can be factored simultaneously. In this context, to address performance and scalability issues of acyclic pipelined asynchronous factorization kernels, we…
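A hedged sketch of the dense partial LU performed on a frontal matrix: only the leading fully-summed variables are eliminated, and the trailing block becomes the contribution block (Schur complement) passed to the parent front. The asynchronous, pipelined parallel kernels studied in the paper are not shown; the example below assumes pivots need no interchanges.

```python
# Partial LU of a frontal matrix, without pivoting. Illustration only.
import numpy as np

def partial_lu(F, npiv):
    """Eliminate the leading npiv pivots of a copy of F; return the updated
    front (L and U stored in its leading rows/columns) and the contribution
    block F[npiv:, npiv:]."""
    F = F.copy()
    for k in range(npiv):
        F[k + 1:, k] /= F[k, k]                               # column of L
        F[k + 1:, k + 1:] -= np.outer(F[k + 1:, k], F[k, k + 1:])  # rank-1 update
    return F, F[npiv:, npiv:]

rng = np.random.default_rng(0)
F = rng.standard_normal((6, 6)) + 6.0 * np.eye(6)             # well-conditioned front
F_fact, CB = partial_lu(F, npiv=3)
print(CB.shape)
```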