
- Patrick Amestoy, Iain S. Duff, Jean-Yves L'Excellent, Yves Robert, François-Henry Rouet, Bora Uçar
- SIAM J. Scientific Computing
- 2012

The inverse of an irreducible sparse matrix is structurally full, so that it is impractical to think of computing or storing it. However, there are several applications where a subset of the entries of the inverse is required. Given a factorization of the sparse matrix held in out-of-core storage, we show how to compute such a subset efficiently, by…
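The core observation can be illustrated with a toy sketch (this is not the paper's out-of-core algorithm, just the basic idea): given one sparse factorization, each requested entry (i, j) of A⁻¹ is obtained by solving against the unit vector e_j and reading off component i, so only the needed columns of the inverse are ever formed. The matrix and the requested entries below are hypothetical.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# A small sparse SPD matrix (hypothetical example).
A = sp.csc_matrix(np.array([[4.0, 1.0, 0.0],
                            [1.0, 3.0, 1.0],
                            [0.0, 1.0, 2.0]]))
lu = spla.splu(A)          # factor once, reuse for every requested entry

wanted = [(0, 0), (2, 1)]  # entries (i, j) of A^{-1} that are actually needed
cols = {j for _, j in wanted}
# One triangular-solve pair per needed column of the inverse:
inv_cols = {j: lu.solve(np.eye(3)[:, j]) for j in cols}
subset = {(i, j): inv_cols[j][i] for (i, j) in wanted}
```

The full inverse (here dense 3×3, but structurally full for any irreducible sparse matrix) is never formed; work scales with the number of distinct columns requested.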

- François-Henry Rouet, Xiaoye S. Li, Pieter Ghysels, Artem Napov
- ACM Trans. Math. Softw.
- 2016

In this report, we replicate a subset of the performance results in the article “A distributed-memory package for dense Hierarchically Semi-Separable matrix computations using randomization.”

- Marc Baboulin, Xiaoye S. Li, François-Henry Rouet
- VECPAR
- 2014

HAL is a multidisciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

- Shen Wang, Xiaoye S. Li, François-Henry Rouet, Jianlin Xia, Maarten V. de Hoop
- ACM Trans. Math. Softw.
- 2016

We present a structured parallel geometry-based multifrontal sparse solver using hierarchically semiseparable (HSS) representations and exploiting the inherent low-rank structures. Parallel strategies for nested dissection ordering (taking low rankness into account), symbolic factorization, and structured numerical factorization are shown. In particular, we…

- Kamer Kaya, François-Henry Rouet, Bora Uçar
- Euro-Par Workshops
- 2011

Hypergraph and graph partitioning tools are used to partition work for efficient parallelization of many sparse matrix computations. Most of the time, the objective function that is reduced by these tools relates to reducing the communication requirements, and the balancing constraints satisfied by these tools relate to balancing the work or memory…

- Pieter Ghysels, Xiaoye S. Li, François-Henry Rouet, Samuel Williams, Artem Napov
- SIAM J. Scientific Computing
- 2016

We present a sparse linear system solver that is based on a multifrontal variant of Gaussian elimination, and exploits low-rank approximation of the resulting dense frontal matrices. We use hierarchically semiseparable (HSS) matrices, which have low-rank off-diagonal blocks, to approximate the frontal matrices. For HSS matrix construction, a randomized…
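The property being exploited can be demonstrated in a few lines (a toy illustration of low-rank off-diagonal blocks, not the solver's randomized HSS construction): for a matrix sampled from a smooth kernel, an off-diagonal block has rapidly decaying singular values, so a truncated SVD compresses it with negligible error. The kernel and tolerance below are assumptions for the demo.

```python
import numpy as np

n = 64
x = np.linspace(0.0, 1.0, n)
# Matrix from a smooth kernel (hypothetical example); its off-diagonal
# blocks are numerically low-rank even though the matrix is dense.
K = 1.0 / (1.0 + np.abs(x[:, None] - x[None, :]))

B = K[: n // 2, n // 2 :]            # an off-diagonal block (32 x 32)
U, s, Vt = np.linalg.svd(B, full_matrices=False)
r = int(np.sum(s > 1e-8 * s[0]))     # numerical rank at relative tolerance 1e-8
B_r = (U[:, :r] * s[:r]) @ Vt[:r, :] # rank-r approximation of the block
err = np.linalg.norm(B - B_r) / np.linalg.norm(B)
```

Storing the factors U, s, Vt at rank r costs O(nr) instead of O(n²) for the block, which is the saving HSS representations apply recursively across all off-diagonal blocks.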

- Emmanuel Agullo, Patrick R. Amestoy, Alfredo Buttari, Abdou Guermouche, Jean-Yves L'Excellent, François-Henry Rouet
- SIAM J. Scientific Computing
- 2016

We focus on memory scalability issues in multifrontal solvers like MUMPS. We illustrate why commonly used mapping strategies (e.g., a proportional mapping) cannot achieve a high memory efficiency. We propose a class of "memory-aware" algorithms that aim at maximizing performance under memory constraints. These algorithms provide both accurate memory…

- Patrick Amestoy, Iain S. Duff, Jean-Yves L'Excellent, François-Henry Rouet
- SIAM J. Scientific Computing
- 2015

To solve sparse linear systems, multifrontal methods rely on dense partial LU decompositions of so-called frontal matrices; we consider a parallel, asynchronous setting in which several frontal matrices can be factored simultaneously. In this context, to address performance and scalability issues of acyclic pipelined asynchronous factorization kernels, we…

- Emmanuel Agullo, Patrick R. Amestoy, +10 authors Ichitaro Yamazaki
- 2014

Direct methods for the solution of sparse systems of linear equations of the form Ax = b are used in a wide range of numerical simulation applications. Such methods are based on the decomposition of the matrix into a product of triangular factors (e.g., A = LU), followed by triangular solves. They are known for their numerical accuracy and robustness but…
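The factor-then-solve pattern described above can be sketched with SciPy's sparse LU interface (a minimal illustration, not the distributed solvers the report benchmarks); the matrix and right-hand side are hypothetical.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Sparse system Ax = b (hypothetical example).
A = sp.csc_matrix([[4.0, 1.0, 0.0],
                   [1.0, 3.0, 1.0],
                   [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])

lu = spla.splu(A)  # decomposition into triangular factors (with pivoting/reordering)
x = lu.solve(b)    # forward and backward triangular solves
```

Once `A` is factored, additional right-hand sides are solved by repeating only the cheap triangular-solve step, which is why direct methods amortize well over many solves.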