A Parallel Sparse Direct Solver via Hierarchical DAG Scheduling
Parallel programming is commonly done through a library approach, as in the Message Passing Interface (MPI); through directives, as in OpenMP; through language extensions, as in High Performance Fortran (HPF); or through entirely new languages, as in Chapel. However, we argue that the concepts underlying these different programming systems have much in common. Hence, we propose a Domain-Specific Language (DSL) that expresses an abstraction of these shared concepts. As we show with a prototype that uses both MPI and OpenMP tasks as backends, this common vocabulary can then be translated into multiple types of parallelism.
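To illustrate the idea of one abstract vocabulary with interchangeable backends, here is a minimal sketch. It is not the prototype described above (which targets MPI and OpenMP tasks in compiled code); the names `ParallelMap`, `run_sequential`, and `run_threaded` are hypothetical, and Python threads merely stand in for a task-based backend.

```python
# Hypothetical sketch, not the paper's DSL: one backend-agnostic object
# captures the common concept "apply f to every index", and separate
# backends lower it to different kinds of parallelism.
from concurrent.futures import ThreadPoolExecutor

class ParallelMap:
    """Abstract 'apply f to each index in 0..n-1' concept."""
    def __init__(self, f, n):
        self.f, self.n = f, n

def run_sequential(pm):
    # Trivial backend: an ordinary loop.
    return [pm.f(i) for i in range(pm.n)]

def run_threaded(pm, workers=4):
    # Thread-pool backend, standing in for a task-based runtime.
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(pm.f, range(pm.n)))

pm = ParallelMap(lambda i: i * i, 8)
# Both backends realize the same abstract program.
assert run_sequential(pm) == run_threaded(pm)
```

The point mirrors the abstract's claim: the program is written once against the common vocabulary, and the choice of parallelism type is deferred to the backend.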