Apostolos Gerasoulis

Clustering of task graphs has been used as an intermediate step toward scheduling on parallel architectures. In this paper, we identify important characteristics of clustering algorithms and propose a general framework for analyzing and evaluating such algorithms. Using this framework, we present an analytic performance comparison of four algorithms: Dominant …
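As a minimal illustration of the quantity such clustering algorithms try to reduce (not the framework or any algorithm from the paper), the sketch below computes the parallel time of a clustered DAG under the usual model in which an edge incurs its communication cost only when its endpoints fall in different clusters, and tasks mapped to the same cluster execute sequentially. The task graph, weights, and cluster assignments are invented for illustration.

```python
from collections import defaultdict

def parallel_time(tasks, edges, cluster):
    """Parallel time (makespan) of a clustered DAG schedule.

    tasks   : dict node -> computation cost
    edges   : dict (u, v) -> communication cost (u precedes v)
    cluster : dict node -> cluster label; edges inside a cluster cost 0
    Tasks in the same cluster run sequentially, in the (greedy) order
    in which they become ready.
    """
    succ = defaultdict(list)
    indeg = {v: 0 for v in tasks}
    for (u, v) in edges:
        succ[u].append(v)
        indeg[v] += 1

    finish = {}                          # finish time of each task
    cluster_free = defaultdict(float)    # time each cluster's processor frees up
    ready = [v for v in tasks if indeg[v] == 0]
    while ready:
        v = ready.pop(0)
        # Data-ready time: predecessors' results, plus communication delay
        # whenever the predecessor lives in a different cluster.
        data_ready = max(
            (finish[u] + (0 if cluster[u] == cluster[v] else edges[(u, v)])
             for u in tasks if (u, v) in edges),
            default=0.0,
        )
        start = max(data_ready, cluster_free[cluster[v]])
        finish[v] = start + tasks[v]
        cluster_free[cluster[v]] = finish[v]
        for s in succ[v]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return max(finish.values())

# Hypothetical fork-join DAG with heavy communication costs:
tasks = {"a": 1, "b": 2, "c": 2, "d": 1}
edges = {("a", "b"): 4, ("a", "c"): 4, ("b", "d"): 4, ("c", "d"): 4}
print(parallel_time(tasks, edges, {"a": 0, "b": 1, "c": 2, "d": 3}))  # every task alone: 12
print(parallel_time(tasks, edges, {"a": 0, "b": 0, "c": 0, "d": 0}))  # all tasks merged: 6
```

In this contrived example communication dominates computation, so merging all tasks into one cluster halves the parallel time; comparing clusterings by this measure is the kind of evaluation the framework is meant to support.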
We describe a parallel programming tool for scheduling static task graphs and generating the appropriate target code for message-passing MIMD architectures. The computational complexity of the system is almost linear in the size of the task graph, and preliminary experiments show performance comparable to the “best” hand-written programs.
How often does the search engine of your choice produce results that are less than satisfying, generating endless links to irrelevant pages even though those pages may contain the query keywords? How often are you given pages that tell you things you already know? While search engines and related tools continue to make improvements in their information …
This paper addresses the problem of scheduling iterative task graphs on distributed memory architectures with nonzero communication overhead. The proposed algorithm incorporates techniques of software pipelining, graph unfolding, and directed acyclic graph (DAG) scheduling. The goal of optimization is to minimize overall parallel time, which is achieved by …
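As a hedged sketch of the unfolding step mentioned above (not the paper's algorithm), the code below replicates the body of an iterative task graph a fixed number of times and rewires the inter-iteration dependences, which is the standard way unfolding exposes parallelism across iterations for software pipelining. The loop body and dependence delays are hypothetical.

```python
def unfold(nodes, edges, k):
    """Unfold an iterative task graph k times into a DAG.

    nodes : iterable of task names for one iteration
    edges : list of (u, v, delay) dependences; delay 0 means "within the
            same iteration", delay d > 0 means "v of iteration i depends
            on u of iteration i - d"
    Returns the nodes and edges of the unfolded DAG, where task t of
    iteration i is named (t, i).  Dependences whose source iteration
    falls before iteration 0 come from outside the window and are dropped.
    """
    unfolded_nodes = [(t, i) for i in range(k) for t in nodes]
    unfolded_edges = []
    for i in range(k):
        for u, v, delay in edges:
            src_iter = i - delay
            if src_iter >= 0:
                unfolded_edges.append(((u, src_iter), (v, i)))
    return unfolded_nodes, unfolded_edges

# Hypothetical two-task loop body: b(i) needs a(i), and a(i) needs b(i-1).
nodes = ["a", "b"]
edges = [("a", "b", 0), ("b", "a", 1)]
print(unfold(nodes, edges, 3))
```

The unfolded DAG can then be handed to a DAG scheduler, which is how unfolding combines with the DAG scheduling component the abstract mentions.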
Clustering is a mapping of the nodes of a task graph onto labeled clusters. We present a unified framework for clustering of directed acyclic graphs (DAGs). Several clustering algorithms from the literature are compared using this framework. For coarse grain DAGs two interesting properties are presented. For every nonlinear clustering there exists a linear …
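Assuming the usual definition that a clustering is linear when no cluster contains two independent tasks (every pair of tasks in the same cluster is connected by a directed path), the sketch below checks that property for a small hypothetical DAG. It is an illustration of the terminology only, not the construction from the paper.

```python
from itertools import combinations

def is_linear(nodes, edges, cluster):
    """Check whether a clustering of a DAG is linear.

    nodes   : list of tasks
    edges   : list of (u, v) precedence pairs
    cluster : dict task -> cluster label
    Linear means no two independent tasks share a cluster.
    """
    # Reachability by repeated relaxation (fine for small illustrative DAGs).
    reach = {u: {u} for u in nodes}
    changed = True
    while changed:
        changed = False
        for u, v in edges:
            new = reach[v] - reach[u]
            if new:
                reach[u] |= new
                changed = True
    for x, y in combinations(nodes, 2):
        if cluster[x] == cluster[y] and y not in reach[x] and x not in reach[y]:
            return False        # two independent tasks share a cluster
    return True

# Hypothetical fork-join DAG a -> {b, c} -> d:
nodes = ["a", "b", "c", "d"]
edges = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]
print(is_linear(nodes, edges, {"a": 0, "b": 0, "c": 1, "d": 0}))  # True: {a, b, d} lie on one path
print(is_linear(nodes, edges, {"a": 0, "b": 0, "c": 0, "d": 0}))  # False: b and c are independent
```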