Several compile-time transformations of loops with simple dependencies have been developed in order to expose possible parallelism in these loops. However, once an irregular data dependence is detected, usually no attempt is made to extract any parallel thread from the loop. In this paper, we present parallel region execution, a new compile-time …
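As a minimal sketch of the kind of simple-dependence transformation this abstract alludes to (our own illustration, not the paper's parallel-region technique), loop distribution splits a fused loop so that the statement carrying no cross-iteration dependence becomes a fully parallel (DOALL) loop, while the recurrence stays serial. All names and values below are illustrative.

```python
# Hedged sketch: loop distribution, a classic compile-time transformation
# for loops with simple dependences. S1 carries a recurrence through
# a[i-1]; S2 is independent across iterations.

N = 8
a = [1.0] * (N + 1)
b = [0.0] * (N + 1)
c = [2.0] * (N + 1)

# Original fused loop:
#   for i in 1..N:
#       S1: a[i] = a[i-1] + 1.0   # cross-iteration dependence
#       S2: b[i] = c[i] * 2.0     # no cross-iteration dependence

# After distribution:
for i in range(1, N + 1):   # serial: keeps the a[i-1] recurrence
    a[i] = a[i - 1] + 1.0
for i in range(1, N + 1):   # DOALL: iterations are now independent
    b[i] = c[i] * 2.0
```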
This paper is concerned with multiprocessor implementations of embedded applications specified as iterative dataflow programs, in which synchronization overhead tends to be significant. We develop techniques to alleviate this overhead by determining a minimal set of processor synchronizations that are essential for correct execution. Our study is based in …
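As a hedged sketch of the underlying idea (not the paper's exact algorithm), the code below drops a synchronization edge whenever some other chain of synchronizations already enforces the same ordering between processors; dataflow delays, which the paper's iterative setting must account for, are ignored here for brevity.

```python
# Hedged sketch: a synchronization edge (u, v) is redundant when an
# alternative u->v path of sync edges already enforces the ordering.
from collections import defaultdict

def redundant_syncs(edges):
    """Return sync edges (u, v) implied by an alternative u->v path."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)

    def reachable(src, dst, skip):
        stack, seen = [src], {src}
        while stack:
            n = stack.pop()
            for m in adj[n]:
                if (n, m) == skip or m in seen:
                    continue
                if m == dst:
                    return True
                seen.add(m)
                stack.append(m)
        return False

    return [(u, v) for u, v in edges if reachable(u, v, skip=(u, v))]

# Example: syncs A->B and B->C make the direct sync A->C redundant.
print(redundant_syncs([("A", "B"), ("B", "C"), ("A", "C")]))  # [('A', 'C')]
```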
Traditional dependence analysis techniques usually attempt to recognize the existence of dependencies between iterations of a loop and, in some cases, characterize these dependencies by finding direction vectors or distance vectors. In this paper, a more general form of data dependence called hyperplane dependence is introduced. It is a dependence whose …
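To make the classical case concrete, here is a small example (our own names, not the paper's) of the constant distance vector that hyperplane dependence generalizes: for a nest writing A[i+1][j-2] and reading A[i][j], equating the write and read subscripts gives the distance vector (1, -2) and direction vector (<, >).

```python
# Hedged illustration: constant distance vector for the nest
#     for i: for j: A[i + 1][j - 2] = A[i][j]
# assuming each subscript is the loop index plus a constant offset.

def distance_vector(write_offsets, read_offsets):
    """Distance between the writing and the reading iteration."""
    return tuple(w - r for w, r in zip(write_offsets, read_offsets))

d = distance_vector(write_offsets=(1, -2), read_offsets=(0, 0))
print(d)                                                          # (1, -2)
print(tuple("<" if x > 0 else ">" if x < 0 else "=" for x in d))  # ('<', '>')
```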
Performance information is essential to the design of efficient parallel programs. Whether the programmer has total control over determining the parallel execution of processes or whether some automatic means are used to parallelize serial or partially parallel code, the cost of the overhead due to parallelization must be known. This is especially important …
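The following is a rough, hedged sketch (not the paper's methodology) of measuring parallelization overhead directly: run the same work serially and in parallel, then attribute any time beyond the ideal speedup to overhead. Under CPython's global interpreter lock the threaded version exaggerates this overhead, which makes the gap easy to observe.

```python
# Hedged sketch: estimate parallelization overhead by comparing measured
# parallel time against the ideal serial_time / workers.
import time
from concurrent.futures import ThreadPoolExecutor

def work(n):
    s = 0
    for k in range(n):
        s += k * k
    return s

CHUNK, PARTS = 200_000, 4

t0 = time.perf_counter()
serial = sum(work(CHUNK) for _ in range(PARTS))
t_serial = time.perf_counter() - t0

t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=PARTS) as ex:
    parallel = sum(ex.map(work, [CHUNK] * PARTS))
t_parallel = time.perf_counter() - t0

# With PARTS workers, the ideal parallel time is t_serial / PARTS;
# anything beyond that is scheduling/communication overhead.
print(f"serial {t_serial:.3f}s, parallel {t_parallel:.3f}s, "
      f"overhead ~ {t_parallel - t_serial / PARTS:.3f}s")
```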