
- Barbara Kreaseck, Larry Carter, Henri Casanova, Jeanne Ferrante
- IPDPS
- 2003

In this paper we investigate protocols for scheduling applications that consist of large numbers of identical, independent tasks on large-scale computing platforms. By imposing a tree structure on an overlay network of computing nodes, our previous work showed that it is possible to compute the schedule which leads to the optimal steady-state task…

- Michelle Mills Strout, Barbara Kreaseck, Paul D. Hovland
- 2006 International Conference on Parallel…
- 2006

Message passing via MPI is widely used in single-program, multiple-data (SPMD) parallel programs. Existing data-flow frameworks do not model the semantics of message-passing SPMD programs, which can result in less precise and even incorrect analysis results. We present a data-flow analysis framework for performing interprocedural analysis of message-passing…

- Michelle Mills Strout, Larry Carter, Jeanne Ferrante, Barbara Kreaseck
- IJHPCA
- 2004

In modern computers, a program’s data locality can affect performance significantly. This paper details full sparse tiling, a run-time reordering transformation that improves the data locality for stationary iterative methods such as Gauss–Seidel operating on sparse matrices. In scientific applications such as finite element analysis, these iterative…
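The Gauss–Seidel smoother this abstract refers to can be illustrated with a minimal sketch: a plain sweep over a sparse matrix stored in CSR form. The function name and CSR layout here are illustrative, not taken from the paper; full sparse tiling reorders the iteration space of exactly this kind of loop nest to improve locality.

```python
def gauss_seidel_csr(indptr, indices, data, b, x, sweeps=1):
    """Run Gauss-Seidel sweeps over a sparse matrix in CSR form.

    Each unknown x[i] is updated in place using the latest values of
    its neighbors; this in-place dependence is why reordering the
    iteration space (tiling) affects both data locality and the order
    in which updates propagate.
    """
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):
            diag = 0.0
            acc = b[i]
            for k in range(indptr[i], indptr[i + 1]):
                j = indices[k]
                if j == i:
                    diag = data[k]       # remember the diagonal entry
                else:
                    acc -= data[k] * x[j]  # uses already-updated x[j] when j < i
            x[i] = acc / diag
    return x
```

For a symmetric positive-definite system the sweep converges to the solution of Ax = b, which is what makes it useful as a multigrid smoother.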

Finite Element problems are often solved using multigrid techniques. The most time consuming part of multigrid is the iterative smoother, such as Gauss-Seidel. To improve performance, iterative smoothers can exploit parallelism, intra-iteration data reuse, and inter-iteration data reuse. Current methods for parallelizing Gauss-Seidel on irregular grids,…

- Barbara Kreaseck, Larry Carter, Henri Casanova, Jeanne Ferrante
- 18th International Parallel and Distributed…
- 2004

Overlapping communication with computation is a well-known technique to increase application performance. While it is commonly assumed that communication and computation can be overlapped at no cost, in reality, they do contend for resources and thus interfere with each other. Here we present an empirical quantification of the…

- Michelle Mills Strout, Alan LaMielle, Larry Carter, Jeanne Ferrante, Barbara Kreaseck, Catherine Mills Olschanowsky
- Parallel Computing
- 2016

Applications that manipulate sparse data structures contain memory reference patterns that are unknown at compile time due to indirect accesses such as A[B[i]]. To exploit parallelism and improve locality in such applications, prior work has developed a number of run-time reordering transformations (RTRTs). This paper presents the Sparse Polyhedral…
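The indirect access A[B[i]] mentioned in the abstract is typically handled with an inspector/executor pattern: at run time, an inspector examines the index array and derives a reordering, and an executor then runs the loop in the new order. The sketch below is illustrative only (the function names and the trivial grouping heuristic are assumptions, not the paper's algorithm).

```python
def inspector_sort_by_target(B):
    """Inspector: derive an iteration reordering from the index array B
    by grouping iterations that touch the same (or nearby) elements of
    A, so the executor's accesses to A become roughly consecutive."""
    return sorted(range(len(B)), key=lambda i: B[i])

def executor(A, B, order):
    """Executor: run the indirect-access loop A[B[i]] += 1 in the
    reordered iteration order computed by the inspector."""
    for i in order:
        A[B[i]] += 1
    return A
```

Because B is only known at run time, the inspector's cost must be amortized over repeated executions of the loop, which is why these transformations target iterative scientific codes.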

- Barbara Kreaseck, Luis Ramos, Scott Easterday, Michelle Mills Strout, Paul D. Hovland
- International Conference on Computational Science
- 2006

In forward mode Automatic Differentiation, the derivative program computes a function f and its derivatives, f′. Activity analysis is important for AD. Our results show that when all variables are active, the runtime checks required for dynamic activity analysis incur a significant overhead. However, when as few as half of the input variables are inactive,…
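Forward-mode AD, as described in the abstract, can be sketched with a minimal dual-number class that carries a value and its derivative together. This is a generic illustration, not the paper's implementation; the comment on the derivative field hints at what activity analysis buys, since an inactive variable's derivative is provably zero and its propagation can be skipped.

```python
class Dual:
    """Minimal forward-mode AD value: carries f and f' together.
    Activity analysis lets a tool avoid propagating .d for variables
    whose derivative is provably zero (inactive variables)."""
    def __init__(self, v, d=0.0):
        self.v, self.d = v, d
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.v + o.v, self.d + o.d)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.v * o.v, self.d * o.v + self.v * o.d)
    __rmul__ = __mul__

def f(x, y):
    return x * x + 3 * y   # df/dx = 2x

x = Dual(2.0, 1.0)   # seed dx/dx = 1: x is active
y = Dual(5.0, 0.0)   # derivative seed 0: y is inactive
out = f(x, y)        # out.v = 19.0, out.d = 4.0 (= 2x at x = 2)
```

Seeding y with a zero derivative still costs arithmetic on every operation involving y; static activity analysis would remove those operations entirely, which is the overhead the paper measures.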

- Barbara Kreaseck, Dean M. Tullsen, Brad Calder
- ISHPC
- 2000

Tomorrow's microprocessors will be able to handle multiple flows of control. Applications that exhibit task level parallelism (TLP) and can be decomposed into parallel tasks will perform well on these platforms. TLP arises when a task is independent of its neighboring code. Traditional parallel compilers exploit one variety of TLP, loop level parallelism…

Data-flow analyses that include some model of the data-flow between MPI sends and receives result in improved precision in the analysis results. One issue that arises with performing data-flow analyses on MPI programs is that the interprocedural control-flow graph (ICFG) is often irreducible due to call and return edges, and the MPI-ICFG adds further…

- Barbara Kreaseck, Larry Carter, Henri Casanova, Jeanne Ferrante, Sagnik Nandy
- IJHPCA
- 2006

Overlapping communication with computation is a well-known technique to increase application performance. While it is commonly assumed that communication and computation can be overlapped at no cost, in reality they interfere with each other. In this paper we empirically evaluate the interference rate of communication on computation via measurements on a…
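The overlap technique whose cost this paper measures can be sketched with plain Python threads standing in for asynchronous message passing (e.g. MPI_Isend/MPI_Irecv followed by MPI_Wait). Names and timings here are illustrative assumptions, not the paper's benchmark.

```python
import threading, time

def communicate(buf, done):
    # Stand-in for an asynchronous transfer that completes in the
    # background while the main thread keeps computing.
    time.sleep(0.05)
    done.set()

def compute(values):
    # Independent computation that can proceed while the transfer is
    # in flight. On real hardware the two still contend for memory
    # bandwidth and CPU cycles, which is the interference the paper
    # quantifies empirically.
    return sum(v * v for v in values)

done = threading.Event()
t = threading.Thread(target=communicate, args=([0] * 1024, done))
t.start()                        # post the "communication"
result = compute(range(1000))    # overlap computation with it
done.wait()                      # then wait for the transfer to finish
t.join()
```

In the idealized model the total time is max(communication, computation); the paper's point is that interference pushes real systems away from that bound.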