Flowback analysis is a powerful technique for debugging programs. It allows the programmer to examine dynamic dependences in a program's execution history without having to reexecute the program. The goal is to present to the programmer a graphical view of the dynamic program dependences. We are building a system, called PPD, that performs flowback analysis while keeping the execution-time overhead low. We also extend the semantics of flowback analysis to parallel programs. This paper describes details of the graphs and algorithms needed to implement efficient flowback analysis for parallel programs. Execution-time overhead is kept low by recording only a small amount of trace data during a program's execution. We use semantic analysis and a technique called incremental tracing to keep the time and space overhead low. As part of the semantic analysis, PPD uses a static program dependence graph structure that reduces the amount of work done at compile time and takes advantage of the dynamic information produced during execution time. Parallel programs have been accommodated in two ways. First, flowback dependences can span process boundaries; that is, the most recent modification to a variable might be traced to a different process than the one that contains the current reference. The static and dynamic program dependence graphs of the individual processes are tied together with synchronization and data dependence information to form complete graphs that represent the entire program. Second, our algorithms will detect potential data-race conditions in accesses to shared variables. The programmer can be directed to the cause of the race condition. PPD is currently being implemented for the C programming language on a Sequent Symmetry shared-memory multiprocessor.