Efficiently computing static single assignment form and the control dependence graph

@article{Cytron1991EfficientlyCS,
  title={Efficiently computing static single assignment form and the control dependence graph},
  author={Ronald Gary Cytron and Jeanne Ferrante and Barry K. Rosen and Mark N. Wegman and F. Kenneth Zadeck},
  journal={ACM Trans. Program. Lang. Syst.},
  year={1991},
  volume={13},
  pages={451-490}
}
In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and…
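The paper's central construction pairs dominance frontiers with a worklist over definition sites to decide where φ-functions are needed. The sketch below is a minimal Python rendering of that idea, not the authors' pseudocode: it assumes the immediate dominators (idom) are already computed (e.g. by the Lengauer-Tarjan algorithm), and the diamond-shaped example graph, node names, and helper names are illustrative only.

# Minimal sketch (hypothetical helpers, not the paper's pseudocode):
# dominance-frontier-based phi placement in the spirit of Cytron et al.
from collections import defaultdict

def dominance_frontiers(succs, idom):
    """DF(x) = nodes y such that x dominates a predecessor of y but does
    not strictly dominate y itself (the classic join-point criterion)."""
    preds = defaultdict(list)
    for n, out in succs.items():
        for s in out:
            preds[s].append(n)
    df = {n: set() for n in succs}
    for y in succs:
        if len(preds[y]) >= 2:                 # only join points appear in frontiers
            for p in preds[y]:
                runner = p
                while runner != idom[y]:       # walk up the dominator tree
                    df[runner].add(y)
                    runner = idom[runner]
    return df

def place_phis(defsites, df):
    """Worklist over the iterated dominance frontier: variable v needs a
    phi-function at every node in DF+(nodes that assign v)."""
    phis = defaultdict(set)                    # node -> variables needing a phi
    for v, sites in defsites.items():
        work, has_def = list(sites), set(sites)
        while work:
            x = work.pop()
            for y in df[x]:
                if v not in phis[y]:
                    phis[y].add(v)
                    if y not in has_def:       # the phi itself defines v
                        has_def.add(y)
                        work.append(y)
    return phis

# if (...) { x = 1 } else { x = 2 }; use(x)  --  x needs a phi at the join.
succs = {"entry": ["then", "else"], "then": ["join"], "else": ["join"], "join": []}
idom  = {"entry": None, "then": "entry", "else": "entry", "join": "entry"}
frontiers = dominance_frontiers(succs, idom)
print(dict(place_phis({"x": {"then", "else"}}, frontiers)))   # {'join': {'x'}}

On this diamond the two definition-carrying arms meet at the join node, so exactly one φ for x is placed there, which is the iterated-dominance-frontier criterion the paper makes efficient.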
Future value based single assignment program representations and optimizations
TLDR
This dissertation explores single assignment forms beyond SSA and presents two novel program representations, FGSA and RFPF, both built on the future value concept, which permits a consumer instruction to be encountered before the producer of its source operand in a control flow setting.
Transformation to Dynamic Single Assignment Using a Simple Data Flow Analysis
TLDR
This paper presents a novel method to construct a dynamic single assignment (DSA) form of array-intensive, pointer-free C programs, overcoming a number of important limitations of existing methods.
Efficiently Building the Gated Single Assignment Form in Codes with Pointers in Modern Optimizing Compilers
TLDR
A simple and fast GSA construction algorithm that takes advantage of the SSA-building infrastructure available in modern optimizing compilers is presented, and an implementation on top of the GIMPLE-SSA intermediate representation of GCC is described and evaluated.
Optimizing compilation with the value state dependence graph
TLDR
It is demonstrated how effective control flow can be reconstructed from just the dataflow information comprising the VSDG, and it is concluded that it is now practical to discard the control flow information rather than maintain it in parallel as is done in many previous approaches.
A practical dynamic single assignment transformation
TLDR
This paper presents a novel method to construct a dynamic single assignment (DSA) form of array-intensive, pointer-free C programs that scales very well with growing program sizes and overcomes a number of important limitations of existing methods.
Static Single Assignment Form for Explicitly Parallel Programs: Theory and Practice
To sensibly reason about parallel programs, a coherent intermediate form needs to be developed. We describe and prove correctness and safety of algorithms to convert programs that use the Parallel…
The program structure tree: computing control regions in linear time
TLDR
A linear-time algorithm for finding SESE regions and for building the PST of arbitrary control flow graphs (including irreducible ones) is given and it is shown how to use the algorithm to find control regions in linear time.
Utilizing the Value State Dependence Graph for Haskell
TLDR
The goal of this thesis is to make the Value State Dependence Graph applicable to Haskell by equipping the GHC compiler with a proof-of-concept back end that makes use of it.
Algorithms for computing the static single assignment form
TLDR
This article proposes a framework within which properties of the SSA form and φ-placement algorithms are derived, based on a new relation called merge which succinctly captures the structure of a program's control flow graph that is relevant to its SSA form.
Code size optimization for embedded processors
TLDR
This thesis develops the Value State Dependence Graph as a powerful intermediate form, shows how procedural abstraction can be advantageously applied to the VSDG, and presents a method for reducing code size by provisionally combining loads and stores before code generation.

References

SHOWING 1-10 OF 81 REFERENCES
The Program Dependence Graph and Its Use in Optimization
TLDR
An intermediate program representation, called the program dependence graph (PDG), summarizes not only the data dependences but also the control dependences of each operation, allowing transformations such as vectorization to be performed in a manner that is uniform for both data and control dependence.
Flow analysis and optimization of LISP-like structures
TLDR
Methods for determining the class of shapes which an unbounded data object may assume during execution of a LISP-like program are provided, and a number of uses to which that information may be put to improve storage allocation in compilers and interpreters for advanced programming languages are described.
The program dependence graph and its use in optimization
TLDR
An intermediate program representation, called the program dependence graph (PDG), makes explicit both the data and control dependences for each operation in a program, allowing transformations to be triggered by one another and applied only to the affected dependences.
Dependence analysis for pointer variables
TLDR
A family of algorithms is defined that compute safe approximations to the flow, output, and anti-dependences of a program written in a programming language with pointer variables, particularly for languages that manipulate heap-allocated storage.
Dependence analysis for subscripted variables and its application to program transformations
TLDR
The order in which the statements of a program are executed can affect its execution time, and this fact has been exploited by reordering code to improve its performance on a given machine.
Data Flow Analysis for Procedural Languages
TLDR
A language-independent formulation of the problem, an interprocedural data flow algorithm, and a proof that the algorithm is correct are included; several widespread assumptions are shown to become false or ambiguous in this setting.
Interprocedural data flow analysis in a programming environment
TLDR
This thesis examines three problems arising in the construction of an ambitious optimizing compiler based in a programming environment: the analysis of aliasing patterns, the computation of summary data flow information, and the assignment of linkage styles to call sites.
Code motion of control structures in high-level languages
TLDR
Although significant power is gained through the use of layered abstractions, object code quality suffers as fewer and fewer of a program's data structures and operations are exposed to the optimization phase of a compiler.
A portable machine-independent global optimizer--design and measurements
TLDR
This dissertation addresses the topic of portable and machine-independent program optimization on a standard, well-defined intermediate code and confirms the advantages of using portable machine-independent optimization in a retargetable compiler system.
Compiling C for vectorization, parallelization, and inline expansion
TLDR
The present paper discusses the application of a much-studied body of algorithms and techniques for vectorizing and optimizing Fortran to the problem of vectorizing and optimizing C, and gives insight into the strengths and weaknesses of the current theory, as well as into the strong and weak points of C on vector/parallel machines.