Induction Variable Analysis without Idiom Recognition: Beyond Monotonicity

@inproceedings{Wu2001InductionVA,
  title={Induction Variable Analysis without Idiom Recognition: Beyond Monotonicity},
  author={Peng Wu and Albert Cohen and David A. Padua},
  booktitle={LCPC},
  year={2001}
}
Traditional induction variable (IV) analyses focus on computing the closed form expressions of variables. This paper presents a new IV analysis based on a property called distance interval. This property captures the value changes of a variable along a given control-flow path of a program. Based on distance intervals, an efficient algorithm detects dependences for array accesses that involve induction variables. This paper describes how to compute distance intervals and how to compute closed… 
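
To make the property concrete, here is a small hypothetical C loop (not taken from the paper; function and variable names are invented) in which the induction variable k has no simple closed form; the comments sketch how bounding its per-iteration change with a distance interval could still rule out a dependence.

void update(double *a, const int *flag, int n)
{
    /* a must hold at least 2*n + 1 elements for this sketch. */
    int k = 0;                      /* induction variable with no closed form */
    for (int i = 0; i < n; i++) {
        if (flag[i])
            k = k + 2;              /* change of k along this path: +2 */
        else
            k = k + 1;              /* change of k along this path: +1 */
        /* The per-iteration change of k lies in the distance interval [1, 2],
         * so k takes a strictly larger value every iteration and the accesses
         * a[k] below never revisit an element: no loop-carried dependence on
         * a, even though k has no closed form.                              */
        a[k] = a[k] + 1.0;
    }
}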

Induction Variable Analysis with Delayed Abstractions

TLDR
The design of an induction variable analyzer suitable for the analysis of typed, low-level, three-address representations in SSA form is presented, with a new algorithm for recognizing scalar evolutions.

Scalable conditional induction variables (CIV) analysis

  • C. Oancea, L. Rauchwerger
  • Computer Science
    2015 IEEE/ACM International Symposium on Code Generation and Optimization (CGO)
  • 2015
TLDR
A flow-sensitive technique that summarizes both CIV-based and affine subscripts to program level using the same representation, and that is more powerful than previously reported dependence tests relying on the pairwise disambiguation of read-write references.

Analysis of induction variables using chains of recurrences: extensions

TLDR
The static analysis of the evolution of scalar variables in the loop structures of imperative programs is described using a classic data flow algorithm, and refined into a more efficient algorithm using the static single assignment intermediate representation of a program.
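
As a rough illustration of what such scalar evolutions look like (a made-up C example, not the paper's algorithm), the comments below give the chains-of-recurrences forms that describe each variable of a simple loop as a function of the iteration number.

void cr_demo(int *a, int n)
{
    int x = 3;
    int s = 0;
    for (int i = 0; i < n; i++) {   /* i at the loop header: {0, +, 1}               */
        x = x + 4;                  /* x after this update:  {7, +, 4}, i.e. 7 + 4*i */
        s = s + x;                  /* s after this update:  {7, +, 11, +, 4},
                                       a second-order (polynomial) recurrence        */
        a[i] = s;                   /* subscript i stays within bounds (i < n)       */
    }
}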

Difference constraints: an adequate abstraction for complexity analysis of imperative programs

TLDR
It is argued that the complexity of imperative programs typically arises from counter increments and resets, which can be modeled naturally by difference constraints, and the first practical algorithm for the analysis of difference constraint programs is presented.
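
As a hypothetical illustration of that abstraction (not code from the paper), each counter update in the nested C loop below is annotated with a difference constraint of the form x' <= y + c that over-approximates it; a bound on the inner loop then follows from the reset j' <= i.

void nested_counters(int n)
{
    int i = n;                     /* initialization: i' <= n      */
    while (i > 0) {
        int j = i;                 /* reset:          j' <= i      */
        while (j > 0) {
            j = j - 1;             /* decrement:      j' <= j - 1  */
        }
        i = i - 1;                 /* decrement:      i' <= i - 1  */
    }
}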

The value evolution graph and its use in memory reference analysis

  • S. Rus, Dongmin Zhang, L. Rauchwerger
  • Computer Science
    Proceedings of the 13th International Conference on Parallel Architecture and Compilation Techniques (PACT 2004)
  • 2004
We introduce a framework for the analysis of memory reference sets addressed by induction variables without closed forms. This framework relies on a new data structure, the value evolution graph

Sensitivity analysis for automatic parallelization on multi-cores

TLDR
It is shown how Sensitivity Analysis can extract all the input-dependent, statically unavailable conditions under which loops can be dynamically parallelized, and thereby obtain most of the available coarse-grained parallelism.
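
As a loose C sketch of the idea (hypothetical function and predicate, not the paper's analysis), the only statically unknown fact below is the input value k; a cheap run-time test on k selects between a parallel and a sequential version of the same loop.

/* a must be valid for all indices i and i + k used below. */
void shift_add(double *a, int k, int n)
{
    if (k >= n || k <= -n) {
        /* Read set {i + k} and write set {i} cannot overlap for
         * 0 <= i < n, so the loop may run in parallel (e.g. OpenMP). */
        #pragma omp parallel for
        for (int i = 0; i < n; i++)
            a[i] = a[i + k] + 1.0;
    } else {
        /* A loop-carried dependence is possible: run sequentially. */
        for (int i = 0; i < n; i++)
            a[i] = a[i + k] + 1.0;
    }
}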

Effective compile-time analysis for data prefetching in java

TLDR
This dissertation shows that a unified whole-program compiler analysis is effective in discovering prefetching opportunities in Java programs that traverse arrays and linked structures, and improves the memory performance even in the presence of object-oriented features that complicate analysis.

Simple and effective array prefetching in Java

TLDR
A new unified compile-time analysis for software prefetching arrays and linked structures in Java is described that identifies loop induction variables used in array accesses and is suitable for including in a just-in-time compiler.
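
The paper above targets Java, but the transformation itself is easy to picture; the C sketch below (using GCC/Clang's __builtin_prefetch rather than the paper's Java mechanism, with an assumed prefetch distance) shows a prefetch inserted once the subscript is known to be a linear function of the induction variable i.

#define PREFETCH_DISTANCE 16       /* assumed tuning parameter */

void scale(double *a, double s, int n)
{
    for (int i = 0; i < n; i++) {
        /* Hint only: prefetching a few iterations ahead of a[i] is
         * harmless even when i + PREFETCH_DISTANCE runs past the end. */
        __builtin_prefetch(&a[i + PREFETCH_DISTANCE], 0, 1);
        a[i] = a[i] * s;
    }
}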

Sensitivity Analysis for Migrating Programs to Multi-Cores

TLDR
The Sensitivity Analysis (SA) presented in this paper is a novel technique that can reduce the dynamic overhead of previous Hybrid Analysis technology and can frequently extract lightweight, sufficient conditions under which these dependence sets are empty.

Scalable Array SSA and Array Data Flow Analysis

TLDR
This paper proposes to improve the applicability of previous efforts in array SSA through the use of a symbolic memory access descriptor that can aggregate the accesses to the elements of an array over large, interprocedural program contexts.

References

Monotonic evolution: an alternative to induction variable substitution for dependence analysis

TLDR
Experimental results show that dependence tests based on evolution information match the accuracy of those based on closed-form computation (as implemented in Polaris), and that when no closed-form expressions can be computed, the method is more accurate than that of Polaris.

Analyses of pointers, induction variables, and container objects for dependence testing

TLDR
This thesis presents a pointer analysis that accurately models containers, iterators, and container-element connections; an extension of Sagiv, Reps, and Wilhelm's shape analysis for destructive updating; and an induction variable analysis that exploits IV information without closed-form computation.

Automatic recognition of induction variables and recurrence relations by abstract interpretation

TLDR
This paper puts forth a systematic method for recognizing recurrence relations automatically, which is easily extensible by the addition of templates and is able to recognize nested recurrences by propagating the closed forms of recurrences from inner loops.

The range test: a dependence test for symbolic, non-linear expressions

TLDR
The range test proves independence by determining whether certain symbolic inequalities hold for a permutation of the loop nest, and has been implemented in Polaris, a parallelizing compiler being developed at the University of Illinois.

Parallelization in the Presence of Generalized Induction and Reduction Variables

TLDR
The elimination of induction variables and the parallelization of reductions in FORTRAN programs have been shown to be integral to performance improvement on parallel computers; compiler passes that recognize these idioms have been implemented and evaluated.

Symbolic analysis for parallelizing compilers

TLDR
A methodology for capturing and analyzing program properties essential to the effective detection and efficient exploitation of parallelism on parallel computers is described, and a symbolic analysis framework is developed for the Parafrase-2 parallelizing compiler.

Efficiently computing static single assignment form and the control dependence graph

TLDR
New algorithms that efficiently compute static single assignment form and control dependence graph data structures for arbitrary control flow graphs are presented, and evidence is given that all of these data structures are usually linear in the size of the original program.

Compiler analysis of irregular memory accesses

TLDR
Two simple and common kinds of irregular array accesses are studied, single-indexed access and indirect array access, and techniques to analyze both cases at compile time are presented.

Pointer Analysis for Monotonic Container Traversals

TLDR
This work captures aliasing properties through dedicated points-to graphs in Java to achieve precise memory disambiguations at a reasonable cost, with applications to parallelization and optimization.

Parallel Programming with Polaris

TLDR
Polaris, an experimental translator of conventional Fortran programs targeting machines such as the Cray T3D, is discussed; such a translator would liberate programmers from the complexities of explicit, machine-oriented parallel programming.