Allocation removal by partial evaluation in a tracing JIT

@inproceedings{BolzTereick2011AllocationRB,
  title={Allocation removal by partial evaluation in a tracing JIT},
  author={Carl Friedrich Bolz-Tereick and Antonio Cuni and Maciej Fijalkowski and Michael Leuschel and Samuele Pedroni and Armin Rigo},
  booktitle={PEPM '11},
  year={2011}
}
The performance of many dynamic language implementations suffers from high allocation rates and runtime type checks. This makes dynamic languages less applicable to purely algorithmic problems, despite their growing popularity. In this paper we present a simple compiler optimization based on online partial evaluation to remove object allocations and runtime type checks in the context of a tracing JIT. We evaluate the optimization using a Python VM and find that it gives good results for all our… 
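
To make the idea concrete, the following is a minimal, self-contained sketch of online partial evaluation over a linear trace, written in plain Python rather than RPython. It illustrates the general technique, not the paper's actual optimizer: the operation names (new, set, get, guard_class, escape), the tuple-based trace encoding, and the function optimize are invented for the example. Objects created by new are kept "virtual" as long as they do not escape; reads, writes and class guards on virtuals are answered at optimization time, and the allocation is only emitted if the object escapes.

def optimize(trace):
    virtuals = {}   # variable -> {'cls': class name, 'fields': {field: value variable}}
    copies = {}     # variable -> the variable it is known to equal (from removed reads)
    out = []        # the optimized trace

    def resolve(v):
        while v in copies:
            v = copies[v]
        return v

    def force(v):
        # Materialize a virtual object that is about to escape: emit the
        # allocation and the field writes that were delayed so far.
        info = virtuals.pop(v, None)
        if info is None:
            return
        out.append(('new', v, info['cls']))
        for field, val in info['fields'].items():
            force(val)                                 # nested virtuals escape too
            out.append(('set', v, field, val))

    for op in trace:
        if op[0] == 'new':                             # ('new', result, cls): delay the allocation
            _, res, cls = op
            virtuals[res] = {'cls': cls, 'fields': {}}
        elif op[0] == 'set':                           # ('set', obj, field, value)
            _, obj, field, val = op
            obj, val = resolve(obj), resolve(val)
            if obj in virtuals:
                virtuals[obj]['fields'][field] = val   # just remember the value, emit nothing
            else:
                force(val)                             # stored into a real object: the value escapes
                out.append(('set', obj, field, val))
        elif op[0] == 'get':                           # ('get', result, obj, field)
            _, res, obj, field = op
            obj = resolve(obj)
            if obj in virtuals:
                copies[res] = virtuals[obj]['fields'][field]   # read answered at optimization time
            else:
                out.append(('get', res, obj, field))
        elif op[0] == 'guard_class':                   # ('guard_class', obj, cls): runtime type check
            _, obj, cls = op
            obj = resolve(obj)
            if obj in virtuals:
                assert virtuals[obj]['cls'] == cls     # class statically known, check removed
            else:
                out.append(('guard_class', obj, cls))
        elif op[0] == 'escape':                        # ('escape', value): value leaves the trace
            _, val = op
            val = resolve(val)
            force(val)
            out.append(('escape', val))
    return out

# A trace that boxes an integer, guards on its class, and unboxes it again:
trace = [
    ('new', 'p0', 'BoxedInt'),
    ('set', 'p0', 'value', 'i0'),
    ('guard_class', 'p0', 'BoxedInt'),
    ('get', 'i1', 'p0', 'value'),
    ('escape', 'i1'),
]
print(optimize(trace))   # [('escape', 'i0')] -- the allocation, the writes and the guard are gone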

Citations

Runtime feedback in a meta-tracing JIT for efficient dynamic languages
TLDR
The mechanisms in PyPy's meta-tracing JIT that can be used to control runtime feedback in language-specific ways are described; these mechanisms are flexible enough to express classical VM techniques such as maps and runtime type feedback.
Memory Allocation and Access Patterns in Dynamic Languages
TLDR
The results indicate that if the interpreter offers fast allocation by using a modern garbage collector, nearly no speedups can be achieved, and in some cases performance with integer tagging even decreases.
Simple and Effective Type Check Removal through Lazy Basic Block Versioning
TLDR
Lazy basic block versioning is introduced, a simple JIT compilation technique which effectively removes redundant type checks from critical code paths and is compared with a classical flow-based type analysis (a rough sketch of the idea follows this list).
Interprocedural Specialization of Higher-Order Dynamic Languages Without Static Analysis
TLDR
This paper presents a JIT compilation technique enabling function duplication in the presence of higher-order functions, and shows that the technique can be used to duplicate functions using other run-time information, opening up new applications such as register-allocation-based duplication and aggressive inlining.
The efficient handling of guards in the design of RPython's tracing JIT
TLDR
An empirical analysis of runtime properties of guards is performed to guide the design of guards in the RPython tracing JIT.
Language-independent storage strategies for tracing-JIT-based virtual machines
TLDR
This paper presents a general design and implementation for storage strategies and shows how they can be reused across different RPython-based languages and evaluates the generality of the implementation by applying it to Topaz, a Ruby VM, and Pycket, a Racket implementation.
Interprocedural Type Specialization of JavaScript Programs Without Type Analysis
TLDR
The implementation in a JavaScript JIT compiler shows that across 26 benchmarks, interprocedural basic block versioning eliminates more type tag tests on average than what is achievable with static type analysis without resorting to code transformations.
Loop-aware optimizations in PyPy's tracing JIT
TLDR
This paper explains a scheme pioneered within the context of the LuaJIT project for making basic optimizations loop-aware by using a simple pre-processing step on the trace without changing the optimizations themselves, and implements it in RPython's tracing JIT compiler.
Adaptive just-in-time value class optimization: transparent data structure inlining for fast execution
TLDR
This paper presents a technique to detect and compress commonly occurring patterns of value class usage to improve memory usage and performance, and shows a two- to ten-fold speedup of a small prototypical implementation over the implementation of value classes in other object-oriented language implementations.
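
As a companion to the lazy basic block versioning entry above, here is a rough sketch of the core idea in Python, not that paper's implementation: each basic block is specialized once per distinct set of type facts holding on entry, so a type test performed in one version never has to be repeated in its successors. The block and operation encodings and the names get_version and specialize are invented for the example; for brevity, successor versions are generated eagerly at compile time instead of lazily through stubs as in the real technique.

BLOCKS = {}      # block name -> list of operations
VERSIONS = {}    # (block name, frozen type facts) -> specialized operation list

def get_version(block, type_map):
    key = (block, frozenset(type_map.items()))
    if key not in VERSIONS:
        VERSIONS[key] = []                               # reserve the slot first (handles loops)
        VERSIONS[key].extend(specialize(block, dict(type_map)))
    return key                                           # a real JIT would return a code pointer

def specialize(block, type_map):
    code = []
    for op in BLOCKS[block]:
        if op[0] == 'check_int':                         # ('check_int', var): dynamic type test
            var = op[1]
            if type_map.get(var) == 'int':
                continue                                 # already proven in this version: drop it
            code.append(op)                              # emit the test once ...
            type_map[var] = 'int'                        # ... and record what it established
        elif op[0] == 'jump':                            # ('jump', target block)
            code.append(('jump', get_version(op[1], type_map)))   # jump to a specialized successor
        else:
            code.append(op)
    return code

# Both blocks test the type of x, but the version of 'body' reached from
# 'entry' already knows x is an int, so its test is eliminated.
BLOCKS.update({
    'entry': [('check_int', 'x'), ('add', 'y', 'x', 1), ('jump', 'body')],
    'body':  [('check_int', 'x'), ('mul', 'z', 'x', 2), ('ret', 'z')],
})
get_version('entry', {})
for key, code in VERSIONS.items():
    print(key, code)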

References

Showing 1–10 of 33 references
HotpathVM: an effective JIT compiler for resource-constrained devices
TLDR
A just-in-time compiler for a Java VM that is small enough to fit on resource-constrained devices, yet is surprisingly effective, and benchmarks show a speedup that in some cases rivals heavy-weight just-in-time compilers.
Higher Order Escape Analysis: Optimizing Stack Allocation in Functional Program Implementations
TLDR
This method is based on escape analysis, an application of abstract interpretation to higher-order functional languages, to determine when arguments can be stack allocated safely (a minimal sketch of this kind of analysis appears after this reference list).
Incremental Dynamic Code Generation with Trace Trees
TLDR
This work explores trace-based compilation, in which the unit of compilation is a loop, potentially spanning multiple methods and even library code, and generates code that is competitive with traditional dynamic compilers, but that uses only a fraction of the compile time and memory footprint.
Trace-based just-in-time type specialization for dynamic languages
TLDR
This work presents an alternative compilation technique for dynamically-typed languages that identifies frequently executed loop traces at run-time and then generates machine code on the fly that is specialized for the actual dynamic types occurring on each path through the loop.
SPUR: a trace-based JIT compiler for CIL
TLDR
A trace-based JIT (TJIT) for Microsoft's Common Intermediate Language (CIL) is designed and implemented that enables TJIT optimizations for any program compiled to this platform; a performance evaluation is given of a JavaScript runtime which translates JavaScript to CIL and then runs on top of the CIL TJIT.
Tracing the meta-level: PyPy's tracing JIT compiler
TLDR
This paper shows how to guide tracing JIT compilers to greatly improve the speed of bytecode interpreters, and how to unroll the bytecode dispatch loop, based on two kinds of hints provided by the implementer of the bytecode interpreter.
Towards a jitting VM for Prolog execution
TLDR
It is shown that declarative languages such as Prolog can indeed benefit from having a just-in-time compiler and that PyPy can form the basis for implementing programming languages other than Python.
Escape analysis for Java™: Theory and practice
TLDR
This paper presents the design and correctness proof of an escape analysis for Java™, which uses integers to represent the escaping parts of values, and introduces a new method to prove the correctness of this analysis, using aliases as an intermediate step.
Dynamo: a transparent dynamic optimization system
We describe the design and implementation of Dynamo, a software dynamic optimization system that is capable of transparently improving the performance of a native instruction stream as it executes on…
Escape analysis for Java
TLDR
A new program abstraction for escape analysis, the connection graph, that is used to establish reachability relationships between objects and object references is introduced, and it is shown that the connection graph can be summarized for each method such that the same summary information may be used effectively in different calling contexts.
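
Several of the references above use escape analysis to decide which allocations can be placed on the stack or removed. As a minimal sketch of the flavor of such an analysis (not a reproduction of any cited abstraction; the operation encoding and the name stack_candidates are invented): an allocation is a stack candidate if no reference to it is returned, stored into another object, or passed to an unanalyzed call.

def stack_candidates(code):
    allocated, escaped = set(), set()
    for op in code:
        if op[0] == 'new':                       # ('new', result)
            allocated.add(op[1])
        elif op[0] == 'store':                   # ('store', target, field, value)
            escaped.add(op[3])                   # the stored value becomes reachable elsewhere
        elif op[0] in ('return', 'call_arg'):    # ('return', value) / ('call_arg', value)
            escaped.add(op[1])
    return allocated - escaped                   # conservatively: never observed to escape

example = [
    ('new', 'tmp'),
    ('store', 'tmp', 'x', 'a'),    # writing into tmp does not make tmp itself escape
    ('new', 'box'),
    ('return', 'box'),             # box escapes through the return value
]
print(stack_candidates(example))   # {'tmp'} -- only tmp can live on the stack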