SPUR: a trace-based JIT compiler for CIL

@inproceedings{Bebenita2010SPURAT,
  title={{SPUR}: a trace-based {JIT} compiler for {CIL}},
  author={Michael Bebenita and Florian Brandner and Manuel F{\"a}hndrich and Francesco Logozzo and Wolfram Schulte and Nikolai Tillmann and Herman Venter},
  booktitle={Proceedings of the ACM International Conference on Object Oriented Programming Systems Languages and Applications (OOPSLA)},
  year={2010}
}
  • Published 17 October 2010
Tracing just-in-time compilers (TJITs) determine frequently executed traces (hot paths and loops) in running programs and focus their optimization effort by emitting optimized machine code specialized to these traces. Prior work has established this strategy to be especially beneficial for dynamic languages such as JavaScript, where the TJIT interfaces with the interpreter and produces machine code from the JavaScript trace. This direct coupling with a JavaScript interpreter makes it difficult… 
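The trace-selection mechanism the abstract describes (profile hot back-edges, record the loop body as a straight-line trace, then run specialized code with guards) can be sketched in miniature. The mini bytecode, op names, and hotness threshold below are invented for illustration; this is not SPUR's actual CIL machinery.

```python
# Toy sketch of trace-based JIT compilation: profile backward jumps, and once
# a loop header is "hot", record its body as a straight-line trace and run a
# specialized version of it until a guard fails. Hypothetical design, for
# illustration only.

HOT_THRESHOLD = 5  # back-edge executions before a loop counts as hot


def record_trace(program, header, backedge_pc):
    """Close over one loop iteration's ops as a specialized 'trace'."""
    body = program[header:backedge_pc]             # straight-line loop body
    _, guard_a, guard_b, _ = program[backedge_pc]  # condition on the back-edge
    def trace(env):
        while True:
            for op, *args in body:                 # no jump dispatch in here
                if op == "const":
                    env[args[0]] = args[1]
                elif op == "add":
                    env[args[0]] = env[args[1]] + env[args[2]]
            if not env[guard_a] < env[guard_b]:    # guard fails: leave trace
                return backedge_pc + 1             # pc to resume interpreting
    return trace


def interpret(program, env):
    """Interpret `program`, compiling hot loops into traces on the fly."""
    counters, traces = {}, {}
    pc = 0
    while pc < len(program):
        if pc in traces:
            pc = traces[pc](env)                   # execute the compiled trace
            continue
        op, *args = program[pc]
        if op == "const":
            env[args[0]] = args[1]
            pc += 1
        elif op == "add":
            env[args[0]] = env[args[1]] + env[args[2]]
            pc += 1
        elif op == "jump_if_lt":
            a, b, target = args
            if env[a] < env[b]:
                if target < pc:                    # backward edge: profile it
                    counters[target] = counters.get(target, 0) + 1
                    if counters[target] >= HOT_THRESHOLD:
                        traces[target] = record_trace(program, target, pc)
                pc = target
            else:
                pc += 1
        else:
            raise ValueError(f"unknown op {op!r}")
    return env


# Sum 0..9: the loop turns hot after 5 back-edges and finishes inside the trace.
prog = [
    ("const", "one", 1),
    ("add", "s", "s", "i"),        # loop header (pc 1)
    ("add", "i", "i", "one"),
    ("jump_if_lt", "i", "n", 1),   # back-edge with guard i < n
]
result = interpret(prog, {"s": 0, "i": 0, "n": 10})
```

The trace removes per-op jump dispatch for the loop body and re-checks only the loop guard, which is the essence of specializing machine code to a hot path; a real TJIT would also guard on types and side-exit into the interpreter mid-trace.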

Citations

Meta-tracing makes a fast Racket
TLDR
The result of spending just a couple person-months implementing and tuning an implementation of Racket written in RPython is presented, with a geometric mean equal to Racket’s performance and within a factor of 2 slower than Gambit and Larceny on a collection of standard Scheme benchmarks.
Trace-based compilation for the Java HotSpot virtual machine
TLDR
This paper presents the implementation of a trace-based JIT compiler in which the mature, method-based Java HotSpot client compiler is modified and a bytecode preprocessing step is added that detects and directly marks loops within the bytecodes to simplify trace recording.
Trace transitioning and exception handling in a trace-based JIT compiler for Java
TLDR
A significantly enhanced trace-based compiler where arbitrary transitions between interpreted and compiled traces are possible and suitable trace calling conventions are introduced and exception handling is extended to work both within traces and across trace boundaries.
A flexible framework for studying trace-based just-in-time compilation
The essence of compiling with traces
TLDR
This paper presents a framework for reasoning about the soundness of trace optimizations, and shows that some traditional optimization techniques are sound when used in a trace compiler while others are unsound.
On-stack replacement for program generators and source-to-source compilers
TLDR
This paper presents a surprisingly simple pattern for implementing OSR in source-to-source compilers or explicit program generators that target languages with structured control flow (loops and conditionals).
Study on method-based and trace-based just-in-time compilation for scripting languages
TLDR
This thesis proposes two JIT compilers for the implementation of scripting languages and describes RuJIT, a trace-based JIT compiler for Ruby, which traces program code to determine frequently executed traces in running programs and emits optimized machine code specialized to these traces.
A flexible framework for studying trace-based just-in-time compilation
TLDR
STRAF is a minimalistic yet flexible Scala framework for studying trace-based JIT compilation that is sufficiently general to support a diverse set of language interpreters, but also sufficiently extensible to enable experiments with trace recording and optimization.
Evaluating Call Graph Construction for JVM-hosted Language Implementations
TLDR
This work presents qualitative and quantitative analysis of the soundness and precision of call graphs constructed from JVM bytecodes produced for Python, Ruby, Clojure, Groovy, Scala, and OCaml applications and shows that all unsoundness comes from rare, complex uses of reflection and proxies, and the translation of first-class features in Scala incurs a significant loss of precision.
On the benefits and pitfalls of extending a statically typed language JIT compiler for dynamic scripting languages
TLDR
This work offers the first in-depth look at benefits and limitations of the repurposed JIT compiler approach, and believes the most common pitfall of existing RJIT compilers is not focusing sufficiently on specialization, an abundant optimization opportunity unique to dynamically typed languages.

References

Showing 1-10 of 37 references
YETI: a graduallY extensible trace interpreter
TLDR
This paper describes how callable bodies help the Yeti interpreter to efficiently identify and run traces, and how the closely coupled dynamic compiler can fall back on the interpreter in various ways, permitting an incremental approach.
Faster than C#: efficient implementation of dynamic languages on .NET
TLDR
The main and novel contribution of this paper is to show that this two-layers JIT technique is effective, since programs written in dynamic languages can run on .NET as fast as (and in some cases even faster than) the equivalent C# programs.
Tracing the meta-level: PyPy's tracing JIT compiler
TLDR
This paper shows how to guide tracing JIT compilers to greatly improve the speed of bytecode interpreters, and how to unroll the bytecode dispatch loop, based on two kinds of hints provided by the implementer of thebytecode interpreter.
Optimization of dynamic languages using hierarchical layering of virtual machines
TLDR
This work explores the approach of taking an interpreter of a dynamic language and running it on top of an optimizing trace-based virtual machine, i.e., the authors run a guest VM onTop of a host VM, thus eliminating the need for a custom just-in-time compiler for the guest VM.
Trace-based just-in-time type specialization for dynamic languages
TLDR
This work presents an alternative compilation technique for dynamically-typed languages that identifies frequently executed loop traces at run-time and then generates machine code on the fly that is specialized for the actual dynamic types occurring on each path through the loop.
Trace fragment selection within method-based JVMs
TLDR
This paper uses the "interpreterless" Jikes RVM as a foundation, and uses the trace profiling subsystem to identify an application's working set as a collection of hot traces and shows that there is a significant margin for improvement in instruction ordering that can be addressed by trace execution.
Tracing for web 3.0: trace compilation for the next generation web applications
TLDR
A trace-based just-in-time compiler for JavaScript that uses run-time profiling to identify frequently executed code paths, which are compiled to executable machine code.
PyPy's approach to virtual machine construction
The PyPy project seeks to prove both on a research and a practical level the feasibility of constructing a virtual machine (VM) for a dynamic language in a dynamic language - in this case, Python.
Efficient Just-In-Time Execution of Dynamically Typed Languages via Code Specialization Using Precise Runtime Type Inference
TLDR
This work presents a new approach to compiling dynamically typed languages in which code traces observed during execution are dynamically specialized for each actually observed run-time type.
Stream-Based Dynamic Compilation for Object-Oriented Languages
TLDR
A new software architecture for dynamic compilers in which the granularity of compilation steps is much finer, forming a “pipeline” with completely linear runtime behavior, and in which there are only two write barriers is described.