Threaded Code Generation with a Meta-tracing JIT Compiler

Yusuke Izawa, Hidehiko Masuhara, Carl Friedrich Bolz-Tereick, and Youyou Cong. Threaded Code Generation with a Meta-tracing JIT Compiler. Journal of Object Technology.
Language implementation frameworks such as RPython and Truffle/Graal are effective tools for creating a high-performance language with lower effort than implementing one from scratch. However, each framework supports only a single JIT compilation strategy (trace-based compilation and method-based compilation, respectively), and each strategy has its own advantages and disadvantages. We proposed a meta-hybrid JIT compiler framework that combines the advantages of the two strategies as a language implementation framework. We also…
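Since the paper's title centers on threaded code generation, a minimal sketch may help: in threaded code, each opcode is resolved to its handler once, up front, so the execution loop does no per-instruction table lookup. All names below (`thread`, `run`, the toy opcodes) are illustrative, written in plain Python rather than RPython or machine code; they are not the paper's actual implementation.

```python
def op_push(state, operand):
    state["stack"].append(operand)
    state["pc"] += 1

def op_add(state, operand):
    s = state["stack"]
    b, a = s.pop(), s.pop()
    s.append(a + b)
    state["pc"] += 1

def op_halt(state, operand):
    state["pc"] = -1  # sentinel: stop execution

HANDLERS = {"PUSH": op_push, "ADD": op_add, "HALT": op_halt}

def thread(code):
    # One-time pass: resolve each opcode name to its handler function,
    # producing "threaded" code with no dispatch table left to consult.
    return [(HANDLERS[op], arg) for op, arg in code]

def run(threaded):
    # The execution loop calls handlers directly through the resolved
    # references instead of switching on an opcode each iteration.
    state = {"stack": [], "pc": 0}
    while state["pc"] >= 0:
        handler, arg = threaded[state["pc"]]
        handler(state, arg)
    return state["stack"]

program = [("PUSH", 2), ("PUSH", 40), ("ADD", None), ("HALT", None)]
print(run(thread(program)))  # → [42]
```

In a native implementation the same idea is usually expressed with computed gotos or tail calls, so that each handler jumps straight to the next instruction's handler.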

Two-level Just-in-Time Compilation with One Interpreter and One Engine

This paper proposes a technique to realize two-level JIT compilation in RPython without implementing multiple interpreters or compilers from scratch; the resulting adaptive RPython performs both baseline JIT compilation based on threaded code and tracing JIT compilation.
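The tier-up logic such a two-level system relies on can be sketched as follows: a loop is interpreted with per-instruction dispatch until it gets hot, after which a "compiled" version (here, a Python closure standing in for baseline threaded code) runs instead. The threshold, opcodes, and function names are invented for illustration and are not taken from adaptive RPython.

```python
HOT_THRESHOLD = 2  # invented; real systems use much larger counters

def interpret_once(code, env):
    # Tier 1: dispatch on opcode names every time through the loop body.
    for op, arg in code:
        if op == "INC":
            env[arg] += 1
        elif op == "DOUBLE":
            env[arg] *= 2

def compile_loop(code):
    # Tier 2 stand-in: pre-resolve each instruction to a closure,
    # eliminating per-instruction dispatch (the role threaded code
    # plays as a baseline tier).
    steps = []
    for op, arg in code:
        if op == "INC":
            steps.append(lambda env, a=arg: env.__setitem__(a, env[a] + 1))
        elif op == "DOUBLE":
            steps.append(lambda env, a=arg: env.__setitem__(a, env[a] * 2))
    def compiled(env):
        for step in steps:
            step(env)
    return compiled

def run_loop(code, env, iterations):
    counter, compiled = 0, None
    for _ in range(iterations):
        if compiled is None:
            interpret_once(code, env)
            counter += 1
            if counter >= HOT_THRESHOLD:
                compiled = compile_loop(code)  # tier up: loop became hot
        else:
            compiled(env)
    return env

env = run_loop([("INC", "x"), ("DOUBLE", "x")], {"x": 0}, 4)
print(env["x"])  # → 30
```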

Amalgamating different JIT compilations in a meta-tracing JIT compiler framework

This paper presents a new approach, namely the meta-hybrid JIT compilation strategy, which combines trace-based and method-based compilation to exploit the advantages of both, and performs a synthetic experiment confirming that there are programs that run faster under hybrid compilation.

Tracing the meta-level: PyPy's tracing JIT compiler

This paper shows how to guide tracing JIT compilers to greatly improve the speed of bytecode interpreters, and how to unroll the bytecode dispatch loop, based on two kinds of hints provided by the implementer of the bytecode interpreter.
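The two hints place the meta-tracer at the interpreter's dispatch loop: one marks the loop header, the other marks a backward jump in the *user* program. The sketch below uses a stub `JitDriver` that mimics the real `rpython.rlib.jit.JitDriver` (which is likewise a no-op when run untranslated); the accumulator machine and its opcodes are invented for illustration.

```python
class JitDriver:
    """Stand-in for rpython.rlib.jit.JitDriver, no-op when untranslated."""
    def __init__(self, greens, reds):
        # greens identify a position in the user program (pc, bytecode);
        # reds are the live data that varies between iterations.
        self.greens, self.reds = greens, reds
    def jit_merge_point(self, **live_vars):
        pass  # hint 1: head of the bytecode dispatch loop
    def can_enter_jit(self, **live_vars):
        pass  # hint 2: a user-level loop just closed; tracing may start

driver = JitDriver(greens=["pc", "code"], reds=["acc"])

def interpret(code):
    # Tiny accumulator machine: ("ADD", n) and ("JUMP_IF_POS", target).
    pc, acc = 0, 0
    while pc < len(code):
        driver.jit_merge_point(pc=pc, code=code, acc=acc)
        op, arg = code[pc]
        if op == "ADD":
            acc += arg
            pc += 1
        elif op == "JUMP_IF_POS":
            if acc > 0:
                # Backward jump in the user program: without this hint the
                # tracer would only see the (always-repeating) dispatch loop.
                driver.can_enter_jit(pc=arg, code=code, acc=acc)
                pc = arg
            else:
                pc += 1
    return acc

print(interpret([("ADD", 5), ("ADD", -1), ("JUMP_IF_POS", 1)]))  # → 0
```

Marking `pc` and `code` as greens is what lets the tracer effectively unroll the dispatch loop: a trace is keyed on the user-program position rather than on the interpreter's own loop header.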

A trace-based Java JIT compiler retrofitted from a method-based compiler

This paper describes the design and implementation of a trace-based JIT for Java developed from a production-quality method-based JIT compiler, and shows the potential of trace-based compilation as an alternative or complementary approach to compiling languages with mature method-based compilers.

Trace-based just-in-time type specialization for dynamic languages

This work presents an alternative compilation technique for dynamically-typed languages that identifies frequently executed loop traces at run-time and then generates machine code on the fly that is specialized for the actual dynamic types occurring on each path through the loop.
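The core of such type specialization is that a recorded trace embeds *guards*: checks that the types observed during recording still hold, with a bail-out to the generic interpreter when they do not. The following is an illustrative sketch of that mechanism only, with invented names, not TraceMonkey's actual design.

```python
def record_trace(sample):
    # "Record" a trace specialized on the type observed for a sample value.
    observed = type(sample)
    def trace(x):
        if type(x) is not observed:  # guard on the recorded type
            return None              # guard failed: bail out of the trace
        return x + x                 # specialized fast path
    return trace

def generic_add(x):
    # Interpreter fallback: handles any type, but with full dispatch cost.
    return x + x

trace = record_trace(3)  # recorded while running on an int

results = []
for v in [3, 4, "ab"]:
    fast = trace(v)
    results.append(fast if fast is not None else generic_add(v))
print(results)  # → [6, 8, 'abab']
```

The string input fails the int guard and falls back to `generic_add`, mirroring how a real trace exits through a side exit when a path's dynamic types change.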

Tracing vs. partial evaluation: comparing meta-compilation approaches for self-optimizing interpreters

This study investigates both approaches in the context of self-optimizing interpreters, a technique for building fast abstract-syntax-tree interpreters, and finds that tracing and partial evaluation both reach roughly the same level of performance.

Self-optimizing AST interpreters

This work presents a novel approach to implementing AST interpreters in which the AST is modified during interpretation to incorporate type feedback, which is a general and powerful mechanism to optimize many constructs common in dynamic programming languages.
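Node rewriting with type feedback can be sketched as follows: an uninitialized AST node replaces itself with a type-specialized node after its first execution, and deoptimizes to a generic node if later operands break the specialization. This is an illustrative sketch of the idea with invented class names, not Truffle's actual API.

```python
class GenericAdd:
    def execute(self, parent, a, b):
        return a + b  # handles any operand types

class IntAdd:
    def execute(self, parent, a, b):
        if isinstance(a, int) and isinstance(b, int):
            return a + b                 # specialized fast path
        parent.node = GenericAdd()       # deoptimize: rewrite to generic
        return parent.node.execute(parent, a, b)

class UninitializedAdd:
    def execute(self, parent, a, b):
        # First execution: rewrite self based on the observed types.
        if isinstance(a, int) and isinstance(b, int):
            parent.node = IntAdd()
        else:
            parent.node = GenericAdd()
        return parent.node.execute(parent, a, b)

class AddExpr:
    """AST slot whose child node rewrites itself during interpretation."""
    def __init__(self):
        self.node = UninitializedAdd()
    def execute(self, a, b):
        return self.node.execute(self, a, b)

expr = AddExpr()
print(expr.execute(1, 2))      # → 3   (node is now IntAdd)
print(expr.execute("a", "b"))  # → ab  (node rewrote itself to GenericAdd)
```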

PyPy's approach to virtual machine construction

The PyPy project seeks to prove both on a research and a practical level the feasibility of constructing a virtual machine (VM) for a dynamic language in a dynamic language - in this case, Python.

Improving the performance of trace-based systems by false loop filtering

This paper proposes false loop filtering, an approach that rejects false loops in the repetition-detection step of trace selection, together with a technique called false loop filtering by call-stack comparison, which rejects a cyclic path as a false loop if the call stacks at the beginning and the end of the cycle differ.
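The call-stack-comparison criterion can be sketched in a few lines: a repeated program point only counts as a true loop header if the call stack is identical at both occurrences. The event format and names below are invented for illustration.

```python
def select_trace_head(events):
    """events: (pc, call_stack) observations in execution order.
    Return the first pc that repeats with an identical call stack
    (a true loop header); repeats with differing stacks are false loops."""
    seen = {}
    for pc, stack in events:
        if pc in seen and seen[pc] == stack:
            return pc  # same pc, same call stack: genuine repetition
        seen[pc] = list(stack)  # remember the latest stack for this pc
    return None

# pc 7 is a helper entry reached from two different call sites (a false
# loop); pc 2 is a real loop header revisited with an identical stack.
events = [(2, []), (7, ["main@3"]), (7, ["main@5"]), (2, [])]
print(select_trace_head(events))  # → 2
```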

Improving Sequential Performance of Erlang Based on a Meta-tracing Just-In-Time Compiler

This paper develops Pyrlang, an Erlang virtual machine with a just-in-time (JIT) compiler built by applying an existing meta-tracing JIT compiler, showing approximately a 38% speedup over the standard Erlang interpreter.

Dynamo: a transparent dynamic optimization system

We describe the design and implementation of Dynamo, a software dynamic optimization system that is capable of transparently improving the performance of a native instruction stream as it executes on…