Trace-based just-in-time type specialization for dynamic languages

@inproceedings{Gal2009TracebasedJT,
  title={Trace-based just-in-time type specialization for dynamic languages},
  author={Andreas Gal and Brendan Eich and Mike Shaver and David Anderson and David Mandelin and Mohammad R. Haghighat and Blake Kaplan and Graydon Hoare and Boris Zbarsky and Jason Orendorff and Jesse Ruderman and Edwin W. Smith and Rick Reitmaier and Michael Bebenita and Mason Chang and Michael Franz},
  booktitle={PLDI '09},
  year={2009}
}
Dynamic languages such as JavaScript are more difficult to compile than statically typed ones. Since no concrete type information is available, traditional compilers need to emit generic code that can handle all possible type combinations at runtime. We present an alternative compilation technique for dynamically-typed languages that identifies frequently executed loop traces at run-time and then generates machine code on the fly that is specialized for the actual dynamic types occurring on… 
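To make the technique concrete, here is a minimal Python sketch of the idea described in the abstract; it is hypothetical (the helper names, the hot-loop threshold, and the summation example are all invented and are not the TraceMonkey implementation): a hot loop body is recorded once, specialized to the operand types seen while recording, and each use is guarded so execution falls back to a generic path when the types change.

class GuardFailed(Exception):
    pass

def record(element_type):
    """Return a loop body specialized to the type observed while recording."""
    def specialized_body(acc, value):
        if type(value) is not element_type:   # type guard emitted with the trace
            raise GuardFailed(value)
        return acc + value                    # fast path, no generic dispatch
    return specialized_body

def generic_body(acc, value):
    return acc + value                        # generic code for all type combinations

def sum_values(values, hot_threshold=2):
    acc, iterations, trace = 0, 0, None
    for value in values:
        iterations += 1
        if trace is None and iterations > hot_threshold:
            trace = record(type(value))       # loop became hot: record a specialized trace
        if trace is not None:
            try:
                acc = trace(acc, value)
                continue
            except GuardFailed:
                trace = None                  # side exit: throw the stale trace away
        acc = generic_body(acc, value)
    return acc

print(sum_values([1, 2, 3, 4, 5.5, 6]))       # guard fails at the float; prints 21.5
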
Portable Just-in-Time Specialization of Dynamically Typed Scripting Languages
TLDR
Both the level of instruction specialization achieved by the profiling scheme and the overall performance of the JIT are evaluated.
Trace-based compilation and optimization in meta-circular virtual execution environments
TLDR
This dissertation explores an alternative approach in which only truly hot code paths are ever compiled; this compiles significantly less code and improves the performance of both statically and dynamically typed programming languages.
Code Versioning and Extremely Lazy Compilation of Scheme
TLDR
LC, a Scheme compiler that implements this code generation approach, is described; it uses an extremely lazy compiler to generate multiple versions of the code, specializing it to the types observed at execution time.
Stream-Based Dynamic Compilation for Object-Oriented Languages
TLDR
A new software architecture for dynamic compilers is described in which the granularity of compilation steps is much finer, forming a "pipeline" with completely linear runtime behavior, and in which there are only two write barriers.
Removing checks in dynamically typed languages through efficient profiling
TLDR
This paper presents a HW/SW hybrid mechanism that allows the removal of checks executed in the optimized code by performing HW profiling of the types of object variables.
Dynamic interpretation for dynamic scripting languages
TLDR
This paper presents a novel intermediate representation for scripting languages that explicitly encodes types of variables in a flow graph, where each node is a specialized virtual instruction and each edge directs program flow based on control and type changes in the program.
Self-optimizing AST interpreters
TLDR
This work presents a novel approach to implementing AST interpreters in which the AST is modified during interpretation to incorporate type feedback; this is a general and powerful mechanism for optimizing many constructs common in dynamic programming languages.
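A hypothetical Python sketch of this node-rewriting idea (class names invented, not from the paper): an Add node starts uninitialized, observes its operand types on first execution, rewrites itself in place to an int-specialized variant, and generalizes again if that speculation later fails.

class AddNode:
    """Uninitialized add: collects type feedback, then rewrites itself."""
    def __init__(self, left, right):
        self.left, self.right = left, right
    def execute(self, frame):
        l, r = self.left.execute(frame), self.right.execute(frame)
        if type(l) is int and type(r) is int:
            self.__class__ = IntAddNode       # node rewriting: specialize in place
        else:
            self.__class__ = GenericAddNode
        return l + r

class IntAddNode(AddNode):
    def execute(self, frame):
        l, r = self.left.execute(frame), self.right.execute(frame)
        if type(l) is int and type(r) is int:
            return l + r                      # specialized fast path
        self.__class__ = GenericAddNode       # speculation failed: generalize
        return l + r

class GenericAddNode(AddNode):
    def execute(self, frame):
        return self.left.execute(frame) + self.right.execute(frame)

class Var:
    def __init__(self, name):
        self.name = name
    def execute(self, frame):
        return frame[self.name]

tree = AddNode(Var("a"), Var("b"))
print(tree.execute({"a": 1, "b": 2}))         # 3; the node becomes an IntAddNode
print(tree.execute({"a": 1.5, "b": 2}))       # 3.5; the node generalizes itself
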
Meta-Tracing Just-in-Time Compilation for RPython
TLDR
Meta-tracing is flexible enough to incorporate typical JIT techniques such as run-time type feedback and unboxing, making the implementation of high-performance virtual machines for dynamic languages significantly simpler and making these languages more practical in a wider context.
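A hedged sketch of the meta-tracing setup: the tracer records the interpreter loop below while it runs a hot user-level loop, so the language implementer only writes the interpreter plus a couple of hints marking where a user-level loop begins. The hints here are stand-in no-ops loosely modeled on RPython's JitDriver annotations (the real API differs in detail), and the three-opcode bytecode is invented.

def jit_merge_point(**live_vars):   # stand-in for the corresponding RPython hint
    pass

def can_enter_jit(**live_vars):     # stand-in for the corresponding RPython hint
    pass

DECREMENT, JUMP_IF_NONZERO, HALT = range(3)

def interpret(bytecode, acc):
    """Tiny bytecode interpreter; a meta-tracer records iterations of this
    while-loop as it executes a hot loop in the *user* program."""
    pc = 0
    while True:
        # "Green" variables (pc, bytecode) identify the user-level position;
        # "red" variables (acc) are the values that change between iterations.
        jit_merge_point(pc=pc, bytecode=bytecode, acc=acc)
        op = bytecode[pc]
        if op == DECREMENT:
            acc -= 1
            pc += 1
        elif op == JUMP_IF_NONZERO:
            if acc != 0:
                pc = 0
                can_enter_jit(pc=pc, bytecode=bytecode, acc=acc)   # user-level back edge
            else:
                pc += 1
        elif op == HALT:
            return acc

print(interpret([DECREMENT, JUMP_IF_NONZERO, HALT], 5))   # prints 0
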
Analysis and Optimization of Engines for Dynamically Typed Languages
TLDR
This paper provides a detailed analysis of the two main sources of overhead in JavaScript execution, the runtime overhead needed for dynamic compilation and for housekeeping activities, and presents three new HW/SW optimizations that reduce the latter type of overhead.
Simple and Effective Type Check Removal through Lazy Basic Block Versioning
TLDR
Lazy basic block versioning, a simple JIT compilation technique that effectively removes redundant type checks from critical code paths, is introduced and compared with a classical flow-based type analysis.
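A hypothetical Python sketch of lazy basic block versioning (names invented): a block is compiled lazily, once per incoming type, and inside each version the operand type is known, so the dynamic type check disappears from that version's fast path.

versions = {}                                 # (block name, incoming type) -> compiled block

def get_version(block_name, incoming_type):
    key = (block_name, incoming_type)
    if key not in versions:
        versions[key] = compile_block(block_name, incoming_type)   # compiled lazily
    return versions[key]

def compile_block(block_name, incoming_type):
    if block_name == "increment":
        if incoming_type is int:
            return lambda x: x + 1            # int version: no type check remains
        if incoming_type is float:
            return lambda x: x + 1.0          # float version: no type check remains
        return generic_increment              # fallback version keeps the checks
    raise KeyError(block_name)

def generic_increment(x):
    if not isinstance(x, (int, float)):       # the check the specialized versions avoid
        raise TypeError(x)
    return x + 1

def run(value):
    block = get_version("increment", type(value))   # dispatch on the observed type
    return block(value)

print(run(41), run(1.5), len(versions))       # 42 2.5 2
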
...

References

SHOWING 1-10 OF 26 REFERENCES
Customization: optimizing compiler technology for SELF, a dynamically-typed object-oriented programming language
TLDR
Coupling these new techniques with compile-time message lookup, aggressive procedure inlining, and traditional optimizations has doubled the performance of dynamically-typed object-oriented languages.
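A hypothetical Python sketch of customization (the classes and the customize helper are invented): one version of a method is generated per receiver class, so sends whose receiver is now statically known can be bound, and potentially inlined, at specialization time instead of looked up on every call.

class Circle:
    def __init__(self, radius):
        self.radius = radius
    def area(self):
        return 3.14159 * self.radius * self.radius

class Square:
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side * self.side

def describe(shape):
    return "area=%s" % shape.area()           # generic version: dynamic lookup per call

customized = {}                               # receiver class -> customized version

def customize(receiver_class):
    bound_area = receiver_class.area          # method looked up once, at "compile" time
    def describe_for_class(shape):
        return "area=%s" % bound_area(shape)  # statically bound send
    return describe_for_class

def describe_fast(shape):
    cls = type(shape)
    if cls not in customized:
        customized[cls] = customize(cls)      # generate a per-class version on demand
    return customized[cls](shape)

print(describe_fast(Circle(1.0)))             # Circle-customized version: area=3.14159
print(describe_fast(Square(3)))               # Square-customized version: area=9
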
A Concurrent Trace-based Just-In-Time Compiler for JavaScript
TLDR
This paper presents the design and implementation of a concurrent trace-based JIT that uses novel lock-free synchronization to trace, compile, install, and stitch traces on a separate core such that the interpreter essentially never needs to pause.
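A hedged Python sketch of the concurrency aspect only (it uses an ordinary thread and relies on the GIL rather than the paper's lock-free synchronization, and all names are invented): the interpreter keeps executing the generic path while a separate thread compiles the hot trace, and later iterations pick up the compiled trace once it has been installed, so the main loop never pauses for compilation.

import threading, time

installed_trace = None                           # written by the compiler thread, read by the loop

def compile_trace():
    global installed_trace
    time.sleep(0.01)                             # pretend compilation takes a while
    installed_trace = lambda acc, v: acc + v     # the "compiled" specialized loop body

def sum_loop(values):
    compiler, acc = None, 0
    for i, v in enumerate(values):
        if compiler is None and i >= 2:          # loop got hot: hand off to the compiler
            compiler = threading.Thread(target=compile_trace)
            compiler.start()
        body = installed_trace                   # pick up the trace if it is ready
        if body is not None:
            acc = body(acc, v)                   # run the compiled trace
        else:
            acc = acc + v                        # interpreter keeps going meanwhile
    if compiler is not None:
        compiler.join()
    return acc

print(sum_loop(range(100000)))                   # 4999950000
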
HotpathVM: an effective JIT compiler for resource-constrained devices
TLDR
A just-in-time compiler for a Java VM is presented that is small enough to fit on resource-constrained devices yet is surprisingly effective; benchmarks show a speedup that in some cases rivals heavyweight just-in-time compilers.
YETI: a graduallY extensible trace interpreter
TLDR
This paper describes how callable bodies help the Yeti interpreter to efficiently identify and run traces, and how the closely coupled dynamic compiler can fall back on the interpreter in various ways, permitting an incremental approach.
Measurement and Application of Dynamic Receiver Class Distributions
TLDR
This work applies dynamic profile information to determine the dynamic execution frequency distributions of the classes of receivers at call sites and shows that these distributions are heavily skewed towards the most commonly occurring receiver class across several different languages.
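A small, hypothetical Python sketch of the measurement itself (names invented): record, per call site, which receiver classes actually occur, then report how skewed the distribution is toward the most common class.

from collections import Counter

call_site_profiles = {}                          # call-site id -> Counter of receiver classes

def profiled_send(site_id, receiver, method_name, *args):
    profile = call_site_profiles.setdefault(site_id, Counter())
    profile[type(receiver).__name__] += 1        # record the receiver's class
    return getattr(receiver, method_name)(*args)

for receiver in ["a", "b", "c", 42]:
    profiled_send("site#1", receiver, "__str__")

profile = call_site_profiles["site#1"]
top_class, top_count = profile.most_common(1)[0]
print(dict(profile))                             # {'str': 3, 'int': 1}
print("most common receiver class: %s (%.0f%% of sends)"
      % (top_class, 100.0 * top_count / sum(profile.values())))
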
Efficient bytecode verification and compilation in a virtual machine
TLDR
The proposed prototype virtual machine can generate code 350 times faster than existing desktop VMs, and is 30 times more memory efficient, yet achieves execution performance that is much higher than that of existing embedded VMs and closer to the performance of heavyweight desktop systems.
Optimizing direct threaded code by selective inlining
TLDR
It is demonstrated that a few simple techniques make it possible to create highly portable dynamic translators that can attain as much as 70% of the performance of optimized C for certain numerical computations.
Starkiller: A Static Type Inferencer and Compiler for Python
TLDR
Early numeric benchmarks show that Starkiller-compiled code performs almost as well as hand-made C code and substantially better than alternative Python compilers.
Representation-based just-in-time specialization and the psyco prototype for python (A. Rigo, PEPM '04, 2004)
TLDR
The Psyco prototype for the Python language is presented, along with just-in-time specialization, or specialization by need, which introduces the "unlifting" ability for a value to be promoted from run-time to compile-time during specialization, the inverse of the lift operator of partial evaluation.
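A hypothetical Python sketch of promotion/"unlifting" (the power example and the cache are invented, not Psyco's machinery): a run-time value reaching a promotion point is treated as a compile-time constant, and a residual function specialized on that constant is generated once and cached per promoted value.

specialized_cache = {}              # promoted exponent -> residual specialized function

def power(base, exponent):
    # Promotion point: the run-time value of `exponent` becomes a constant
    # of the residual code generated for it.
    residual = specialized_cache.get(exponent)
    if residual is None:
        residual = specialize_power(exponent)
        specialized_cache[exponent] = residual
    return residual(base)

def specialize_power(exponent):
    """Build a residual function in which `exponent` is a known constant."""
    def residual(base):
        result = 1
        for _ in range(exponent):   # bound is fixed; a real specializer could unroll it
            result *= base
        return result
    return residual

print(power(2, 10), power(3, 10))   # both calls reuse the exponent=10 version
print(list(specialized_cache))      # [10]
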
Dynamo: a transparent dynamic optimization system
We describe the design and implementation of Dynamo, a software dynamic optimization system that is capable of transparently improving the performance of a native instruction stream as it executes on…