A brief history of just-in-time

@article{Aycock2003ABH,
  title={A brief history of just-in-time},
  author={John Aycock},
  journal={ACM Comput. Surv.},
  year={2003},
  volume={35},
  pages={97--113}
}
  • Published 1 June 2003
Software systems have been using "just-in-time" compilation (JIT) techniques since the 1960s. Broadly, JIT compilation includes any translation performed dynamically, after a program has started execution. We examine the motivation behind JIT compilation and constraints imposed on JIT compilation systems, and present a classification scheme for such systems. This classification emerges as we survey forty years of JIT work, from 1960--2000. 
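The abstract's broad definition — any translation performed dynamically, after a program has started execution — can be illustrated with a minimal sketch. This toy example is not from the paper; it uses Python's built-in `compile` and `exec` to generate and translate specialized code at run time, in the same spirit as the run-time specializers the survey classifies.

```python
# A minimal, illustrative sketch of "translation performed dynamically,
# after a program has started execution": a power function is generated
# and compiled at run time, specialized for a fixed exponent n.

def make_power(n):
    """Generate and compile a specialized x**n at run time."""
    # Build source for an unrolled multiply chain: x * x * ... * x
    body = " * ".join(["x"] * n) if n > 0 else "1"
    src = f"def power(x):\n    return {body}\n"
    namespace = {}
    # The dynamic translation step: source -> bytecode -> callable,
    # all while the host program is already running.
    exec(compile(src, "<generated>", "exec"), namespace)
    return namespace["power"]

cube = make_power(3)
print(cube(4))  # 64
```

Real JIT systems translate to native machine code and decide *what* to compile based on observed execution, but the essential property — deferring translation until run time, when more information is available — is the same.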

Citations

Infrastructures and Compilation Strategies for the Performance of Computing Systems
This document presents our main contributions to the field of compilation, and more generally to the quest of performance of computing systems. It is structured by type of execution environment,
Trace-based compilation and optimization in meta-circular virtual execution environments
TLDR
This dissertation explores an alternative approach in which only truly hot code paths are ever compiled, which compiles significantly less code and improves the performance of both statically and dynamically typed programming languages.
JIT through the ages: Evolution of just-in-time compilation from theoretical performance improvements to smartphone runtime and browser optimizations
This paper is a study on just-in-time compilation and traces its evolution from being a theoretical performance optimization to a technology that provides concrete speed-ups for constrained
Trace-based just-in-time compilation for lazy functional programming languages
TLDR
This thesis investigates the viability of trace-based just-in-time (JIT) compilation for optimising programs written in the lazy functional programming language Haskell, and describes Lambdachine, a trace-based JIT compiler which implements most of the pure subset of Haskell.
Generalized just-in-time trace compilation using a parallel task farm in a dynamic binary translator
TLDR
An industry-strength, LLVM-based parallel DBT implementing the ARCompact ISA is evaluated against three benchmark suites and speedups of up to 2.08 on a standard quad-core Intel Xeon machine are demonstrated.
OCamlJIT 2.0 - Faster Objective Caml
TLDR
The current state of an ongoing research project to improve the performance of the OCaml byte-code interpreter using Just-In-Time native code generation is presented, its design and implementation is described, and performance measures are given.
Hybrid Java Compilation of Just-in-Time and Ahead-of-Time for Embedded Systems
TLDR
This paper proposes a hybrid Java compilation approach and performs an initial case study with a hybrid environment, which is constructed simply by merging an existing AOTC and a JITC for the same Java virtual machine.
MicroJIT: a lightweight, just-in-time compiler to improve startup times
TLDR
This thesis investigates whether using two different JIT compilers in the same JVM can improve startup time; experimental results show that enabling MicroJIT, compared to the default configuration, significantly reduces the startup time of the considered benchmarks.
A Sampling Profiler for a JIT Compiler
TLDR
The use of a sampling profiler to monitor native code without instrumentation is proposed; based on the collected profiles, the system can detect when the native code produced by Ř is specialized for stale type information and trigger recompilation for more specific type information.
Compiler Optimizations for a Time-constrained Environment
TLDR
A new analysis is presented, memory dependence analysis, which amortizes the cost of performing memory access analysis to a level that is acceptable for dynamic compilation, and presents two memory access optimizations based on this new analysis.

References

SHOWING 1-10 OF 136 REFERENCES
Does “just in time” = “better late than never”?
TLDR
A new method is introduced--the continuous compiler-- that can outperform just-in-time systems by overlapping compilation with program interpretation and native execution, thereby obtaining improved performance.
The Wonder Years of Sequential Prolog Implementation
TLDR
This report surveys the major developments in sequential Prolog implementation during the period 1983-1993 and extrapolates current trends regarding the evolution of sequential logic languages, their implementation, and their role in the marketplace.
Fast, effective code generation in a just-in-time Java compiler
TLDR
The structure of a Java JIT compiler for the Intel Architecture is presented, the lightweight implementation of JIT compilation optimizations are described, and the performance benefits and tradeoffs of the optimizations are evaluated.
The Dynamic Incremental Compiler of APL\3000
TLDR
APL\3000 employs a Dynamic Incremental Compiler to allow all the flexibility of change afforded by interpretation, while giving the added bonus of faster execution for programs run more than once.
Making pure object-oriented languages practical
TLDR
Two new optimization techniques, deferred compilation of uncommon cases and non-backtracking splitting using path objects, have improved compilation speed by more than an order of magnitude and may make pure object-oriented languages practical.
A general approach for run-time specialization and its application to C
TLDR
This paper describes a general approach to run-time specialization that automatically produces source templates at compile time, and transforms them so that they can be processed by a standard compiler, and is efficient, as shown by the implementation for the C language.
Design and Implementation of Pep, A Java Just-in-Time Translator
  • Ole Agesen
  • Computer Science
    Theory Pract. Object Syst.
  • 1997
TLDR
Pep is a just-in-time compiler from Java bytecodes to Self; following translation by Pep, Java programs can execute on the Self virtual machine and benefit from the optimizations performed by Self's compiler.
DAISY: Dynamic Compilation for 100% Architectural Compatibility
  • K. Ebcioglu, E. Altman
  • Computer Science
    Conference Proceedings. The 24th Annual International Symposium on Computer Architecture
  • 1997
TLDR
The architectural requirements for such a VLIW, to deal with issues including self-modifying code, precise exceptions, and aggressive reordering of memory references in the presence of strong MP consistency and memory mapped I/O are discussed.
Adaptive systems for the dynamic run-time optimization of programs.
TLDR
This thesis investigates adaptive compiler systems that perform code optimizations based on the dynamic behavior of the program, in contrast to a fixed code-generation strategy in which a predetermined set of optimizations is applied at compile time to an entire program.