LLVM: a compilation framework for lifelong program analysis & transformation

@inproceedings{Lattner2004LLVMAC,
  title={LLVM: a compilation framework for lifelong program analysis \& transformation},
  author={Chris Lattner and Vikram S. Adve},
  booktitle={International Symposium on Code Generation and Optimization (CGO 2004)},
  year={2004},
  pages={75-86}
}
  • Chris Lattner, V. Adve
  • Published 2004
  • Computer Science
  • International Symposium on Code Generation and Optimization, 2004. CGO 2004.
We describe LLVM (low level virtual machine), a compiler framework designed to support transparent, lifelong program analysis and transformation for arbitrary programs, by providing high-level information to compiler transformations at compile-time, link-time, run-time, and in idle time between runs. LLVM defines a common, low-level code representation in static single assignment (SSA) form, with several novel features: a simple, language-independent type system that exposes the primitives…
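The core of the framework is this typed, SSA-form intermediate representation, which front ends build programmatically through the LLVM C++ API. As a minimal sketch, not taken from the paper and with headers and signatures that vary across LLVM releases, the following C++ program constructs a trivial function in that representation and prints its textual IR; every value (such as %sum below) is defined exactly once, which is the SSA property the abstract refers to.

// demo.cpp -- build a tiny module with the LLVM C++ API and print its IR.
// Typical build (exact components vary by installation):
//   clang++ demo.cpp $(llvm-config --cxxflags --ldflags --libs core support)
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/Verifier.h"
#include "llvm/Support/raw_ostream.h"

int main() {
  llvm::LLVMContext ctx;
  llvm::Module mod("demo", ctx);
  llvm::IRBuilder<> builder(ctx);

  // i32 add(i32, i32): the language-independent type system describes the
  // signature without reference to any particular source language.
  llvm::Type *i32 = builder.getInt32Ty();
  llvm::FunctionType *fnType =
      llvm::FunctionType::get(i32, {i32, i32}, /*isVarArg=*/false);
  llvm::Function *fn = llvm::Function::Create(
      fnType, llvm::Function::ExternalLinkage, "add", &mod);

  llvm::BasicBlock *entry = llvm::BasicBlock::Create(ctx, "entry", fn);
  builder.SetInsertPoint(entry);

  // %sum is defined exactly once -- the IR is in SSA form by construction.
  llvm::Value *sum = builder.CreateAdd(fn->getArg(0), fn->getArg(1), "sum");
  builder.CreateRet(sum);

  llvm::verifyModule(mod, &llvm::errs());  // sanity-check the generated IR
  mod.print(llvm::outs(), nullptr);        // emit the textual representation
  return 0;
}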
From ASTs to Machine Code with LLVM
A compiler is a program that translates source code written in a particular language into another language. Internally, the whole process is typically split into multiple stages that handle one…
A LLVM Extension for the Generation of Low Overhead Runtime Program Specializer
TLDR
This paper introduces an LLVM extension that aims to generate low-overhead runtime program specializers and uses dedicated passes and a modified back-end to generate a specialized code generator, removing the need to manipulate any IR at run-time.
Compiling with Continuations and LLVM
TLDR
A new LLVM-based backend that supports heap-allocated continuation closures, which enables constant-time callcc and very lightweight multithreading and should be useful for other compilers, such as Standard ML of New Jersey, that use heap-allocated continuation closures.
Instrew: leveraging LLVM for high performance dynamic binary instrumentation
TLDR
A novel dynamic binary instrumentation framework, Instrew, which closes these gaps by (a) leveraging the LLVM compiler infrastructure for high-quality code optimization and generation and (b) enabling process isolation between the target code and the instrumenter.
Simple optimizing JIT compilation of higher-order dynamic programming languages
TLDR
This work proposes a new approach and new techniques to build optimizing just-in-time compilers for dynamic languages with relatively good performance and low development effort, and presents the experience of building a JIT compiler using these techniques for the Scheme language.
Flexible on-stack replacement in LLVM
TLDR
A framework for OSR is presented that introduces novel ideas, combines features of existing techniques that no previous solution provided simultaneously, and improves the state of the art in the optimization of the feval instruction, a performance-critical construct of the MATLAB language.
Compiling with Continuations and LLVM
  • Kavon Farvardin
LLVM is an infrastructure for code generation and low-level optimizations, which has been gaining popularity as a backend for both research and industrial compilers, including many compilers for…
Description and Optimization of Abstract Machines in a Dialect of Prolog
TLDR
It is shown how the semantics of most basic components of an efficient virtual machine for Prolog can be described using (a variant of) Prolog, and how these descriptions are compiled to C and assembled to build a complete bytecode emulator.
Trace-based just-in-time compilation for lazy functional programming languages
TLDR
This thesis investigates the viability of trace-based just-in-time (JIT) compilation for optimising programs written in the lazy functional programming language Haskell and presents Lambdachine, a trace-based JIT compiler which implements most of the pure subset of Haskell.
Language-parametric compiler validation with application to LLVM
TLDR
Keq is presented, the first program equivalence checker that is parametric to the input and output language semantics and has no dependence on the transformation between the input and output programs.

References

SHOWING 1-10 OF 54 REFERENCES
LLVA: A Low-level Virtual Instruction Set Architecture
TLDR
This paper proposes a novel virtual ISA (LLVA) and a translation strategy for implementing it on arbitrary hardware that enables offline translation and transparent offline caching of native code and profile information, while remaining completely OS-independent.
A provably sound TAL for back-end optimization
TLDR
This work designed and implemented a low-level typed assembly language (LTAL) with a semantic model and established its soundness from the model, and built a prototype system that compiles most of core ML to Sparc code.
Efficient and language-independent mobile programs
TLDR
Omniware uses software fault isolation, a technology developed to provide safe extension code for databases and operating systems, to achieve a unique combination of language-independence and excellent performance.
DAISY: Dynamic Compilation for 100% Architectural Compatibility
  • K. Ebcioglu, E. Altman
  • Computer Science
  • Conference Proceedings. The 24th Annual International Symposium on Computer Architecture
  • 1997
TLDR
The architectural requirements for such a VLIW, to deal with issues including self-modifying code, precise exceptions, and aggressive reordering of memory references in the presence of strong MP consistency and memory-mapped I/O, are discussed.
A practical system for intermodule code optimization at link-time
TLDR
A system that takes a collection of object modules constituting the entire program and converts the object code into a symbolic Register Transfer Language form, which is then transformed by intermodule optimization and finally converted back into object form, to explore the problem of code optimization at link-time.
Simple and effective link-time optimization of Modula-3 programs
TLDR
Optimization techniques are implemented in mld, a retargetable linker for the MIPS, SPARC, and Intel 486; mld links a machine-independent intermediate code that is suitable for link-time optimization and code generation.
The Transmeta Code Morphing™ Software: using speculation, recovery, and adaptive retranslation to address real-life challenges
TLDR
The Crusoe paradigm of aggressive speculation, recovery to a consistent x86 state using unique hardware commit-and-rollback support, and adaptive retranslation when exceptions occur too often to be handled efficiently by interpretation is presented.
Implementing typed intermediate languages
TLDR
This paper describes the experience with implementing the FLINT typed intermediate language in the SML/NJ production compiler and observes that a type-preserving compiler will not scale to handle large types unless all of its type-preserving stages preserve the asymptotic time and space usage in representing and manipulating types.
Region-based memory management in Cyclone
TLDR
This paper focuses on the region-based memory management of Cyclone and its static typing discipline, and combines default annotations, local type inference, and a novel treatment of region effects to reduce the programmer's annotation burden.
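As a rough illustration of the idea, and emphatically not Cyclone code, the hypothetical C++ arena below captures what region-based memory management means operationally: objects are bump-allocated into a region and their storage is reclaimed all at once when the region is destroyed, the discipline that Cyclone's static type system makes safe.

#include <cstddef>
#include <iostream>
#include <memory>
#include <new>
#include <vector>

// Hypothetical arena sketching region-based allocation; not Cyclone's runtime.
class Region {
  static constexpr std::size_t kChunkSize = 4096;
  std::vector<std::unique_ptr<std::byte[]>> chunks_;
  std::size_t used_ = kChunkSize;  // forces a fresh chunk on first allocation

public:
  // Bump-allocate size bytes. Assumes size fits in one chunk and align is a
  // power of two no stricter than alignof(std::max_align_t).
  void *allocate(std::size_t size, std::size_t align) {
    std::size_t offset = (used_ + align - 1) & ~(align - 1);
    if (offset + size > kChunkSize) {  // current chunk exhausted: start another
      chunks_.push_back(std::make_unique<std::byte[]>(kChunkSize));
      offset = 0;
    }
    used_ = offset + size;
    return chunks_.back().get() + offset;
  }
  // Destroying the Region frees every chunk, i.e. the whole region at once.
  // Only trivially destructible objects are handled in this sketch.
};

struct Point { int x, y; };

int main() {
  Region r;
  // Placement-new into the region; no per-object delete is ever issued.
  Point *p = new (r.allocate(sizeof(Point), alignof(Point))) Point{3, 4};
  std::cout << p->x + p->y << "\n";  // prints 7
  return 0;
}  // r goes out of scope here and everything allocated in it is released.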