SEJITS: Getting Productivity and Performance With Selective Embedded JIT Specialization


Today’s “high productivity” programming languages such as Python lack the performance of harder-to-program “efficiency” languages (CUDA, Cilk, C with OpenMP) that can exploit extensive programmer knowledge of parallel hardware architectures. We combine efficiency-language performance with productivity-language programmability using selective embedded just-in-time specialization (SEJITS). At runtime, we specialize (generate, compile, and execute efficiency-language source code for) an application-specific and platform-specific subset of a productivity language, largely invisibly to the application programmer. Because the specialization machinery is implemented in the productivity language itself, it is easy for efficiency programmers to incrementally add specializers for new domain abstractions, new hardware, or both. SEJITS has the potential to bridge productivity-layer research and efficiency-layer research, allowing domain experts to exploit different parallel hardware architectures with a fraction of the programmer time and effort usually required.
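The runtime flow the abstract describes — generate efficiency-language source, compile it, call it from the productivity language, and fall back to the productivity-language implementation when specialization is unavailable — can be illustrated with a minimal toy sketch in Python. This is not the paper's actual implementation; the function `specialize_scale` and the generated C kernel are hypothetical, chosen only to show the generate/compile/execute/fallback pattern.

```python
import ctypes
import os
import subprocess
import tempfile

def specialize_scale(factor):
    """Toy specializer: emit C source for 'multiply each element by factor',
    compile it at runtime, and return a callable bound to the compiled code.
    If compilation fails (no C compiler, etc.), fall back to pure Python,
    mirroring SEJITS's 'largely invisible to the application programmer' goal."""
    c_src = f"""
    void scale(double *xs, int n) {{
        for (int i = 0; i < n; i++) xs[i] *= {factor};
    }}
    """
    try:
        workdir = tempfile.mkdtemp()
        src_path = os.path.join(workdir, "scale.c")
        lib_path = os.path.join(workdir, "scale.so")
        with open(src_path, "w") as f:
            f.write(c_src)
        # Generate and compile efficiency-language (C) source at runtime.
        subprocess.check_call(["cc", "-O2", "-shared", "-fPIC",
                               "-o", lib_path, src_path])
        dll = ctypes.CDLL(lib_path)

        def fast(xs):
            n = len(xs)
            buf = (ctypes.c_double * n)(*xs)
            dll.scale(buf, n)
            return list(buf)
        return fast
    except Exception:
        # Fallback: stay in the productivity language.
        return lambda xs: [x * factor for x in xs]

scale2 = specialize_scale(2.0)
print(scale2([1.0, 2.0, 3.0]))  # [2.0, 4.0, 6.0]
```

Either path returns the same result; the specialized path simply runs as compiled code. A real specializer would also cache compiled libraries across calls and select code templates per target platform.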

4 Figures and Tables



Cite this paper

@inproceedings{Catanzaro2009SEJITSGP,
  title={SEJITS: Getting Productivity and Performance With Selective Embedded JIT Specialization},
  author={Bryan Catanzaro and Shoaib Kamil and Yunsup Lee and Krste Asanovic and James Demmel and Kurt Keutzer and John Shalf and Kathy Yelick and Armando Fox},
  year={2009}
}