For network computing on desktop machines, fast execution of Java bytecode programs is essential because these machines are expected to run substantial application programs written in Java. Higher Java performance can be achieved by Just-in-Time (JIT) compilers, which translate the stack-based bytecode into register-based machine code on demand. One crucial …
We describe a new algorithm for parallelization of sequential code that eliminates anti- and output dependences by renaming registers on an as-needed basis during scheduling. A dataflow attribute at the beginning of each basic block indicates what operations are available for moving up through this basic block. Scheduling consists of choosing the best …
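The renaming idea can be illustrated with a toy sketch. The register names and values below are hypothetical, and the sketch only demonstrates why renaming a destination register removes an anti (write-after-read) dependence; it is not the paper's scheduling algorithm:

```java
import java.util.HashMap;
import java.util.Map;

public class RenamingSketch {
    // Original order: i1 must run before i2, because i2 overwrites r2
    // while i1 still reads it (an anti dependence, write-after-read).
    static Map<String, Integer> original(int r2, int r3) {
        Map<String, Integer> r = new HashMap<>(Map.of("r2", r2, "r3", r3));
        r.put("r1", r.get("r2") + 1);  // i1: r1 = r2 + 1
        r.put("r2", r.get("r3") * 2);  // i2: r2 = r3 * 2
        return r;
    }

    // After renaming i2's target r2 -> fresh r4, i2 can be hoisted above i1;
    // later uses of the old r2 are patched to read r4 instead.
    static Map<String, Integer> renamed(int r2, int r3) {
        Map<String, Integer> r = new HashMap<>(Map.of("r2", r2, "r3", r3));
        r.put("r4", r.get("r3") * 2);  // i2': r4 = r3 * 2 (moved up)
        r.put("r1", r.get("r2") + 1);  // i1 : r1 = r2 + 1 (still sees old r2)
        return r;
    }

    public static void main(String[] args) {
        Map<String, Integer> a = original(10, 5), b = renamed(10, 5);
        // Both schedules compute the same values; the dependence is gone.
        System.out.println(a.get("r1") + " " + a.get("r2")); // 11 10
        System.out.println(b.get("r1") + " " + b.get("r4")); // 11 10
    }
}
```

The "as-needed" aspect in the abstract means such a rename is introduced only when the scheduler actually wants to move an operation past a conflicting write, rather than renaming everything up front.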
Graph-coloring register allocators eliminate copies by coalescing the source and target nodes of a copy if they do not interfere in the interference graph. Coalescing, however, can be harmful to the colorability of the graph because it tends to yield a graph with nodes of higher degrees. Unlike "aggressive coalescing", which coalesces any pair of …
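The degree problem can be seen in a minimal sketch. The graph below is hypothetical and the coalescing test is the basic non-interference check, not any particular allocator's heuristic:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class CoalesceSketch {
    final Map<String, Set<String>> g = new HashMap<>(); // interference graph

    void edge(String a, String b) {
        g.computeIfAbsent(a, k -> new HashSet<>()).add(b);
        g.computeIfAbsent(b, k -> new HashSet<>()).add(a);
    }

    // Coalesce the copy "b = a": merge the two nodes if they do not interfere.
    // The merged node inherits both neighbor sets, so its degree can exceed
    // either original degree -- the colorability risk the abstract mentions.
    String coalesce(String a, String b) {
        if (g.getOrDefault(a, Set.of()).contains(b)) return null; // interfere: keep the copy
        Set<String> merged = new HashSet<>(g.getOrDefault(a, Set.of()));
        merged.addAll(g.getOrDefault(b, Set.of()));
        String ab = a + "&" + b;
        g.remove(a);
        g.remove(b);
        for (Set<String> nbrs : g.values()) {
            if (nbrs.remove(a) | nbrs.remove(b)) nbrs.add(ab); // non-short-circuit: remove both
        }
        g.put(ab, merged);
        return ab;
    }

    public static void main(String[] args) {
        CoalesceSketch cs = new CoalesceSketch();
        cs.edge("x", "t1");
        cs.edge("y", "t2");                // x and y are copy-related; degree 1 each
        String xy = cs.coalesce("x", "y"); // copy eliminated
        System.out.println(xy + " has degree " + cs.g.get(xy).size()); // degree 2 > 1
    }
}
```

Conservative schemes (e.g. the Briggs or George tests) would add a check here that the merged node's degree stays low enough not to endanger colorability.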
The Java language provides exceptions in order to handle errors gracefully. However, the presence of exception handlers complicates the job of a JIT (Just-in-Time) compiler, including optimizations and register allocation, even though exceptions are rarely used in most programs. This paper describes some mechanisms for removing overheads imposed by the …
More than half of the smartphones worldwide currently run the Android platform, which employs Java for programming its applications. Android Java is executed by the Dalvik virtual machine (VM), which is quite different from traditional Java VMs such as Oracle's HotSpot VM. That is, Dalvik employs register-based bytecode while …
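The contrast between the two bytecode styles can be sketched with a toy evaluator for `c = a + b`. The mnemonics in the comments are the familiar JVM and Dalvik ones, but the Java code itself is only an illustration, not either VM's implementation:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class BytecodeStyles {
    // Stack-based (JVM-style): operands are implicit, flowing through a stack.
    //   iload a; iload b; iadd; istore c   -- four instructions
    static int stackAdd(int a, int b) {
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(a);                          // iload a
        stack.push(b);                          // iload b
        stack.push(stack.pop() + stack.pop());  // iadd
        return stack.pop();                     // istore c
    }

    // Register-based (Dalvik-style): operands are named explicitly.
    //   add-int vC, vA, vB   -- one instruction
    static int registerAdd(int[] v, int vA, int vB, int vC) {
        v[vC] = v[vA] + v[vB];                  // add-int vC, vA, vB
        return v[vC];
    }

    public static void main(String[] args) {
        System.out.println(stackAdd(2, 3));                           // 5
        System.out.println(registerAdd(new int[]{2, 3, 0}, 0, 1, 2)); // 5
    }
}
```

The register form needs fewer, wider instructions; the stack form needs more, denser ones, which is one reason translating between the two models is a distinct compilation problem.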
Java just-in-time (JIT) compilers improve the performance of a Java virtual machine (JVM) by translating Java bytecode into native machine code on demand. One important problem in Java JIT compilation is how to map stack entries and local variables to registers efficiently and quickly, since register-based computations are much faster than memory-based …
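One common way to approach this mapping, sketched below under simplifying assumptions (straight-line code, integer ops only, invented names like `translate` and `t0`), is to simulate the operand stack at compile time: each stack slot holds a register name rather than a value, so pushes and pops generate no runtime code and only the arithmetic survives as register instructions. This is an illustrative sketch, not the specific mapping algorithm of any particular JIT:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class StackToReg {
    // Abstract-interpret the operand stack: stack slots carry register names.
    static List<String> translate(String[] bytecode) {
        Deque<String> stack = new ArrayDeque<>(); // register names, not values
        List<String> out = new ArrayList<>();
        int temp = 0;
        for (String op : bytecode) {
            if (op.startsWith("iload_")) {
                stack.push("local" + op.charAt(6));   // load -> just a name, no code
            } else if (op.equals("iadd") || op.equals("imul")) {
                String b = stack.pop(), a = stack.pop(), t = "t" + temp++;
                out.add(t + " = " + a + (op.equals("iadd") ? " + " : " * ") + b);
                stack.push(t);                        // result lives in a temp register
            } else if (op.startsWith("istore_")) {
                out.add("local" + op.charAt(7) + " = " + stack.pop());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Bytecode for: local3 = (local0 + local1) * local2
        String[] code = {"iload_0", "iload_1", "iadd", "iload_2", "imul", "istore_3"};
        translate(code).forEach(System.out::println);
    }
}
```

Six stack instructions collapse into three register instructions; the remaining problem, which the abstract points at, is assigning the `local*` and `t*` names to a finite set of physical registers quickly enough for JIT use.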
Instruction-level parallelism (ILP) in nonnumerical code is regarded as scarce and hard to exploit due to its irregularity. In this article, we introduce a new code-scheduling technique for irregular ILP called "selective scheduling" which can be used as a component for superscalar and VLIW compilers. Selective scheduling can compute a wide set …