Tamarin on LLVM - More Numbers

After much tweaking, profiling, and hacking, we finally have a decent range of benchmarks and performance numbers for running Tamarin with LLVM's JIT. The initial result, which was neutral, stays the same across both the V8 and Sunspider benchmark suites, on fully typed, partially typed, and untyped ActionScript code.

Methodology

Each benchmark is compared against NanoJIT on fully type-annotated ActionScript code. First, we fully type annotated the ActionScript source and looped each benchmark so that execution takes 30 seconds with NanoJIT. We then modified each of the partially typed and untyped benchmarks to execute the same number of loop iterations. All evaluations were performed on 32-bit Windows 7, using an Intel Core 2 Duo E8400 3.0 GHz processor and 4 GB of RAM.

We have three test configurations:

  • An unmodified tamarin-redux branch with NanoJIT.
  • A base TESSA configuration which performs no high level optimizations and is a straightforward translation from TESSA IR to LLVM IR.
  • An optimized TESSA configuration which performs primitive method inlining, type inference, and finally dead code elimination. The optimized TESSA IR is then translated to LLVM IR.

Every benchmark graph in the following sections is normalized to NanoJIT on typed code. A value of 100% indicates that the configuration is equal to NanoJIT on typed code; a value less than 100% means the configuration is slower than NanoJIT, and a value greater than 100% means it is faster.

Full disclosure: v8/earley-boyer and the sunspider benchmarks regexp-dna, date-format-xparb, string-base64, string-tagcloud, and s3d-raytrace are not represented in the following charts. V8 earley-boyer, regexp-dna, and string-base64 hit known verifier bugs that have since been fixed in an updated tamarin-redux branch. The sunspider benchmarks date-format-xparb and string-tagcloud use eval, which is not supported in ActionScript. The s3d-raytrace benchmark hits a known JIT bug in the Tamarin-LLVM branch.

Fully Typed Code

When we look at the geometric mean across all test cases, we see that the performance numbers aren't really moving that much. However, looking at the individual test cases shows a wide swing in both directions. Overall, LLVM's JIT is roughly equal to NanoJIT. Let's first look at the big wins.

Sunspider bitwise-and is 2.6X faster with LLVM than with NanoJIT because of register allocation. The test case is a simple loop that performs one bitwise operation on one value. LLVM is able to keep both the loop counter and the bitwise-and value in registers, while NanoJIT must load and store each value on every loop iteration. v8/Crypto's performance gains originate from inlining the number-to-integer conversion, which is detailed in the original Tamarin on LLVM post. Type inference buys a little more performance on top of that because we keep some temporary values as numbers rather than downcasting them to integers.
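For reference, the hot loop in that benchmark is essentially the following; this is a hedged C++ analogue of the ActionScript source, with constants that mirror the benchmark's shape rather than quoting it exactly:

#include <cstdint>

// Rough C++ analogue of the sunspider bitops-bitwise-and hot loop.
int64_t runBitwiseAnd() {
    int64_t bitwiseAndValue = 4294967296LL;      // starts as a value wider than 32 bits
    for (int i = 0; i < 600000; i++)
        bitwiseAndValue = bitwiseAndValue & i;   // one bitwise op per iteration; both values can stay in registers
    return bitwiseAndValue;
}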

The biggest loss comes in the sunspider controlflow-recursive benchmark, which computes Fibonacci numbers recursively. At each call to fib(), the property definition for fib() has to be resolved and then called. The property lookup is a PURE method, meaning it has no side effects, so it can be removed by common subexpression elimination. LLVM's current common subexpression eliminator cannot eliminate the lookup call, so it executes twice as many lookups as NanoJIT, resulting in the performance loss.

The other big hit is with v8/splay, where type inference actually loses us performance. This originates from a number of poor decisions in our type inference algorithm, which create more expensive type conversions than necessary. For example, two variables are declared as integers, but we determine that one value is an integer and the other an unsigned integer. To actually compare the two variables, we have to convert both to floating point numbers and perform a floating-point compare.
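As a hedged illustration of that case (the variable names are made up, not taken from splay), the comparison ends up compiled along these lines:

#include <cstdint>

// Both variables were declared as int in the source, but inference decides one is
// signed and one is unsigned, so the compare is done by widening both to double.
bool lessThan(int32_t leftValue, uint32_t rightValue) {
    // two int-to-double conversions plus a floating point compare,
    // instead of a single integer compare
    return static_cast<double>(leftValue) < static_cast<double>(rightValue);
}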

Partially Typed Code

Partially typed code has all global variables and method signatures fully type annotated. All local variables in a method remain untyped. Here are the results:

The most striking thing is that you instantly lose 50% of your performance just from the lack of type information. Thankfully, some basic type inference is able to recover ~20% of that loss. Otherwise, the same story arises: NanoJIT holds its own against LLVM's JIT, as both have roughly equal performance. We also start to see the payoff of high level optimizations such as type inference.

The big wins again come in the bitops benchmarks for the same reason LLVM won in the typed benchmark. LLVM has a better register allocator and can keep values in registers rather than loading / storing values at each loop iteration. TESSA with type inference is able to win big on the sunspider/s3d and sunspider/math benchmarks for two key reasons. First, we're able to discern that some variables are arrays. Next, each array variable is being indexed by an integer rather than an any value. We can then compile array access code rather than generic get/set property code, enabling the performance gain. We also win quite a bit in v8/deltablue because we inline deltablue's many small methods and infer types through the inlined method.

However, we suffer big performance losses in cases such as access-nbody because the benchmark is dominated by property access code. Since we cannot statically bind any property names, property lookup and resolution become the major limiting factor in the benchmark. Finally, the biggest performance loss due to TESSA optimizations occurs in math-cordic, for an unfortunate reason. We are able to determine that one variable is a floating point number, while another variable remains the any type. To compare the two values, we have to convert the number to the any type at each loop iteration. The conversion also requires the GC to allocate 8 bytes of memory, so we pay both for the conversion itself and for the extra GC work required to hold the double.
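As a hedged sketch of why that conversion hurts (the heap interface and tag value below are illustrative stand-ins, not the actual MMgc/Tamarin API), boxing a Number as an "any" value looks roughly like this:

#include <cstddef>
#include <cstdint>

// Illustrative stand-ins; the real VM uses MMgc and Tamarin's own tag constants.
typedef intptr_t Atom;
struct GCHeap { void* alloc(size_t bytes); };
const Atom kDoubleTag = 7;                    // hypothetical 3-bit tag for "boxed double"

Atom boxDouble(GCHeap& gc, double value) {
    // every conversion pays for an 8-byte GC allocation to hold the double...
    double* heapValue = static_cast<double*>(gc.alloc(sizeof(double)));
    *heapValue = value;
    // ...plus the tagging work to build the Atom itself
    return reinterpret_cast<Atom>(heapValue) | kDoubleTag;
}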

Finally, we see that some benchmarks don't suffer at all from the lack of type information such as the string and regexp benchmarks. These benchmarks do not stress JIT compiled code and instead only stress VM C++ library code.

Untyped Code

Untyped code is essentially JavaScript code with all variables untyped. And the results are:

Overall, you lose another 5-10% going from partially typed code to untyped code, and our high level optimizations aren't able to recover as much as we hoped. The general trend, however, is the same: LLVM's JIT and NanoJIT have essentially the same performance without high level optimizations. The wins and losses occur on the same tests for the same reasons; all of the performance numbers are just lower.

LLVM Optimization Levels

When I first saw the results, I thought I must really be doing something incredibly wrong. I read the LLVM docs and Google-fu'd my way around to see what knobs LLVM had that could change performance. While most GCC-style flags work with LLVM, the most important one was the optimization level. LLVM has four optimization levels that roughly correspond to GCC's -O flags: NONE (-O0), LESS (-O1), DEFAULT (-O2), and AGGRESSIVE (-O3). Here are the results of tweaking LLVM's optimization level on the V8 suite:

All results are against the baseline NONE level. We see that LLVM's optimizations improve performance quite a bit, especially in the crypto benchmark. Beyond the LESS level, however, we unfortunately aren't seeing any further performance gains. Thus, all benchmarks shown previously used the LESS optimization level.
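For context, here is a hedged sketch of how that level gets selected when creating the JIT with the LLVM 2.8-era C++ API (header paths and builder details vary between LLVM versions):

#include "llvm/Module.h"
#include "llvm/ExecutionEngine/ExecutionEngine.h"   // EngineBuilder
#include "llvm/ExecutionEngine/JIT.h"               // links in the JIT implementation

llvm::ExecutionEngine* createJit(llvm::Module* module) {
    return llvm::EngineBuilder(module)
        .setEngineKind(llvm::EngineKind::JIT)
        .setOptLevel(llvm::CodeGenOpt::Less)        // NONE/LESS/DEFAULT/AGGRESSIVE ~ -O0/-O1/-O2/-O3
        .create();
}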

What next?

Overall, we see that LLVM's JIT isn't buying us much performance in the context of the Tamarin VM. Our experience leads us to the same conclusion Reid Kleckner reached in his reflections on unladen-swallow: you really need a JIT that can take advantage of high level optimizations. We'll probably leave LLVM's JIT, convert TESSA IR into NanoJIT LIR, and improve NanoJIT as much as we can.

Tamarin on LLVM

LLVM is a great and mature piece of software with lots of support from numerous avenues. Naturally, one of the many ideas floating around is: how would Flash perform if the backend were LLVM instead of NanoJIT? [1] LLVM performs many more optimizations than NanoJIT and has proven to be rock solid. NanoJIT was built to compile code fast [2], not necessarily to generate the best-performing code. In theory, LLVM should easily beat NanoJIT, right? My summer internship was to find out.

Architecture

There are a couple of ways to replace the backend with LLVM. The first would be a direct one-to-one translation from an input ABC file to LLVM IR; from there, LLVM could take care of the rest. This would be the most straightforward way of comparing NanoJIT with LLVM, since NanoJIT translates ABC to LIR and then compiles LIR to x86 machine code. However, we would also waste an opportunity to build a clean architecture that would let us plug in multiple backends.

Instead, we convert an ABC file into a new intermediate representation in SSA form. This representation is code named Type Enriched SSA (TESSA), a new object oriented IR that will retain and use all of the high level information about an ActionScript program. The TESSA IR itself is very much a work in progress. So far, there isn't any type enriching occurring, so for the rest of this post you can think of TESSA as an object oriented instruction hierarchy in SSA form. Once the ABC is translated to TESSA, we "lower" the TESSA IR into LLVM IR, at which point LLVM performs all of its optimizations and generates native machine code. The semantics of the LLVM IR we create are almost identical to the NanoJIT LIR created in the normal Tamarin branch.
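As a hedged sketch of that pipeline (every class name below is illustrative, not taken from the actual TESSA sources), the per-method flow looks roughly like this:

namespace llvm { class Function; }

// Illustrative stand-ins for the pieces described above.
class MethodInfo;                                        // the ABC method being compiled
class TessaMethod;                                       // TESSA IR for one method, in SSA form
class AbcToTessaTranslator { public: TessaMethod* translate(MethodInfo* abcMethod); };
class TessaToLlvmLowering  { public: llvm::Function* lower(TessaMethod* tessaIr); };

llvm::Function* compileWithTessa(MethodInfo* abcMethod,
                                 AbcToTessaTranslator& frontend,
                                 TessaToLlvmLowering& backend) {
    TessaMethod* tessaIr = frontend.translate(abcMethod);   // ABC bytecode -> TESSA (SSA)
    // high level passes (inlining, type inference, dead code elimination) would run here
    return backend.lower(tessaIr);                            // TESSA -> LLVM IR; LLVM JITs it from there
}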

Results

LLVM takes an object oriented approach to compiler construction and thus takes a bit longer to generate machine code than NanoJIT. LLVM takes virtually the same amount of time to start up as NanoJIT, but compiling each method takes longer, which adds up over the course of a benchmark. This is a problem since the V8 suite runs in a few hundred milliseconds. Since we are really interested in the performance of the generated machine code rather than compilation time, we need to minimize the time spent compiling methods. We do this by looping each V8 benchmark a few thousand times so that each benchmark takes about 30 seconds to execute. All test cases have manual type annotations as well. All tests were run on a mid-2010 MacBook Pro with a 2.66 GHz Core i7, running Windows 7 64-bit with LLVM 2.8. The results are:

We see a rather neutral result. Across the six benchmarks, LLVM wins or loses by a very small percentage, some of which is system noise. LLVM wins big on Richards but loses about the same amount on Crypto. What's really going on?

After some profiling runs with VTune, we find that both Splay and RegExp spend most of their time in the VM and less than 2% of total execution time in jitted code. Raytrace spends 10% of its time in jitted code, and we see that LLVM is a little bit faster than NanoJIT, which means LLVM can indeed produce better-performing jit compiled code than NanoJIT. DeltaBlue, Richards, and Crypto spend around 30-50% of total execution time in jit compiled code and are the only test cases that are interesting to study.

DeltaBlue shows barely any difference because the benchmark contains many small method calls. Each method only performs a small calculation such as a comparison on a field or one branch. The execution time is dominated by the overhead of the method calls and therefore the types of optimizations LLVM performs won't help much because there isn't much to optimize. 

Richards is an interesting test because this is where LLVM wins by a decent margin. Here LLVM's optimization passes do quite a bit, with the biggest win coming from the fact that LLVM IR is in SSA form while NanoJIT's LIR isn't. NanoJIT has to spill values out of registers in loops while LLVM is smart enough to keep them in registers, reducing the number of loads and stores in the hottest method by 25%. The whole +10% can't be attributed to one loop; LLVM makes small gains across all the methods in Richards.

Crypto is dominated by one method that performs numerous arithmetic operations in a tight loop and accounts for 90% of the execution time in jit compiled code. LLVM loses here because of an unfortunate instruction scheduling decision that causes 65% more CPU pipeline stalls. However, after inlining the number-to-integer conversion and applying Werner Sharp's patch to inline the vector get/set methods, LLVM's code no longer has the pipeline stall, and NanoJIT and LLVM reach parity on this test.

Overall, the results are great for NanoJIT. They also mean that we have to work on other parts of the VM before Tamarin sees big improvements on the V8 suite. Be forewarned, this isn't a conclusive experiment in any way, shape, or form, and evaluating a whole backend on 4 test cases isn't overwhelming evidence. As we build up TESSA to support more benchmarks, we may find LLVM pulling ahead by a wide margin. Until then, it looks like a rather neutral result.

  1. LightSpark by Alessandro Pignotti is an open source Flash Player implementation that uses LLVM.
  2. Below is a comparison of the compilation time of LLVM versus NanoJIT. This chart is measured in multiples, not percentages (e.g., Crypto compiles 6x slower with LLVM than with NanoJIT). Note that the compilation time cannot be completely attributed to LLVM: the TESSA IR heavily uses the visitor pattern to traverse the object graph, which contributes significantly to the compilation time. I don't have any hard numbers on how much time is spent in LLVM versus the TESSA IR.

    I interpret this finding not as LLVM being slow, but as NanoJIT compiling extremely fast. For example, the Richards benchmark takes 0.035 seconds to execute one iteration with NanoJIT; with LLVM it takes 0.110 seconds.

Updated to include compilation time.

Calling C++ in LIR

JIT compiled code needs to call C++ methods to do the heavy lifting. Metadata about each C++ method, such as its parameters and return type, is needed to create LIR that can call into it. Tamarin provides this through the CallInfo data structure:

struct CallInfo
{
    uintptr_t   _address;        // Address of the method
    uint32_t    _argtypes:27;    // 9 3-bit fields indicating arg type
    AbiKind     _abi:3;          // Calling convention (see AbiKind below)
 
    verbose_only ( const char* _name; )  // Method name, only present in verbose builds
};

The _address field is the address of the C++ method.

The _argtypes field is a bit encoding of the number of parameters, the types of each parameter, and the return type. For example, consider a C++ method that has the declaration:

void someFunction(int someParameter, double otherParameter);

 

Void types are represented by the decimal number 0 (binary 000). Integer types are represented by the decimal number 2 (binary 010). Double types are represented by the decimal number 1 (binary 001). The _argtypes, in binary would look like:

        010 | 001 | 000

The parameters are laid out in reverse order, with the return type being the rightmost 3 bits. Since there are only 27 bits, LIR can only call C++ methods that have at most 8 parameters.
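As a hedged sketch (not the actual NanoJIT macros), the _argtypes word for someFunction could be assembled like this, matching the layout shown above:

#include <cstdint>

// 3-bit type codes from the text above: void = 000, double = 001, int = 010.
enum ArgType { ARG_VOID = 0, ARG_DOUBLE = 1, ARG_INT = 2 };

uint32_t makeArgTypes() {
    uint32_t argtypes = ARG_VOID;        // return type occupies the rightmost 3 bits
    argtypes |= ARG_DOUBLE << 3;         // otherParameter
    argtypes |= ARG_INT    << 6;         // someParameter
    return argtypes;                     // yields the binary layout 010 | 001 | 000
}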

The AbiKind represents how the C++ method is called.

enum AbiKind {
    ABI_FASTCALL,
    ABI_THISCALL,
    ABI_STDCALL,
    ABI_CDECL
};

 

FASTCALL passes a few parameters in registers instead of pushing them onto the stack. THISCALL means that the C++ method is a member of a class and requires an object instance to be passed in. I haven't seen STDCALL used at all. CDECL stands for "C declaration", where all parameters are pushed onto the stack prior to calling the method.
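On 32-bit Windows these line up with the familiar MSVC calling-convention keywords; a hedged illustration:

// Hedged mapping of the AbiKinds onto MSVC x86 calling conventions.
int  __fastcall fastFn(int a, int b);   // ABI_FASTCALL: first arguments passed in ECX/EDX
int  __stdcall  stdFn(int a, int b);    // ABI_STDCALL: callee cleans up the stack
int  __cdecl    cdeclFn(int a, int b);  // ABI_CDECL: caller pushes and pops the arguments
struct SomeClass {
    int memberFn(int a);                // ABI_THISCALL: "this" is passed in ECX
};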

All these CallInfo structures are manually created and maintained in Tamarin in the file core/jit-calls.h. Consider the C++ method declaration:

static void AvmCore::atomWriteBarrier(MMgc::GC *gc, const void *container,
                                      Atom *address, Atom atomNew);

 

The macro declaration to create the CallInfo structure for atomWriteBarrier is:

FUNCTION(FUNCADDR(AvmCore::atomWriteBarrier), SIG4(V,P,P,P,A), atomWriteBarrier)

 

Each of these funky words is another macro:

  • FUNCTION - AvmCore::atomWriteBarrier uses the ABI_CDECL calling convention.
  • FUNCADDR - Takes the address of AvmCore::atomWriteBarrier, which is a static method.
  • SIG4(V,P,P,P,A) - Represents the CallInfo::_argtypes field. V means the method returns (V)oid. The next three parameters are (P)ointer types. The last parameter is an (A)tom.
  • atomWriteBarrier - The name of the method, which is used for debugging purposes.
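Roughly speaking (this is a hedged sketch, not the real expansion in core/jit-calls.h), the macro line above boils down to an initialized CallInfo like this:

// Hedged sketch of what the FUNCTION/SIG4 line expands to, using the CallInfo
// struct shown earlier; the real macros pack things slightly differently.
const CallInfo ci_atomWriteBarrier = {
    (uintptr_t)&AvmCore::atomWriteBarrier,   // FUNCADDR: address of the static method
    SIG4(V,P,P,P,A),                         // _argtypes: void return, three pointers, one Atom
    ABI_CDECL                                // FUNCTION implies the CDECL calling convention
    // _name ("atomWriteBarrier") is only present in verbose builds
};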

The current list of C++ methods is all manually created and maintained, which is a really annoying hassle. With the LLVM bitcode to LIR translator, the number of CallInfos grows dramatically because C++ methods usually call lots of other C++ methods. Consider the atomWriteBarrier code:

void AvmCore::atomWriteBarrier(MMgc::GC *gc, const void *container, Atom *address, Atom atomNew)
{ 
    decr_atom(*address);
    incr_atom(gc, container, atomNew);
    *address = atomNew;
}

 

There are no CallInfo structures for decr_atom() and incr_atom(), but they have to be created: if a C++ method targeted for inlining calls another C++ method that doesn't already have a CallInfo structure, a new CallInfo for the callee must be created.

While the list could be created by hand, it would be horrendously annoying to maintain. Instead, we automatically generate a new file called jit-calls-generated.h that contains all of the newly generated CallInfo structures.

Automatically creating a CallInfo structure:

Consider the C++ method declaration for decr_atom:

static void decr_atom(Atom const a);

 

The declaration in LLVM bitcode:

declare void @AvmCore::decr_atom(i32 %a)

 

When translating bitcode to LIR, the bitcode provides the parameter and return types as well as the name of the method. The last missing piece is the AbiKind.

LLVM bitcode has an explicit FASTCALL modifier for methods that use the FASTCALL calling convention. However, bitcode makes no explicit distinction between the CDECL and THISCALL calling conventions; a call site in bitcode only says that a pointer is being passed into a method. The distinction between CDECL and THISCALL is found at the function definition, NOT the declaration.

The LLVM function definition for a THISCALL looks like:

define i32 @Toplevel::add2(%"struct.avmplus::Toplevel"* %this, i32 %leftOperand, i32 %rightOperand); 

 

The first parameter is explicitly named "this". If the first parameter of a function definition is named "this", then the C++ method uses the THISCALL AbiKind. This detection scheme is safe because "this" is a keyword in C++, so you can't name an ordinary parameter "this". The function declaration, on the other hand, would show the name of the instance being passed in rather than "this", which is why the definition, not the declaration, must be inspected.
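A hedged sketch of that detection, written against the LLVM C++ API of that era (the actual translator code may differ):

#include "llvm/Function.h"
#include "llvm/CallingConv.h"

// Decide the AbiKind (from the enum above) for a C++ method found in bitcode.
AbiKind detectAbiKind(const llvm::Function* function) {
    if (function->getCallingConv() == llvm::CallingConv::X86_FastCall)
        return ABI_FASTCALL;                             // explicit fastcall modifier in the bitcode
    if (!function->arg_empty() && function->arg_begin()->getName() == "this")
        return ABI_THISCALL;                             // first parameter literally named "this"
    return ABI_CDECL;                                    // everything else defaults to cdecl
}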

What can't be called:

While the majority of C++ methods can be called from LIR, there are some limitations. A C++ object's constructor cannot be called directly, because taking the address of a constructor is against the C++ specification (C++ Standard, section 12.1.12). At the moment, we create a C++ wrapper function that calls the constructor, and LIR calls the wrapper. Virtual methods also cannot be called, because resolving a polymorphic call requires a runtime lookup through the virtual method table.
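A hedged sketch of the constructor-wrapper workaround; Foo is a stand-in class, not an actual Tamarin type:

// LIR cannot take &Foo::Foo (the C++ standard forbids taking a constructor's
// address), so the translator calls this ordinary function instead.
class Foo {
public:
    explicit Foo(int initialValue);
};

Foo* constructFoo(int initialValue) {
    return new Foo(initialValue);   // the wrapper is an addressable function LIR can call
}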

Example C++ to LIR Translation

There are a lot of translation steps to go from C++ to bitcode to LIR. This post hopefully solidifies everything with a concrete example. Consider ActionScript source that adds two variables and assigns the sum to another variable.

var a;
var b;
var sum = a + b; 

 

The LIR that is normally generated is:

left = load left[0]
right = load right[0]
add = icall #add ( left, right )
 
store vars[32] = add

 

First, the two variables left and right, which are a and b in the AS3 source code, are loaded. Next, a call to the VM's C++ add method is generated. The result of add is stored into vars[32], a location in memory that represents the AS3 variable sum.

Instead of calling add, the JIT should inline the method. To do that, the C++ has to be converted to LIR. The VM C++ method that does the actual add has this source:

Atom Toplevel::add(Atom left, Atom right)
{
    BITCODE_INLINEABLE  // Indicate that we want to translate this method to LIR
    if (areNumbers(left, right)) {
        return addNumbers(left, right);
    }
    else {
        return addUnknown(left, right);
    }
}

 

The add method is part of a C++ object named Toplevel. ActionScript values are modeled as Atoms in C++: an Atom is a 32 bit integer with the bottom 3 bits used for type information. The C++ source checks the types of the two values to decide what the "+" operator should actually do.
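As a hedged sketch of that tagging scheme (the mask and tag values below are illustrative, not Tamarin's actual constants), a check like areNumbers boils down to inspecting the low bits:

#include <cstdint>

// Illustrative Atom tagging: a 32 bit value whose bottom 3 bits carry the type.
typedef int32_t Atom;
const Atom kTagMask    = 7;   // bottom 3 bits hold the type tag
const Atom kIntegerTag = 6;   // hypothetical tag meaning "this Atom holds an integer"

bool bothAreIntegers(Atom left, Atom right) {
    return (left & kTagMask) == kIntegerTag
        && (right & kTagMask) == kIntegerTag;
}

Once Tamarin is compiled with LLVM, it produces bitcode that looks like: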

define i32 @add2(%"struct.avmplus::Toplevel"* %this, i32 %left, i32 %right) {
entry:
    call void @enableBitcodeInlining()   ; the macro expansion of BITCODE_INLINEABLE
 
    %0 = call i8 @areNumbers(%"struct.avmplus::Toplevel"* %this, i32 %left, i32 %right) 
    %toBool = icmp eq i8 %0, 0      
 
    ; %toBool is true when the operands are NOT both numbers
    br i1 %toBool, label %addUnknown, label %addNumbers
 
addNumbers:     
    %1 = call i32 @addNumbers(%"struct.avmplus::Toplevel"* %this, i32 %left, i32 %right) 
    ret i32 %1
 
addUnknown:       
    %2 = call i32 @addUnknown(%"struct.avmplus::Toplevel"* %this, i32 %left, i32 %right) 
    ret i32 %2
}

 

The LLVM bitcode is in SSA form and retains all of the type information and control flow. The call to enableBitcodeInlining is the expansion of the C++ macro BITCODE_INLINEABLE; the static translator looks for a call to enableBitcodeInlining as an indicator that the method should be translated to LIR. The LLVM type i32 represents a 32 bit integer, which is what a C++ Atom really is. Finally, here is the resulting LIR once the C++ add method is inlined into the LIR instead of being called:

left = load leftp[0]
right = load rightp[0]
 
inline( left, right )
    retVal = stackAlloc 4
    isNumberAdd = icall #areNumbers (left, right)
 
    eq1 = eq isNumberAdd, 0
    jump true eq1 -> addUnknownLabel
    jump false eq1 -> addNumbersLabel
 
addNumbersLabel:
    addNumbers = icall #addNumbers (left, right)
    store retVal[0] = addNumbers
    jump -> endInline 
 
addUnknownLabel:
    addUnknown = icall #addUnknown (left,right)
    store retVal[0] = addUnknown
    jump -> endInline 
 
endInline:
 
ld5 = load retVal[0]
sti vars[32] = ld5

 

The LIR still needs to load the left and right operands prior to the inlined body. Space is then allocated on the stack for the method's return value. The C++ return statements become LIR stores to that stack location. The C++ call statements remain LIR call statements, unless they too are explicitly inlined. In the original LIR, the value returned from the call to add was stored into vars[32]. Now, the return value is loaded from retVal and then stored into vars[32], completing the inlining of the C++ add.

Although not shown in this example, the translator takes the same approach for Phi functions as it does for return statements. LIR doesn't have a Phi opcode. Instead, stores are pushed up into the basic blocks whose values flow into the Phi function, and the Phi itself turns into a load from the store location.

Also, the translator always follows LLVM semantics and creates both the true and false branches in LIR. Future work is to optimize away one of the branches and add a LIR_phi instruction.

The Developer Productivity Case for C++ Translation

Why go through all this trouble to translate C++ into LIR? The "easiest" or most direct route is to manually create and inline LIR that represents the functionality we want. Consider adding two values:

if (areNumbers(left, right)) {
    // do integer add
    return sum
}
else {
    // do lots of look ups and checks
    return sum
} 

 

We could create the LIR that represents areNumbers and the x86 integer add and have it up and running tomorrow. The problem is that it becomes a maintenance hassle: Tamarin would accumulate a bunch of manually inlined LIR snippets that are difficult to understand and debug. The following is the simplified equivalent LIR for the code snippet above:

ld1 = load vars[4]
ld2 = load vars[8]
 
areNumbers1 = call areNumbers(ld1, ld2)
eq1 = eq areNumbers1, 0
jump true eq1 -> slowPath
 
   // fall through to fast path
   // do integer add
   store returnValue[0], sum
 
slowPath:
   // more checks
   store returnValue[0], sum
 
ld3 = load returnValue[0]
ret ld3

 

It's a lot nicer to just code the functionality in C++, where it's a lot more maintainable, malleable, and easier to debug. The counter-argument is that you have to develop the translation program and maintain another piece of code. While that's true, overall there is less work and a lot less pain (debugging LIR is a painful process). Developer productivity is actually the main reason Adobe is investing in what I consider an "infrastructure" software update.