Tamarin on LLVM - More Numbers

After much tweaking, profiling, and hacking, we finally have a decent range of benchmarks and performance numbers for running Tamarin with LLVM's JIT. The initial neutral result holds across both the V8 and SunSpider benchmark suites, and across fully typed, partially typed, and untyped ActionScript code.

Methodology

Each benchmark is compared against NanoJIT on fully type-annotated ActionScript code. First, we fully type annotated the ActionScript source and looped each benchmark so that execution takes 30 seconds with NanoJIT. We then modified the partially typed and untyped variants to execute the same number of loop iterations. All evaluations were performed on 32-bit Windows 7, using an Intel Core 2 Duo E8400 3.0 GHz processor and 4 GB of RAM.

We have three test configurations:

  • An unmodified tamarin-redux branch with NanoJIT.
  • A base TESSA configuration which performs no high level optimizations and is a straightforward translation from TESSA IR to LLVM IR.
  • An optimized TESSA configuration which performs primitive method inlining, type inference, and finally dead code elimination. The optimized TESSA IR is then translated to LLVM IR.

Every benchmark graph in the following sections is compared to NanoJIT on typed code. A value of 100% indicates that the configuration is equal to NanoJIT on typed code; a value less than 100% means the configuration is slower than NanoJIT, and a value greater than 100% means it is faster. All images are thumbnails and can be clicked for a bigger version.

Full disclosure: v8/earley-boyer and the SunSpider benchmarks regexp-dna, date-format-xparb, string-base64, string-tagcloud, and s3d-raytrace are not represented in the following charts. earley-boyer, regexp-dna, and string-base64 hit known verifier bugs, which have since been fixed in an updated tamarin-redux branch. date-format-xparb and string-tagcloud use eval, which is not supported in ActionScript. s3d-raytrace hits a known JIT bug in the Tamarin-LLVM branch.

Fully Typed Code

When we look at the geometric mean across all test cases, performance barely moves. Individual test cases, however, swing widely in both directions. Overall, LLVM's JIT is roughly equal to NanoJIT. Let's first look at the big wins.

SunSpider bitwise-and is 2.6X faster with LLVM than with NanoJIT because of register allocation. The test case is a simple loop that performs one bitwise operation on one value. LLVM is able to keep both the loop counter and the bitwise-and value in registers, while NanoJIT must load and store each value on every loop iteration. v8/crypto's performance gains originate from inlining the number-to-integer conversion, which is detailed here. Type inference buys a little more performance on top of that because some temporary values stay numbers rather than being downcast to integers.

The biggest loss comes in the SunSpider controlflow-recursive benchmark, which computes Fibonacci numbers. At each call to fib(), the property definition for fib() has to be resolved and then called. The lookup is a PURE method, meaning it has no side effects and is eligible for common subexpression elimination. NanoJIT eliminates the redundant lookups, but LLVM's current common subexpression eliminator cannot, so LLVM executes twice as many lookups as NanoJIT, resulting in the performance loss.

The other big hit is v8/splay, where type inference actually loses us performance. This stems from a fair number of poor decisions by our type inference algorithm, which create more expensive type conversions than necessary. For example, two variables are declared as integers, but we infer one to be a signed integer and the other an unsigned integer. To actually compare the two, we have to convert both to floating-point numbers and perform a floating-point compare.

Partially Typed Code

Partially typed code has all global variables and method signatures fully type annotated. All local variables in a method remain untyped. Here are the results:

The most striking result is that you instantly lose 50% of your performance just from the lack of type information. Thankfully, some basic type inference is able to recover ~20% of the performance loss. Otherwise, the same story arises: NanoJIT holds its own against LLVM's JIT, as both have roughly equal performance. We also start to see the payoff of doing high-level optimizations such as type inference.

The big wins again come in the bitops benchmarks, for the same reason LLVM won on typed code: LLVM has a better register allocator and can keep values in registers rather than loading and storing them on each loop iteration. TESSA with type inference wins big on the sunspider/s3d and sunspider/math benchmarks for two key reasons. First, we are able to discern that some variables are arrays. Second, each array variable is indexed by an integer rather than an any value. We can then compile array-access code rather than generic get/set property code, enabling the performance gain. We also win quite a bit on v8/deltablue because we inline deltablue's many small methods and infer types through the inlined methods.

However, we suffer big performance losses in cases such as access-nbody, where the benchmark is dominated by property-access code. Once we cannot statically bind any property names, property lookup and resolution become the limiting factor. Finally, the biggest performance loss due to TESSA optimizations occurs in math-cordic, for an unfortunate reason. We are able to determine that one variable is a floating-point number, while another variable remains the any type. To compare the two values, we have to convert the number to the any type on each loop iteration, and that conversion requires the GC to allocate 8 bytes to hold the double. So we pay both for the conversion itself and for the extra GC work.

Finally, some benchmarks, such as the string and regexp benchmarks, don't suffer at all from the lack of type information. These benchmarks do not stress JIT-compiled code; they only stress VM C++ library code.

Untyped Code

Untyped code is essentially JavaScript code with all variables untyped. And the results are:

Overall, you lose another 5-10% going from partially typed to untyped code, and our high-level optimizations aren't able to recover as much as we hoped. The general trend, however, is the same: LLVM's JIT and NanoJIT have essentially the same performance without high-level optimizations. The wins and losses occur on the same benchmarks for the same reasons; all the numbers are simply lower.

LLVM Optimization Levels

When I first saw the results, I thought I must be doing something incredibly wrong. I read the LLVM docs and Google-fu'd my way around to see what knobs LLVM had that could change performance. While most GCC flags work with LLVM, the most important knob was the optimization level. LLVM has four optimization levels that generally correspond to GCC's -O flags: NONE (-O0), LESS (-O1), DEFAULT (-O2), and AGGRESSIVE (-O3). Here are the results of tweaking LLVM's optimization level on the V8 suite:

All results are against the baseline NONE level. We see that LLVM's optimizations improve performance quite a bit, especially on the crypto benchmark. Unfortunately, beyond the LESS level we see no further performance gains. Thus, all benchmarks shown previously used the LESS optimization level.
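For reference, the level is set when the execution engine is built. A minimal sketch, assuming the legacy JIT API of this LLVM era (exact header paths and calls vary by version):

#include "llvm/Module.h"
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/JIT.h"          // links in the JIT
#include "llvm/Target/TargetMachine.h"         // llvm::CodeGenOpt

llvm::ExecutionEngine* createJit(llvm::Module* module) {
    llvm::EngineBuilder builder(module);
    builder.setEngineKind(llvm::EngineKind::JIT);
    builder.setOptLevel(llvm::CodeGenOpt::Less);   // LESS (-O1): best tradeoff we found
    return builder.create();
}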

What next?

Overall, we see that LLVM's JIT isn't buying us much performance in the context of the Tamarin VM. Our experience leads us to the same conclusion Reid Kleckner reached in his reflections on unladen-swallow: you really need a JIT that can take advantage of high-level optimizations. We'll probably leave LLVM's JIT, convert TESSA IR into NanoJIT LIR, and improve NanoJIT as much as we can.

Calling C++ in LIR

JIT-compiled code needs to call C++ methods to do the heavy lifting. Metadata about each C++ method, such as its parameters and return type, is needed to create LIR that can call into it. Tamarin does this through the CallInfo data structure:

struct CallInfo
{
    uintptr_t   _address;        // Address of the C++ method
    uint32_t    _argtypes:27;    // 9 3-bit fields encoding the return and arg types
    AbiKind     _abi:3;          // Calling convention

    verbose_only ( const char* _name; )  // Method name, kept for debug builds
};

The _address field is the address of the C++ method.

The _argtypes field is a bit encoding of the number of parameters, the types of each parameter, and the return type. For example, consider a C++ method that has the declaration:

void someFunction(int someParameter, double otherParameter);


Void types are represented by decimal 0 (binary 000), integer types by decimal 2 (binary 010), and double types by decimal 1 (binary 001). For the declaration above, _argtypes in binary would look like:

        010 | 001 | 000

The parameters are laid out in reverse order, with the return type in the rightmost 3 bits. Since there are only 27 bits, LIR can only call C++ methods that have at most 8 parameters.
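To make the encoding concrete, here is a minimal sketch (a hypothetical helper, not Tamarin's actual code) that packs a signature into the 27-bit field:

#include <stdint.h>

// Hypothetical helper: pack a return type and parameter types into the
// 27-bit _argtypes encoding described above.
enum ArgType { ARGTYPE_V = 0, ARGTYPE_D = 1, ARGTYPE_I = 2 };  // void, double, int

uint32_t packArgTypes(ArgType returnType, const ArgType* params, int count) {
    uint32_t encoded = returnType;             // return type: rightmost 3 bits
    for (int i = 0; i < count; i++)            // first declared parameter lands leftmost
        encoded |= (uint32_t)params[i] << (3 * (count - i));
    return encoded;
}

// For someFunction(int, double):
//   ArgType params[] = { ARGTYPE_I, ARGTYPE_D };
//   packArgTypes(ARGTYPE_V, params, 2)  ==  binary 010 001 000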

The AbiKind represents how the C++ method is called.

enum AbiKind {
    ABI_FASTCALL,
    ABI_THISCALL,
    ABI_STDCALL,
    ABI_CDECL
};


FASTCALL passes the first few parameters in registers instead of pushing them onto the stack. THISCALL means the C++ method is a member of a class and requires an object instance to be passed in. I haven't seen STDCALL used at all. CDECL stands for C declaration: all parameters are pushed onto the stack prior to calling the method.

All these CallInfo structures are manually created and maintained in Tamarin in the file core/jit-calls.h. Consider the C++ method declaration:

static void AvmCore::atomWriteBarrier(MMgc::GC *gc, const void *container,
                                      Atom *address, Atom atomNew);


The macro declaration to create the CallInfo structure for atomWriteBarrier is:

FUNCTION(FUNCADDR(AvmCore::atomWriteBarrier), SIG4(V,P,P,P,A), atomWriteBarrier)


Each of these funky words is another macro:

  • FUNCTION - AvmCore::atomWriteBarrier uses the ABI_CDECL calling convention.
  • FUNCADDR - Takes the address of a static method.
  • SIG4(V,P,P,P,A) - Represents the CallInfo::_argtypes field. V means the method returns void. The next three parameters are (P)ointer types. The last parameter is an (A)tom type.
  • atomWriteBarrier - The name of the method, which is used for debugging purposes.

The current list of C++ methods is all manually created and maintained, which is a really annoying hassle. With the LLVM bitcode to LIR translator, the number of CallInfos grows dramatically, because C++ methods usually call lots of other C++ methods. Consider the atomWriteBarrier code:

void AvmCore::atomWriteBarrier(MMgc::GC *gc, const void *container, Atom *address, Atom atomNew)
{ 
    decr_atom(*address);
    incr_atom(gc, container, atomNew);
    *address = atomNew;
}


There are no CallInfo structures for decr_atom() and incr_atom(), but they have to be created: if a C++ method targeted for inlining calls a C++ method that doesn't already have a CallInfo structure, a new CallInfo for that callee must be created.

While the list could be created manually, doing so would be horrendously tedious. Instead, we automatically generate a new file called jit-calls-generated.h that contains all the new CallInfo structures.
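For example, the generated entries for decr_atom and incr_atom might look like this (an illustrative sketch; the SIG1/SIG3 macro names and exact signatures are assumptions extrapolated from the SIG4 example above):

// Hypothetical excerpt of jit-calls-generated.h: one entry per callee the
// translator discovers.
FUNCTION(FUNCADDR(AvmCore::decr_atom), SIG1(V,A), decr_atom)
FUNCTION(FUNCADDR(AvmCore::incr_atom), SIG3(V,P,P,A), incr_atom)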

Automatically creating a CallInfo structure:

Consider the C++ method declaration for decr_atom:

static void decr_atom(Atom const a);


The corresponding declaration in LLVM bitcode:

declare void @AvmCore::decr_atom(i32 %a)


When translating bitcode to LIR, the bitcode supplies the parameter and return types, as well as the name of the method. The last missing piece is the AbiKind.

LLVM bitcode has an explicit fastcall modifier for methods that use the FASTCALL calling convention. However, bitcode makes no explicit distinction between the CDECL and THISCALL conventions: a call site in bitcode only says that a pointer is being passed to a method. The distinction between CDECL and THISCALL is found at the function definition, NOT the declaration.

The LLVM function definition for a THISCALL looks like:

define i32 @Toplevel::add2(%"struct.avmplus::Toplevel"* %this, i32 %leftOperand, i32 %rightOperand)


The first parameter is explicitly named "this". If the first parameter of a function definition is named "this", the C++ method uses the THISCALL AbiKind. This detection scheme is safe because "this" is a keyword in C++, so no user value can be named "this". A function declaration, by contrast, would show the name of the instance parameter rather than "this", which is why we must inspect the definition.
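In the translator this boils down to a small check. A minimal sketch, assuming LLVM's C++ API of this era (the helper name is illustrative):

// Sketch: pick an AbiKind for a bitcode function, using the AbiKind enum
// shown earlier. Hypothetical helper; the real translator's code may differ.
#include "llvm/Function.h"
#include "llvm/CallingConv.h"

AbiKind deduceAbiKind(const llvm::Function& fn) {
    if (fn.getCallingConv() == llvm::CallingConv::X86_FastCall)
        return ABI_FASTCALL;                  // explicit fastcall modifier in bitcode
    if (!fn.arg_empty() && fn.arg_begin()->getName() == "this")
        return ABI_THISCALL;                  // definition's first parameter is "this"
    return ABI_CDECL;                         // everything else defaults to cdecl
}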

What can't be called:

While the majority of C++ methods can be called from LIR, there are some limitations. A C++ object's constructor cannot be called, because taking the address of a constructor is against the C++ specification (C++ Standard, section 12.1.12). At the moment, we create a C++ wrapper that calls the constructor; LIR then calls the wrapper. A virtual method also cannot be called directly, as the call is a runtime lookup through the virtual method table.
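For the constructor case, the wrapper is just a plain function whose address LIR can take. A minimal sketch with a hypothetical class Foo (not actual Tamarin code):

class Foo {
public:
    explicit Foo(int x) : value(x) {}
    int value;
};

// &createFoo is legal where &Foo::Foo is not; LIR calls this wrapper,
// and the wrapper runs the constructor.
static Foo* createFoo(int x) {
    return new Foo(x);
}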

Example C++ to LIR Translation

There are a lot of translation steps to go from C++ to bitcode to LIR. This post hopefully solidifies everything with a concrete example. Consider ActionScript source that adds two variables and assigns the sum to a third variable:

var a;
var b;
var sum = a + b; 


The LIR that is normally generated is:

left = load leftp[0]
right = load rightp[0]
add = icall #add ( left, right )
 
store vars[32] = add


First, the two variables left and right, which are a and b in the AS3 source, are loaded. Next, a call to the VM C++ method add is generated. The result of add is stored into vars[32], a location in memory that represents the AS3 variable sum.

Instead of calling add, the JIT should inline the method. To do that, the C++ has to be converted to LIR. The VM C++ method that does the actual add has the source:

Atom Toplevel::add(Atom left, Atom right)
{
    BITCODE_INLINEABLE  // Indicate that we want to translate this method to LIR
    if (areNumbers(left, right)) {
        return addNumbers(left, right);
    }
    else {
        return addUnknown(left, right);
    }
}


The add method is part of a C++ object named Toplevel. ActionScript values are modeled as Atoms in C++. An Atom is a 32-bit integer whose bottom 3 bits hold type information. The C++ source inspects the types of the two values to decide what the "+" operator should do for them.
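As a rough illustration of the tagging scheme (the tag values below are assumptions for this example, not Tamarin's actual constants):

#include <stdint.h>

// Illustrative only: tag values are assumptions, not Tamarin's real layout.
typedef int32_t Atom;                     // 32-bit value; low 3 bits = type tag

enum { kTagMask = 7, kIntegerTag = 6 };   // hypothetical tag constants

static bool isIntegerAtom(Atom a) { return (a & kTagMask) == kIntegerTag; }
static int32_t atomValue(Atom a)  { return a >> 3; }  // strip the tag bits

Once Tamarin is compiled with LLVM, it produces bitcode that looks like: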

define i32 @add2(%"struct.avmplus::Toplevel"* %this, i32 %left, i32 %right) {
entry:
    call void @enableBitcodeInlining()  ; the macro expansion of BITCODE_INLINEABLE

    %0 = call i8 @areNumbers(%"struct.avmplus::Toplevel"* %this, i32 %left, i32 %right)
    %toBool = icmp ne i8 %0, 0

    ; if areNumbers is true go to addNumbers, otherwise addUnknown
    br i1 %toBool, label %addNumbers, label %addUnknown

addNumbers:
    %1 = call i32 @addNumbers(%"struct.avmplus::Toplevel"* %this, i32 %left, i32 %right)
    ret i32 %1

addUnknown:
    %2 = call i32 @addUnknown(%"struct.avmplus::Toplevel"* %this, i32 %left, i32 %right)
    ret i32 %2
}


The LLVM bitcode is in SSA form and retains all of the type information and control flow. The call to enableBitcodeInlining is the expansion of the C++ macro BITCODE_INLINEABLE; the static translator looks for this call as an indicator that the method should be translated to LIR. The LLVM type i32 represents a 32-bit integer, which is what a C++ Atom really is. Finally, the resulting LIR, once the C++ add method is inlined into the LIR instead of being called, is:

left = load leftp[0]
right = load rightp[0]
 
inline( left, right )
    retVal = stackAlloc 4
    isNumberAdd = icall #areNumbers (left, right)
 
    eq1 = eq isNumberAdd, 0
    jump true eq1 -> addUnknownLabel
    jump false eq1 -> addNumbersLabel
 
addNumbersLabel:
    addNumbers = icall #addNumbers (left, right)
    store retVal[0] = addNumbers
    jump -> endInline 
 
addUnknownLabel:
    addUnknown = icall #addUnknown (left,right)
    store retVal[0] = addUnknown
    jump -> endInline 
 
endInline:
 
ld5 = load retVal[0]
sti vars[32] = ld5


The LIR still loads the left and right operands prior to the inlined body. Space is then allocated on the stack for the method's return value. The C++ return statements become LIR stores to that stack location. The C++ call statements remain LIR calls, unless they too are explicitly inlined. In the original LIR, the value returned from the call to add was stored into vars[32]. Now, the return value is loaded from retVal and then stored into vars[32], completing the inlining of the C++ add.

Although it doesn't appear in this example, the translator handles Phi functions with the same approach used for return statements. LIR has no Phi opcode. Instead, a store is pushed up into each predecessor basic block whose value flows into the Phi, and the Phi itself becomes a load from that store location.
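For example (a hypothetical snippet, not taken from the add example), a bitcode phi such as:

merge:
    %x = phi i32 [ %a, %left ], [ %b, %right ]

becomes a store in each predecessor block and a load at the merge point:

left:
    store phiSlot[0] = a
    jump -> merge

right:
    store phiSlot[0] = b
    jump -> merge

merge:
    x = load phiSlot[0]

Here phiSlot is a stack-allocated location, playing the same role retVal plays in the inlining example above.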

Also, the translator always follows LLVM semantics and creates both the true and false branches in LIR. Future work is to optimize away one of the branches and add a LIR_phi instruction.

The Developer Productivity Case for C++ Translation

Why go through all this trouble to translate C++ into LIR? The "easiest" or most direct route is to manually create and inline LIR that represents the functionality we want. Consider adding two values:

if (areNumbers(left, right)) {
    // do integer add
    return sum
}
else {
    // do lots of look ups and checks
    return sum
} 


We could create the LIR that represents areNumbers and the x86 integer add, and have it up and running tomorrow. The problem is that it becomes a maintenance hassle: Tamarin would accumulate a bunch of manually inlined LIR snippets that are difficult to understand and debug. The following is the simplified equivalent LIR for the code snippet above:

ld1 = load vars[4]
ld2 = load vars[8]

areNumbers1 = call areNumbers(ld1, ld2)
eq1 = eq areNumbers1, 0
jump true eq1 -> slowPath

   // fall through to fast path
   // do integer add
   store returnValue[0] = sum

slowPath:
   // more checks
   store returnValue[0] = sum

ld3 = load returnValue[0]
ret ld3


It's a lot nicer to just code the functionality in C++, where it's a lot more maintainable, malleable, and easier to debug. The counter-argument is that you have to develop the translation program and maintain another piece of code. While that's true, overall there is less work and a lot less pain. (Debugging LIR is a painful process.) Developer productivity is actually the main reason Adobe is investing in what I consider an "infrastructure" software update.

Adding More Cowbell to Tamarin

Many dynamic language virtual machines (VMs) written in C++ work by compiling code that passes parameters into VM C++ methods, which do most of the heavy lifting. For example, adding two values becomes two x86 pushes onto the stack followed by an x86 call into a C++ add method. When I last looked at Apple's SquirrelFish, that's all it did. The generated x86 code in Tamarin more or less does the same thing, with a few optimizations. Such a simple compiler works surprisingly well, but it only takes you so far. Sooner or later, you'll want to generate better, faster machine code.

An effective way of generating faster code is method inlining: instead of generating calls, the JIT should inline the methods that do the real work. The problem for Tamarin's JIT is that it doesn't know how to inline C++ methods; NanoJIT only compiles LIR, not C++. Enter the C++ to LIR translator.

We do this by statically compiling Tamarin with LLVM. LLVM is a pretty awesome GCC replacement that is nicely designed, object oriented, and generally a pleasure to work with. Its output is like an object file, but contains LLVM bitcode instead of machine code. Our new tool then translates the LLVM bitcode into LIR. At runtime, Tamarin compiles that LIR, which really represents a VM C++ method. Boom: inlinable C++ methods.

We don't have to limit the translator to method inlining. In essence, Tamarin can compile VM C++ methods as if they were ActionScript methods. Application ActionScript, such as a YouTube video player, is translated into LIR prior to execution and then compiled into machine code. We also translate most of Tamarin itself, written in C++, into LIR, which NanoJIT can likewise compile into machine code. NanoJIT gains the power to compile, inline, and optimize VM C++ methods for the performance win.

You may be thinking this sounds an awful lot like a self-interpreting VM, a VM written in the language it executes. For example, a JavaScript VM written in JavaScript would be self-interpreting. Well, you're right: this approach hacks self-interpreting/self-JITting into a VM written in C++.

Consider the three images above to clarify the differences. The "standard" VM approach, and what Tamarin did before, is to JIT code that calls into C++ methods; the host C++ compiler (Visual Studio/GCC) does a lot of the work, and the JIT has no idea what goes on inside those methods. In a self-interpreting VM, the JIT compiles everything, application and VM code alike, and knows everything about itself. Our approach mimics a self-interpreting VM by translating C++ methods into LIR and JITting both application and VM code.

Over the next few weeks, I'll be detailing the implementation/design decisions in Tamarin.

Thanks to Michael Bebenita for proofreading and creating the images. Check out this great SNL skit if you are wondering what More Cowbell is.