
# AOT Compiler kicked off


## Recommended Posts

Hello! Long time no speak.

This might not be a novel idea, but I just realized earlier today how easy it would be to create an Ahead Of Time compiler for AngelScript that simply spits out C/C++ code.

Using a variant of the test I used while working on my old arm JIT compiler a couple of years ago (just two nested for loops iterating for a bit), this is ~11x faster on that test (on OSX x86-64), but real world performance will vary depending on how your scripts actually look and what they actually do.

This is just a few hours old and I don't have a large AngelScript codebase to test it on, so there might be some tweaking needed for "real" projects.

Quoting myself from the github page I set up:

While there already exists an AngelScript JIT (just in time) Compiler for x86 and x86-64, as well as my old and never finished JIT compiler for arm, there is as far as I know no AOT (ahead of time) compiler available.

Let me first define what I mean by JIT and AOT compilers in the context of AngelScript today. The public JIT compilers I know of take the compiled AngelScript bytecode and turn it into machine code for the supported architecture. This machine code is written to memory marked as executable, and then, when the normal AngelScript interpreter comes across a JitEntry bytecode, it jumps to this executable memory instead of continuing its normal interpreter loop.

The AOT compiler implementation provided here will instead take the AngelScript bytecode and turn it into C/C++ code which you can then compile with your normal compiler. There are both advantages and disadvantages to this approach and so this project might not be of interest at all for you depending on what you are using AngelScript for.

Let me start by listing a few of the disadvantages:

• If you use AngelScript mainly as a plugin system where 3rd parties write scripts to extend the functionality of your application, this AOT is going to be of little use to you. If however your AngelScript code is a significant part of your software and it's reaching a point where it's starting to need fewer and fewer changes then the AOT might be perfect for you.

• You need to run your program first to generate the AOT code, and then compile and link this newly generated code and run again for the AOT to have any effect. This will limit iteration speed as the program will have to be stopped, compiled, re-linked and started again. In the future I might implement (or accept a pull request for) a checksum check to see if the compiled script function differs from what the script is actually doing and, if it does, fall back to the regular interpreter for that function.

Now to some of the advantages of this approach over a JIT:

• The AOT Compiler is minimal in code, and the parts that generate the AOT code are themselves generated straight from AngelScript's code base. This means that:

1. It will always be in sync with the AngelScript interpreter since the same code is used
2. If new bytecodes are added, changed or removed, it'll automatically adapt, with you only needing to re-run the script that generates the bytecode-generating code.
3. It will always do exactly what AngelScript does with the bytecode. Again, as it's the same code, the instances where you run into bugs where the compiled code behaves differently from the interpreter should be slim to none.

• As it's just using basic C/C++ code to generate more C/C++ code compilable with your regular toolchain, the AOT Compiler is machine architecture agnostic whereas JIT compilers only exist on very specific architectures. If you can compile and run AngelScript, you can compile and run the AOT compiler. The AngelScript features page mentions Windows, Linux, MacOS X, XBox, XBox 360, PS2, PSP, PS3, Dreamcast, Nintendo DS, Windows Mobile, iPhone, BSD, and Android, with x86, amd64, sh4, mips, ppc, ppc64, arm, so that's what I'll do too ;)

• No need for allocating executable memory. Not all platforms allow you to allocate executable memory (to prevent malicious software or piracy), so using a JIT compiler would be impossible on those platforms, but this AOT implementation works just fine.


##### Share on other sites
I tried to test it, but it seems there is no love for VC++,
and I am stuck with VC++ on Windows.

Extended assembly is not available in VC++.
snprintf doesn't exist; it is called _snprintf in VC++.
sys/time.h and gettimeofday are not available in VC++.

AOT would be nice when the time comes for distribution,
but horrible to develop with.

##### Share on other sites
Yeah, it's only a couple of hours old, so there are still wrinkles to iron out.

I intend to implement a bytecode checksum check and make it possible to attach a JIT to the AOT compiler, in other words if a script has been modified it'll fall back to the interpreter or JIT for the modified script functions. That should make the development phase less awkward.

##### Share on other sites
Welcome back quarnster

I really enjoy seeing these sister projects come and evolve. I'll keep my eye on this one.

##### Share on other sites
Which opcodes modify the bytecode stream from one run to another? At the moment I'm assuming all asBC_PTRARG's need to be re-read from the bytecode on the next load as opposed to for example asBC_FLOATARG which will be the same from one run to another and can thus be "inlined" at AOT code generation time.

For anyone interested, I got the AOT working with the test_performance suite. I haven't verified whether the values calculated by the AOT are actually correct, so some of these numbers might be too optimistic (but the fact that it's not crashing is a very good sign). Here's the data I captured on my machine, where the old column is with the AOT disabled and the new column is with it enabled.

| Testname     | old   | new   | improvement |
|--------------|-------|-------|-------------|
| Basic        | 1.179 | 0.476 | 2.4769      |
| Basic2       | 0.700 | 0.039 | 17.9487     |
| Call         | 2.378 | 2.364 | 1.0059      |
| Call2        | 2.880 | 2.869 | 1.0038      |
| Fib          | 1.851 | 0.750 | 2.4680      |
| Int          | 0.310 | 0.138 | 2.2464      |
| Intf         | 1.672 | 1.332 | 1.2553      |
| Mthd         | 1.609 | 1.250 | 1.2872      |
| String       | 2.018 | 1.725 | 1.1699      |
| String2      | 1.033 | 0.818 | 1.2628      |
| StringPooled | 1.971 | 1.672 | 1.1788      |
| ThisProp     | 1.238 | 0.093 | 13.3118     |
| Vector3      | 0.584 | 0.431 | 1.3550      |
| Assign.1     | 0.612 | 0.039 | 15.6923     |
| Assign.2     | 0.874 | 0.059 | 14.8136     |
| Assign.3     | 0.716 | 0.031 | 23.0968     |
| Assign.4     | 1.134 | 0.067 | 16.9254    |
| Assign.5     | 1.135 | 0.086 | 13.1977     |

##### Share on other sites
The changes I made to run this with the test suite are checked into the playground branch in my git AngelScript repository (the link compares it to the patchesforandreas branch, which contains other unrelated patches):
https://github.com/quarnster/angelscript/compare/patchesforandreas...playground

Basically, to repeat the experiment you'll have to remove the _generated.cpp files from the CMakeLists.txt and make sure the define in utils.h is 1 to get the tests to spit out the code for AOT compilation. Then you re-add the _generated.cpp files to the CMakeLists.txt and set the define in utils.h to 0 to make use of these compiled functions rather than generating the code again.

##### Share on other sites
How can I distinguish a script class's member method from a pure interface member?

Basically, the AOT is able to call into another AOTed function without falling back to the interpreter. This works fine for BC_CALL, but for CALLINTF how do I know whether the function is a pure interface function (and thus will not have a specific corresponding jit function) or if it's a regular class member?

Basically:

```cpp
// We don't want the AOT to generate this code from test_intf.cpp as it's a
// pure interface and _____TestIntf_intf_func0_ will not be generated. What can
// I use in __func to determine that it's a pure interface function and thus
// not generate this if-scope at all for this function?
if (__func->jitFunction == _____TestIntf_intf_func0_) {
    _____TestIntf_intf_func0_(registers, 0);
}
```

##### Share on other sites
asBC_CALLINTF is only used for calling interface methods and virtual class methods. In both of these cases a lookup is done to find the real function that should be called.

Take a look at asCContext::CallInterfaceMethod() for the details.

##### Share on other sites

> Which opcodes modify the bytecode stream from one run to another? At the moment I'm assuming all asBC_PTRARG's need to be re-read from the bytecode on the next load as opposed to for example asBC_FLOATARG which will be the same from one run to another and can thus be "inlined" at AOT code generation time.

The void asCReader::TranslateFunction(asCScriptFunction *func) method should be able to clarify these details for you. It shows all the adjustments that are done to the bytecode after reading a pre-compiled script.

Pointer args obviously need to be evaluated, but so do function ids, type ids, and string ids. The ids depend on the order of compilation, so if you set the restriction that the code must always be loaded in the correct order, then the ids should always remain the same with each run.

Regards,
Andreas

##### Share on other sites

> asBC_CALLINTF is only used for calling interface methods and virtual class methods. In both of these cases a lookup is done to find the real function that should be called.
>
> Take a look at asCContext::CallInterfaceMethod() for the details.

So are regular script class methods implicitly virtual? In my test code I have:

```cpp
"class TestClass                            \n"
"{                                          \n"
"  void test() {print(\"hello world!\\n\");}\n"
"};                                         \n"
// ..
"int TestInt(int a, int b, int c)           \n"
"{                                          \n"
"  TestClass t;                             \n"
"  t.test();                                \n"
// ..
```

The t.test() call generates a CALLINTF opcode.

> Pointer args obviously need to be evaluated, but so do function ids, type ids, and string ids. The ids depend on the order of compilation, so if you set the restriction that the code must always be loaded in the correct order, then the ids should always remain the same with each run.

I see. I should probably create a setting in the AOT compiler at some point so that it doesn't assume the ids will be the same.

Cheers