
Performance Regression


theoutfield    291

I have been using AngelScript version 2.19.2 for quite a while.  I am in the process of porting an application to Linux and decided to download the latest library, 2.26.2.  To compare the two versions I used the tutorial sample with the LineCallback not registered, to increase performance (I simply commented out the registration code).  I modified the script.as file in the tutorial to include this loop:

 

    double x = 0.00;
    int64 startTime = GetSystemTime();
    for( int i = 0; i < 10000; i++ )
    {
        x += i * 1.23 + 4.56;
    }
    int64 endTime = GetSystemTime();

    Print("Total time = " + (endTime - startTime)/1000.0 + "\n");

In version 2.19.2 this takes about 1 ms on a 2.8 GHz machine; version 2.26.2 takes about 4 ms.

 

When compiling the tutorial for version 2.19.2 I got the following error; I simply commented out that line for now.

g++ -ggdb -I../../../../angelscript/include -D_LINUX_ -o obj/scriptstring.o -c ../../../../add_on/scriptstring/scriptstring.cpp
../../../../add_on/scriptstring/scriptstring.cpp: In function ‘void RegisterScriptString_Native(asIScriptEngine*)’:
../../../../add_on/scriptstring/scriptstring.cpp:646:86: error: invalid static_cast from type ‘<unresolved overloaded function type>’ to type ‘bool (*)(const string&, const string&) {aka bool (*)(const std::basic_string<char>&, const std::basic_string<char>&)}’
make: *** [obj/scriptstring.o] Error 1

I haven't tried any of the other versions yet.  Were there any major changes that would affect the loop performance? 

 

Thanks,

Tony

 

theoutfield    291

Sorry, this was my mistake.  I had an extra zero in the loop count.  I was trying to optimize the build with the g++ flags -O3 and -march=native, and in the course of doing this I forgot that I had updated the loop to 100000 iterations because the time value was always returning 0.

theoutfield    291

When I went back to my actual project, I had the same issue.  It turns out there was a small difference between the two scripts, and I'm not sure why it makes such a big difference in the timing of the loop.

This is the original script.  The summation in the loop uses the variable "i":

    double x = 0.00;
    int64 startTime = GetSystemTime();
    for( int i = 0; i < 10000; i++ )
    {
        x += i * 1.23 + 4.56;
    }
    int64 endTime = GetSystemTime();

    Print("Total time = " + (endTime - startTime)/1000.0 + "\n");

 

This is the typo, which has bad performance on every version I tried:

 

    double x = 0.00;
    int64 startTime = GetSystemTime();
    for( int i = 0; i < 10000; i++ )
    {
        x += x * 1.23 + 4.56;
    }
    int64 endTime = GetSystemTime();

    Print("Total time = " + (endTime - startTime)/1000.0 + "\n");

 

I have been using this script as a test case for my application whenever I changed AngelScript versions in the past.  When I changed platforms to Linux I typed the script in by hand, which is what introduced the difference.  When I noticed the timing difference I copied the script over from my other computer, which is when I spotted the typo.  Even though this is not a useful script, I would be interested to know why there is such a large difference in how long it takes to run.
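The numeric behaviour of the two loops is very different, which may be relevant here.  A minimal sketch in Python (standing in for the AngelScript, since the floating-point arithmetic is the same): the typo version feeds x back into itself, multiplying the running total by roughly 2.23 each iteration, so it overflows to floating-point infinity within about a thousand iterations, while the original version only grows linearly.

```python
# Compare the two loop bodies numerically (Python stands in for the
# AngelScript here; the floating-point arithmetic is the same).
import math

# Original version: accumulate using the loop index i -- grows linearly.
x1 = 0.0
for i in range(10000):
    x1 += i * 1.23 + 4.56

# Typo version: x feeds back into itself, so each iteration roughly
# multiplies x by 2.23 -- it overflows to +inf within ~1000 iterations.
x2 = 0.0
overflow_at = None
for i in range(10000):
    x2 += x2 * 1.23 + 4.56
    if overflow_at is None and math.isinf(x2):
        overflow_at = i

print(x1)                            # ~61539450 for 10000 iterations
print(math.isinf(x2), overflow_at)   # True, well before iteration 1000
```

Whether overflowed values actually cost more per operation depends on the CPU and the VM, but it shows the two loops are not doing comparable arithmetic.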

theoutfield    291

For anyone interested, I tried the same basic loop with 100000 iterations in Python (2.7.3) and AngelScript (2.26.3) on the same 2.8 GHz Linux machine.  The results are approximately: AngelScript 4 ms, Python 37 ms.  The code I used for Python is shown below.  Note: I'm not a Python programmer; I just thought it would be an interesting comparison.  Replacing num with x in the Python loop increased the time to 73 ms.

 

#!/usr/bin/python

import time

starttime = time.time()
x = 0.0
for num in range(0, 100000):
    x += num * 1.23 + 4.56
print 'Total time =', time.time() - starttime, 'seconds'
print x, 'is the answer'

ThyReaper    488

Since you are using Python 2.7.3 for that performance test, I suggest changing range to xrange: xrange generates values lazily instead of building the whole list up front, so especially for simple loop bodies it should be faster.
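For reference, here is the same loop using a lazy range, with a small shim (an assumption on my part, so the one file also runs under Python 3, where range is already lazy):

```python
# Same loop with a lazy range: in Python 2, range() builds a full list
# while xrange() generates values on demand; in Python 3, range() is
# already lazy, so alias it for compatibility.
try:
    xrange
except NameError:       # Python 3
    xrange = range

x = 0.0
for num in xrange(100000):
    x += num * 1.23 + 4.56
print(x)   # ~6150394500 for 100000 iterations
```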

j-locke    945

I threw the script together in Python 3 (and saw very similar times to your Python tests).  The answer I got using num in the calculation was 6150394500.000001; the answer I got using x in the calculation was infinity.

 

I speculate that the slowdown comes either from the much larger values, or from multiplying by a double in the expression vs an integer.  Both of those are pretty much shooting from the hip, though; I've done no testing to confirm or dispute them.

 

While I had the loop put together I changed the iteration count a few times.  Python seemed linear in the time it took to finish additional runs through the loop: about 3.9 ms for 10,000 iterations, about 39 ms for 100,000, about 395 ms for 1,000,000, and about 3.994 s for 10,000,000.  I stopped that little experiment after about 39.59 s for 100 million iterations.

KuroSei    580
If I am not totally mistaken you could calculate that without the loop entirely: 3 * (n+1) * (41 * n + 304), with the result divided by 200, where n is the upper limit of the loop.

Plus I'm not sure whether you even need those values or simply used the equation for testing. :X I'm quite tired, so sorry if I'm off topic.



I did some testing: I was mistaken, you need to use n-1.  For n = 10 the loop gives me 100.94999... (repeating 9), and with my equation for n = 9 I get 100.95.

With ( 3*n*(41*n+263) ) / 200 I get 100.95 for n = 10.

It also checks out for other values, so I guess it's fine now. X) Edited by KuroSei
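The closed form can be checked against the loop directly.  A quick Python sketch (n here is the iteration count, i.e. the sum of i*1.23 + 4.56 for i from 0 to n-1):

```python
# Verify the closed form 3*n*(41*n + 263) / 200 against the loop sum.
# Algebra: sum_{i=0}^{n-1} (1.23*i + 4.56) = 1.23*n*(n-1)/2 + 4.56*n
#        = (123*n*n + 789*n) / 200 = 3*n*(41*n + 263) / 200.
def loop_sum(n):
    x = 0.0
    for i in range(n):
        x += i * 1.23 + 4.56
    return x

def closed_form(n):
    return 3 * n * (41 * n + 263) / 200.0

for n in (10, 100, 10000):
    assert abs(loop_sum(n) - closed_form(n)) < 1e-6 * closed_form(n)

print(closed_form(10))   # 100.95
```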

WitchLord    4677

The loop that uses i in the inner expression should take slightly longer (perhaps 25-30%) than the one that uses x, because the integer value needs to be converted to a double before it can be multiplied by 1.23.  This conversion adds a couple of bytecode instructions that need to be executed on each iteration of the loop.

 

When compiling the AngelScript library in debug mode, i.e. without optimizations, this relationship holds true.  Interestingly enough, though, when compiling the library in release mode the loop that should take longer actually becomes slightly faster on my machine.

 

Both loops run approximately 10 times as fast in release mode, but the one that uses i is now ~20% faster than the one that uses x even though it still has more bytecode instructions to execute.

 

My guess is that this is because the branch prediction that the CPU performs happens to work better in the first case.

 

Compiler optimizations are very unpredictable.  While it is almost always true that they improve performance over non-optimized code, one can never predict exactly by how much, as it depends on so many different variables (branch prediction, data and instruction cache sizes, etc.).  Different levels of optimization can give drastically different results; it is even quite possible that g++ with -O2 gives better results than with -O3.
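As an illustration of "measure, don't guess" (a Python sketch, since that's the other interpreter in this thread; the absolute numbers are machine-dependent, only the relative comparison means anything):

```python
# Measure the two loop variants rather than predicting which is faster;
# timings vary by machine, compiler, and cache behaviour.
import timeit

def with_index(n=100000):
    x = 0.0
    for i in range(n):
        x += i * 1.23 + 4.56   # index version: converts int to float each pass
    return x

def with_feedback(n=100000):
    x = 0.0
    for i in range(n):
        x += x * 1.23 + 4.56   # feedback version: overflows to +inf quickly
    return x

t1 = timeit.timeit(with_index, number=10)
t2 = timeit.timeit(with_feedback, number=10)
print("index: %.4fs  feedback: %.4fs" % (t1, t2))
```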

theoutfield    291

Thanks for the response, Andreas.  It makes good sense now that you've explained it.  I'm pretty impressed that AngelScript is about 10x faster than Python on this simple test case.  I've finished the port of my control app to Linux now; I am using the PREEMPT_RT patch and getting some pretty impressive results.

