loboWu

Terrible performance deficiency on asCRestore


Hi,
I have been using 2.21.0 for a long while.
When I started to upgrade to 2.24.0a, or the latest SVN,
I found a performance issue.

Let's look at the code.

In 2.21.0:

[source lang="cpp"]
asCScriptFunction *asCRestore::ReadFunction(bool addToModule, bool addToEngine)
{
...........
if( func->funcType == asFUNC_SCRIPT )
{
engine->gc.AddScriptObjectToGC(func, &engine->functionBehaviours);

count = ReadEncodedUInt();
func->byteCode.Allocate(count, 0);
ReadByteCode(func->byteCode.AddressOf(), count);
...............
}
}

[/source]

It allocates the buffer just once for the whole function's bytecode.

But in 2.24.0a and later:
[source lang="cpp"]
void asCReader::ReadByteCode(asCScriptFunction *func)
{
// Read number of instructions
asUINT numInstructions = ReadEncodedUInt();

// Reserve some space for the instructions
func->byteCode.Allocate(numInstructions, 0);

asUINT pos = 0;
while( numInstructions ) // in my environment, this can be 400K or more
{
asBYTE b;
ReadData(&b, 1);

// Allocate the space for the instruction
asUINT len = asBCTypeSize[asBCInfo[b].type];
func->byteCode.SetLength(func->byteCode.GetLength() + len); // too many reallocations and memory copies here
asDWORD *bc = func->byteCode.AddressOf() + pos;
pos += len;
................
.................
}[/source]

In my environment, I use AngelScript to implement a huge database processor,
so most of the time there are huge arrays.

For example, there is an array initialization function that produces 400K bytecode instructions.
It takes a very long time to restore the bytecode.
Would you help me solve this problem?
Thanks,

Lobo Wu

A single function with 400K instructions? WOW!! :)

I'll look into what can be done to improve the loading speed for something like this. I would obviously have to improve the prediction of the final bytecode size to avoid lots of resizing.
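For what it's worth, one way to eliminate the resizing entirely is a two-pass read: first walk the opcode stream only to sum up the instruction sizes, then allocate the buffer once and decode into it. A sketch in plain C++, where `kInstrSize` is an invented stand-in for `asBCTypeSize[asBCInfo[b].type]` (not the real AngelScript table):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical size table: dwords of bytecode per opcode, standing in
// for asBCTypeSize[asBCInfo[b].type] in the real reader.
static const uint32_t kInstrSize[4] = { 1, 2, 2, 4 };

// Pass 1: walk the opcode stream and compute the total buffer length.
static uint32_t totalLength(const std::vector<uint8_t>& ops)
{
    uint32_t total = 0;
    for (uint8_t op : ops)
        total += kInstrSize[op & 3];
    return total;
}

// Pass 2: allocate once, then decode into the pre-sized buffer.
static std::vector<uint32_t> decode(const std::vector<uint8_t>& ops)
{
    std::vector<uint32_t> byteCode;
    byteCode.resize(totalLength(ops));  // single allocation, no regrowth
    uint32_t pos = 0;
    for (uint8_t op : ops)
    {
        uint32_t len = kInstrSize[op & 3];
        byteCode[pos] = op;             // first dword holds the opcode
        pos += len;                     // the rest would hold the arguments
    }
    return byteCode;
}
```

Alternatively, the total length could be written into the saved bytecode itself, which would turn pass 1 into a single ReadEncodedUInt.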

Are you seeing any bottlenecks other than the resizing of the bytecode buffer?

In 2.21.0 the only visible bottleneck is the garbage collection.

GarbageCollect(asGC_FULL_CYCLE) takes a very long time (about 15 seconds).

So I use
GarbageCollect(asGC_FULL_CYCLE | asGC_DESTROY_GARBAGE)
and call GarbageCollect(asGC_ONE_STEP) at regular intervals.

Most parts are excellent. I use AngelScript to implement my SDK,
and twenty engineers use it to get a lot of work done.
We have a preprocessor, compiler, editor, events, object manager .....
Everything is just fine and nothing is over-designed.
AngelScript is a good library.

By the way, would it be possible to downsize the bytecode?
I use gzip to compress my bytecode files, but they are still too large (about 1 GB in total).

That the GC takes time to run a full cycle is natural, and I don't really see how to improve it. Still, this is why I implemented it with an incremental algorithm, so you can do exactly what you did: spread out the execution so it won't impact performance.

I'll look into the bytecode size, but I don't have any immediate ideas on how to reduce the size any further.

What is the size of the original script code if your compiled bytecode takes up 1GB (compressed)?

