Using one context per object for increased performance

Hi. I've been using AngelScript for some time now. It's a great library, even though it took me some time to learn how to use it properly. Nice work!

To the point:

I'm developing an audio application. In it, I have a series of processing objects that manipulate realtime audio signals. The objects get called one at a time, typically 44100 times per second. Some of these objects have scripting capabilities using... you guessed it: AngelScript! Every object has just one or two functions that are going to be called by the audio thread. Since performance is critical here, the objects should do just what is needed to get the job done quickly.

With AngelScript, calling a script function typically means getting a context (from a pool), preparing it ( ctx->Prepare(func); ), setting the object pointer ( ctx->SetObject(obj); ), setting the corresponding arguments ( ctx->SetArgDWord(...) ), and then executing the call. And of course, all the work the script function itself has to do. And then unpreparing the context.
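As a sketch, that call sequence looks roughly like this (the argument name and return type here are placeholders for illustration, not from my actual code):

```cpp
// Typical scripted call, assuming `ctx` is an asIScriptContext taken
// from a pool, `func` an asIScriptFunction*, and `obj` the object the
// script method belongs to.
ctx->Prepare(func);                 // bind the function, reset internal state
ctx->SetObject(obj);                // set the 'this' pointer for the method
ctx->SetArgDWord(0, sampleIndex);   // first argument (hypothetical)
int r = ctx->Execute();
if( r == asEXECUTION_FINISHED )
{
    float out = ctx->GetReturnFloat(); // if the function returns a float
}
ctx->Unprepare();                   // release held references before pooling
```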

Now, since each object only has one or two functions that will ever be called (always the same functions per object), wouldn't it be better if each object had its own set of already prepared contexts, ready to be called? That would reduce the process of calling the script function to just setting the arguments and executing the call. And of course the script function itself.
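The idea might be sketched like this (all class and member names here are hypothetical; as the replies below clarify, Prepare() still has to be called per execution, so the saving comes from Prepare() being cheaper when the function is unchanged):

```cpp
// Hypothetical processing object that owns one context, always used
// to call the single script method bound at construction time.
class ScriptedProcessor
{
public:
    ScriptedProcessor(asIScriptEngine *engine, asIScriptFunction *func, void *obj)
        : m_func(func), m_obj(obj)
    {
        m_ctx = engine->CreateContext();
    }
    ~ScriptedProcessor() { m_ctx->Release(); }

    float Process(asDWORD arg)
    {
        // Re-preparing with the same function lets AngelScript take a
        // cheaper internal path than preparing a fresh context.
        m_ctx->Prepare(m_func);
        m_ctx->SetObject(m_obj);
        m_ctx->SetArgDWord(0, arg);
        m_ctx->Execute();
        return m_ctx->GetReturnFloat();
    }

private:
    asIScriptContext  *m_ctx;
    asIScriptFunction *m_func;
    void              *m_obj;
};
```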

I can't help thinking this could increase performance greatly for this kind of app. Are there any cons to this approach (other than the increased memory requirements)? How much of an increase in memory usage should I expect from this?

Thanks in advance for all your answers.

You'll only get a performance boost if the context is used to call the same function over and over again. If the calls on a shared context alternate between different functions that benefit is lost, so keeping a context for each object will give you a boost.

When Prepare() is called with the same function that was just executed by the context, AngelScript can reuse some of that knowledge and skip some validations and allocations of the stack space.

Even when reusing the same context you still need to call Prepare() and SetArg etc for each new execution. AngelScript will not assume the same arguments are to be used.

As for the increase in memory, it really depends on your configurations. By default the context allocates 4KB as the initial stack size, so unless you changed the default settings you will use up that much memory + the size of the actual asCContext object.

If you wish to reduce how much is allocated as the initial stack size you can call engine->SetEngineProperty(asEP_MAX_STACK_SIZE, x); where x is the size in bytes that you need for your scripts. This will also limit how much the stack can grow dynamically.
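For example (the 1024 here is just an illustrative value, not a recommendation):

```cpp
// Cap the context stack at 1 KB (value in bytes). Per the note above,
// this also bounds how much the stack may grow dynamically.
engine->SetEngineProperty(asEP_MAX_STACK_SIZE, 1024);
```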

Don't set it too small though, or your scripts will not be able to finish the execution without a "Stack overflow" script exception from the context.

Also, be sure to check out the article on fine-tuning AngelScript to get the most performance out of it. Perhaps the JIT compiler that Blind Mind Studios implemented may be of interest as well.

Let me know how well it goes. Also, if you can provide some profiling on the script calls I'll be happy to try to optimize it further.

Regards,
Andreas

Ok, so just to know if I understand this correctly: I have to call Prepare() every time, even if I'm calling the exact same function from the exact same context repeatedly, right? Do I have to Unprepare() it before preparing it again?

4 KB doesn't sound too unreasonable... does the stack grow dynamically if the function ever needs more space?

I'll run some tests and do some profiling and I'll be back with some numbers.

(BTW, I have already been through the fine-tuning section, and also tried the JIT. It gives a nice boost, but hey, a few more free CPU cycles won't hurt, right?)

Thanks!

Yes. Prepare() always needs to be called. This is what resets the internal states for the execution.

It is not necessary to call Unprepare() before calling Prepare(). Unprepare() is really only useful when the context is to be stored for later use and it is not known when it will be used again. Unprepare() can then release some objects and memory that will not be used.
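A typical pooling pattern, then, might look like this (the pool container itself is hypothetical):

```cpp
// Returning a context to a pool for indefinite storage. Unprepare()
// releases references held on the context's stack so they are not
// kept alive while the context sits idle.
ctx->Unprepare();
contextPool.push_back(ctx);  // hypothetical std::vector<asIScriptContext*>
```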

Yes, the stack grows dynamically as needed, unless you limit the max size with the previously mentioned engine property.

4 KB will allow the scripts to use 1024 local 32-bit variables, which should be more than enough for most scripts. You'll probably only exceed that with scripts that use recursive function calls.

Regards,
Andreas

I've done some tests and yes, having one context per object in this situation helps a little - I've done some profiling using Very Sleepy and it shows a small performance boost (sampling for about 5 minutes). What tool do you use for profiling? Let me know so I can generate a report using that same tool (unless it's a multi-million-dollar app!)

BTW, the other function that is repeatedly used when calling a script function is SetObject(). Do I have to set the object pointer every time I need to call the script, or is it OK to set it once and leave it unchanged? Of course I can try and see if it fails... but even if it works correctly at first, that doesn't mean it won't crash later.

Let me know - and thanks again for this great lib!

SetObject() must also be called for each script method that will be executed. Prepare() resets all arguments, including the object pointer, to avoid trash in the input.

Unfortunately I do not have any profiling software. If Very Sleepy can generate some human readable files with the findings, then that will be just fine.
