thefallentree

C++: How to implement a cross-platform timer in a single-threaded app


Hi there! I'm currently maintaining a game engine that implements an interpreter VM (https://github.com/fluffos/fluffos if anyone's interested).

The program is written in mostly C-style C++ and is single-threaded, with a main loop that runs events, loads/compiles files, and executes them in an interpreter, which is really just a for(pc; eval(pc); pc++) kind of thing.

Now, there is a feature to limit the time spent running this potentially never-ending loop. To support it, a global volatile flag is checked on every iteration of the loop to see if it should break out, and the flag is set from a signal handler. Two mechanisms have been used (a simplified sketch of the second follows the list):

1. SIGALRM, which has limited timing resolution and is not cross-platform.

2. POSIX timers (timer_create() etc.), which use the monotonic clock provided on Linux and work better than SIGALRM.
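
Roughly, the current Linux mechanism looks like this (a simplified sketch, not the engine's exact code):

#include <signal.h>
#include <string.h>
#include <time.h>

static volatile sig_atomic_t stop_flag = 0;

static void on_alarm(int) { stop_flag = 1; }

// Arm a one-shot timeout of `ms` milliseconds on the monotonic clock.
static void arm_limit_ms(long ms) {
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_alarm;
    sigaction(SIGALRM, &sa, nullptr);

    struct sigevent ev;
    memset(&ev, 0, sizeof ev);
    ev.sigev_notify = SIGEV_SIGNAL;
    ev.sigev_signo = SIGALRM;

    timer_t t;
    timer_create(CLOCK_MONOTONIC, &ev, &t);

    struct itimerspec its;
    memset(&its, 0, sizeof its);
    its.it_value.tv_sec = ms / 1000;
    its.it_value.tv_nsec = (ms % 1000) * 1000000L;
    timer_settime(t, 0, &its, nullptr);  // one-shot; the signal handler sets stop_flag
}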

Now I need to port this to native Windows, which obviously has neither, so I have a few ideas:

1. Write a wrapper around gettimeofday()/clock_gettime()/std::chrono and check the time directly in the loop. The problem is that this wastes a lot of CPU reading and comparing time, which is unacceptable in this performance-critical loop.

2. Add a new thread that basically sleeps and wakes up to set the variable. I haven't tried this yet, but wouldn't this depend entirely on thread-scheduler latency? The resolution I want is at least 1 ms; ~10 ms is probably okay.

3. I've read that under Windows one should use QueryPerformanceCounter() to get precise time, but this has the same problem as (1): I probably shouldn't keep reading the counter in that tight loop.

So, here is my question: I really just want a cross-platform function that basically says "run this handler after 10 ms", ideally also preempting all my other threads. Is it possible? What's the best way to do that?

Thanks

4 hours ago, thefallentree said:

So, here is my question: I really just want a cross-platform function that basically says "run this handler after 10 ms", ideally also preempting all my other threads. Is it possible? What's the best way to do that?

To my knowledge, 16-bit DOS real mode was the last OS that allowed installing such interrupts, and that was NOT multithreaded 😊. Joking aside, you said your system is entirely single-threaded. Why do you want to preempt "all other threads"? Simplify the problem to a minimum, as the solution's cost will grow with the problem's complexity.

Generally speaking, in a multithreaded program you use synchronization objects to achieve what you're describing: for example, a master thread controlling a value that all the other threads watch and sleep on until it is set. To be cross-platform, your best bet is to stick to the standard library and pray your platform compilers support the version you need. For the use case above, std::condition_variable from C++11 seems to be what you need.
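
A minimal sketch of that pattern (the names are mine, assuming C++11):

#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
bool stop = false;  // the value the other threads watch

void Worker() {
    std::unique_lock<std::mutex> lock(m);
    cv.wait(lock, [] { return stop; });  // sleeps until the master sets stop
    // ... shut down ...
}

int main() {
    std::thread worker(Worker);
    std::this_thread::sleep_for(std::chrono::milliseconds(10));  // the master's "timer"
    {
        std::lock_guard<std::mutex> lock(m);
        stop = true;
    }
    cv.notify_all();
    worker.join();
}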

That said, reading the intro of your post, it seems this is all unnecessary complexity, and the problem could be solved in a simpler way if you describe it better or review the design.


The question boils down to this: I have a loop

for(;;) { do_something(); }, which can potentially run forever, and I want to be able to set a time limit on it. Currently we have this:

bool stop = false;
for(;;) { if (stop) break; do_something(); }

and this stop flag is set through a signal callback from timer_create() under Linux. Now I could do this:

auto start_time = now();
for(;;) { if (now() > start_time + 10ms) break; do_something(); }

but this is a very tight loop; each do_something() takes only a few nanoseconds, while now(), using gettimeofday() or similar, is very costly. Not to mention I also need to support Windows.

If I were to have another thread that does

sleep(10); stop=true;

I fear the OS thread scheduler makes no guarantee about how long this thread can be starved, and it may run past the deadline. So I'm back to the drawing board on how to get the kernel to call me back at a more precise time, which is what timer_create() is for, and there is no equivalent under Windows (right?).
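
For concreteness, that watchdog idea would look something like this (a sketch assuming C++11; the overshoot concern above still applies):

#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> stop{false};

static bool do_something() { return true; }  // stand-in for one VM step

int main() {
    std::thread watchdog([] {
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
        stop.store(true, std::memory_order_relaxed);  // may fire late under load
    });
    while (!stop.load(std::memory_order_relaxed) && do_something()) {
    }
    watchdog.join();
}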

Let me know if this is better. Thanks.


For a non-console, single-threaded Windows app, use SetTimer() and handle WM_TIMER in your WndProc. But this already enforces a certain structure on your program, which you haven't said much about yet. So I have to ask: what kind of program is this on Windows? A console app? A window-less app? A service? Otherwise you need to have a message loop under which your app operates, rather than it being an endless processing loop. If you have such a message-pumping loop, where does your processing code execute with regard to the message pump?
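
For illustration, a minimal sketch of that approach, using the thread-timer form of SetTimer() with a TIMERPROC callback instead of a WndProc (note that WM_TIMER resolution is coarse, roughly 10-16 ms at best):

#include <windows.h>

static volatile bool g_stop = false;

void CALLBACK TimerProc(HWND, UINT, UINT_PTR id, DWORD) {
    g_stop = true;
    KillTimer(nullptr, id);  // one-shot: cancel after the first fire
}

int main() {
    SetTimer(nullptr, 0, 10, TimerProc);  // request ~10 ms
    MSG msg;
    while (!g_stop && GetMessage(&msg, nullptr, 0, 0)) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);  // dispatches WM_TIMER to TimerProc
    }
    return 0;
}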

 


Is this for a game engine driven by a scripting language, like Godot or Unity? The onus is usually on the user not to write infinite loops or performance-sapping code. You can easily check for this with something simple:

bool bQuit = false;

while (!bQuit)
{
  // Run a small batch of VM instructions between checks.
  for (int n = 0; n < 10; n++)
  {
    bQuit = RunVMInstruction();
    if (bQuit)
      break;
  }

  // Do the (comparatively expensive) checks once per batch.
  if (TimeTooLong() || TooManyInstructions())
    bQuit = true;
}

If a timer check is expensive, surely you can move it out of an inner loop?

Perhaps there is not enough information to properly understand why this is a problem, as Wessam says.

AFAIK, event-based timers involving the OS are often wildly inaccurate (plus or minus many milliseconds), so you are usually better off doing this kind of thing yourself.

Even if calling your OS get-time function were that expensive (even outside a loop), you could also just roughly calibrate the number of VM instructions that run in a given time, and use the instruction counter; a rough calibration sketch follows.
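
For example (the probe size and the stand-in VM step here are my assumptions):

#include <algorithm>
#include <chrono>

static bool RunVMInstruction() { return true; }  // stand-in for the real VM step

// Time a fixed probe run, then scale to one millisecond's worth of instructions.
static long long InstructionsPerMillisecond() {
    using namespace std::chrono;
    const long long kProbe = 1000000;  // the clock is read only twice
    const auto start = steady_clock::now();
    for (long long i = 0; i < kProbe; ++i)
        RunVMInstruction();
    const long long ns = duration_cast<nanoseconds>(steady_clock::now() - start).count();
    return std::max(kProbe * 1000000 / std::max(ns, 1LL), 1LL);
}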

Edited by lawnjelly: misread question

On 1/8/2019 at 12:54 AM, lawnjelly said:

Even if calling your OS get-time function were that expensive (even outside a loop), you could also just roughly calibrate the number of VM instructions that run in a given time, and use the instruction counter.

Thanks for the reply.

As far as I can remember, Python uses a similar approach: in CPython there is a magic number of instructions executed before a timeout check is performed. Sorry, I currently can't find the link anywhere.

The more I dive into the timekeeping world, the more issues come up. For example:

1. https://blog.packagecloud.io/eng/2017/03/08/system-calls-are-much-slower-on-ec2/

2. https://stackoverflow.com/questions/44020619/queryperformancecounter-on-multi-core-processor-under-windows-10-behaves-erratic

So my next attempted solution is exactly what you are describing: use the most precise timekeeping method available on each platform, and use a magic constant to execute a batch of instructions before checking the timeout, which compensates for the cost of the time-reading functions. (Maybe it should be called a batching factor.)

Hmm, maybe it is also possible to run a micro-benchmark at startup, or even at compile time, to get a more precise number.
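
The batched check itself would then look something like this (a sketch; kBatchFactor and EvalOne() are hypothetical stand-ins):

#include <chrono>

static bool EvalOne() { return true; }  // stand-in: one interpreter step, false when the program halts

void RunWithLimit(std::chrono::milliseconds limit) {
    using clock = std::chrono::steady_clock;
    const int kBatchFactor = 1000;  // from the startup micro-benchmark
    const auto deadline = clock::now() + limit;
    for (;;) {
        for (int i = 0; i < kBatchFactor; ++i)
            if (!EvalOne())
                return;  // program finished normally
        if (clock::now() >= deadline)
            return;  // time limit exceeded
    }
}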

Cheers.


Timing on Windows is something that I’ve invested a bit of (no pun intended) time in, and I can offer the following advice based on my own research and experiments:

Use QueryPerformanceCounter(). timeGetTime()/gettimeofday() etc. will not provide the same resolution when called from a loop. All the major Windows game titles on the market use QPC and have for a long time. As mentioned above, WM_TIMER is not suitable either.

There are processors out in the field right now, manufactured as recently as 2014, that have problems with QPC if the Windows scheduler moves the calling thread to another processor core while it's being called. You can work around this, as many major titles do, by calling QPC from a separate thread that has affinity to one processor core. Calling QPC from multiple threads simultaneously on these processors can also lead to problems.

QPC is a cheap call, and calling it 60, 120, or 240 times a second shouldn't negatively impact your application's performance. How many times per second is your inner loop running? The question you need to ask is: "What's the slowest rate I can run that loop at before it negatively affects my game?"
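
A minimal sketch of that workaround, pinning the thread that reads QPC to a single core (the helper name and the affinity mask are my choices):

#include <windows.h>

// Read QPC as seconds; call this only from the pinned thread.
static double SecondsNow() {
    static LARGE_INTEGER freq = [] {
        LARGE_INTEGER f;
        QueryPerformanceFrequency(&f);
        return f;
    }();
    LARGE_INTEGER t;
    QueryPerformanceCounter(&t);
    return double(t.QuadPart) / double(freq.QuadPart);
}

int main() {
    SetThreadAffinityMask(GetCurrentThread(), 1);  // pin to core 0
    const double t0 = SecondsNow();
    // ... run the loop, checking SecondsNow() - t0 at a sensible rate ...
    return 0;
}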

