Locking functions to timers

13 comments, last by Hnefi 16 years, 9 months ago
Heya guys! X3non (also a member here) and I are making a game, and we stumbled upon quite a stupid problem :( I want the game logic and the graphics output divided into two "threads" (OMG multithreading!), and I had the idea of locking the logic function to a timer, meaning the logic would keep running on its own, without any actual logic code (except the timer lock) in the main function (WinMain :P). We're using OpenGL, if that's of any help. Oh, and I got this idea from the Allegro game library, where I've successfully done this (the function is install_int_ex). So, can anyone assist us with this lil' problem here? HALP!
Party!
I'm not exactly sure what you're looking for, as it wasn't very clear to me. You can do it via multithreading, or you could just call your logic loop every x milliseconds and your render loop as fast as possible. In our engine you can give each its own fixed rate: our logic updates 30 times a second, while we currently let the renderer run maxed out.
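Something along these lines, as a rough single-threaded sketch (GetTicksMs, GameLogic, RenderFrame and quitRequested are just placeholder names for whatever timer and game functions you use, not code from our engine):

#include <cstdint>

// Placeholder declarations -- substitute your own timer and game functions.
uint32_t GetTicksMs();   // e.g. a wrapper around timeGetTime() on Windows
void GameLogic();        // one fixed-size logic step
void RenderFrame();      // draw the current state
bool quitRequested();

void RunLoop()
{
    const uint32_t logicStepMs = 33;       // roughly 30 logic updates per second
    uint32_t previous = GetTicksMs();
    uint32_t accumulator = 0;

    while (!quitRequested())
    {
        uint32_t now = GetTicksMs();
        accumulator += now - previous;     // time that passed since the last frame
        previous = now;

        while (accumulator >= logicStepMs) // catch up if rendering took too long
        {
            GameLogic();
            accumulator -= logicStepMs;
        }

        RenderFrame();                     // render as fast as possible
    }
}

The inner while lets the logic catch up when a render frame takes longer than one logic step, so the game speed stays the same even if the framerate dips.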

"Those who would give up essential liberty to purchase a little temporary safety deserve neither liberty nor safety." --Benjamin Franklin

Just check to see how much time has passed in your main loop, and if enough time has passed, call your function.
Quote:Original post by Mike2343
...you could just call your logic loop every x milliseconds and your render loop as fast as possible.


Yes, that was the idea, but do you have any suggestions regarding that?
How can we make our logic loop be called every x milliseconds while the rendering loop stays "independent"?
Can it be done like this:

loop
    check if 30 ms have passed; if yes, run the logic
    either way, run the graphics
    begin the loop from the start

The problem here would be if rendering took more than 30 ms, making the game "slower" and complicating the code, because we would have to check how much time has passed and put multipliers into our logic :/
All we want is two independent parts of the program: logic and graphics (rendering).
So, any ideas on how we could do what's quoted above?

Hope this clears some stuff up.
Party!
Sounds like you want to "force" a fixed "framerate" for your game logic so that you don't have to multiply your movement speed and other update logic by a "time-passed" factor. Personally I'm not very fond of this because, although the game logic might become a tad easier for you (debatable), it is not robust, for the reason you gave yourself: what if rendering takes longer than 30 ms, what if the update itself, due to heavy processor load, doesn't finish within 30 ms, or what if the logic doesn't run exactly on those 30 ms boundaries?

You have little control over those factors if you run your game on different computers. To make it a bit more robust in such situations, you could introduce a real thread that does the logic and guard the whole rendering procedure and logic procedure with a mutex. I believe the default Windows time-slicing quantum is 25 ms (off the top of my head), so if your two threads are the only ones consuming processor time, your logic thread is "guaranteed" to get some processor time every 25 ms.
Little trick: throw in a Sleep(0) at the end of your rendering/logic thread to force a context switch for greater precision.
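Very roughly, that setup could look like the sketch below. It uses std::thread and std::mutex rather than the raw WinAPI calls, and UpdateLogic/RenderFrame are made-up placeholders, so treat it as an illustration of the idea rather than drop-in code:

#include <atomic>
#include <chrono>
#include <mutex>
#include <thread>

std::mutex gStateMutex;             // guards the shared game state
std::atomic<bool> gRunning(true);

void UpdateLogic() { /* placeholder: advance the game state one step */ }
void RenderFrame() { /* placeholder: draw the current state */ }

void LogicThread()
{
    while (gRunning)
    {
        {
            std::lock_guard<std::mutex> lock(gStateMutex);
            UpdateLogic();
        }
        // Sleep until (roughly) the next 30 ms logic step.
        std::this_thread::sleep_for(std::chrono::milliseconds(30));
    }
}

void RenderThread()
{
    while (gRunning)
    {
        {
            std::lock_guard<std::mutex> lock(gStateMutex);
            RenderFrame();
        }
        // Portable cousin of the Sleep(0) trick: give up the rest of the time slice.
        std::this_thread::yield();
    }
}

int main()
{
    std::thread logic(LogicThread);
    std::thread render(RenderThread);

    // Placeholder: in a real game the main thread would pump messages until quit.
    std::this_thread::sleep_for(std::chrono::seconds(5));
    gRunning = false;

    logic.join();
    render.join();
}

The lock_guard scopes keep the mutex held only while the shared state is actually being touched, so the two threads interleave instead of starving each other.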

In theory this should have the effect you want: (relatively) fixed logic timing, and relative robustness against your rendering taking more than 30 ms. But my gut feeling still says it's better to multiply all game logic by a time-passed factor every frame, because that would also make the system robust against any outside factors influencing the timing.
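For clarity, the time-passed factor I mean looks roughly like this (just a sketch; GetTicksMs, RenderFrame and the Entity struct are made-up placeholder names):

#include <cstdint>

// Placeholder declarations -- substitute your own timer, entity type and render call.
uint32_t GetTicksMs();
void RenderFrame();

struct Entity { float x = 0.0f, velocityX = 100.0f; };   // 100 units per second

void RunVariableTimestep(bool& done)
{
    Entity player;
    uint32_t previous = GetTicksMs();

    while (!done)
    {
        uint32_t now = GetTicksMs();
        float dt = (now - previous) / 1000.0f;   // seconds elapsed since the last frame
        previous = now;

        player.x += player.velocityX * dt;       // same speed no matter the framerate
        RenderFrame();
    }
}

The multiply-by-dt line is the whole trick: the update scales with however much real time actually passed, so a slow frame just produces a bigger step instead of slowing the game down.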
STOP THE PLANET!! I WANT TO GET OFF!!
Personally, I don't like the idea of multiplying everything by a time factor, because if you are running on specs below the recommended requirements anyway, that will cause the game to run even slower. Sure, things will always move at the correct speed, but your framerate will suffer more.

Since you are apparently using the WinAPI, I recommend using WinAPI waitable timers. This gives you the added bonus that the program only uses the CPU time it actually needs to run at full speed, leaving CPU resources free for other things while the game is running, should you need them for whatever reason.

Doing so looks something like this:
#include <windows.h>

// liDueTime and hTimer declared here; done, GameLogic() and GameRender() come from your own code.
LARGE_INTEGER liDueTime;
HANDLE hTimer;

liDueTime.QuadPart = -100000;   // 100 Hz (max): relative due time of 100,000 * 100 ns = 10 ms
hTimer = CreateWaitableTimer(NULL, TRUE, TEXT("WaitableTimer"));
if (NULL == hTimer)
{
    // error handling
}
if (!SetWaitableTimer(hTimer, &liDueTime, 0, NULL, NULL, 0))
{
    // error handling
}

while (!done)
{
    SetWaitableTimer(hTimer, &liDueTime, 0, NULL, NULL, 0);
    GameLogic();
    GameRender();
    if (WaitForSingleObject(hTimer, INFINITE) != WAIT_OBJECT_0)
    {
        // error handling
    }
}
-------------Please rate this post if it was useful.
*blink*

Are you the same Hnefi I know from the Supreme Commander official forums?
NextWar: The Quest for Earth available now for Windows Phone 7.
The very same. How many Hnefis are there on the net with an egocentric Latin signature? ;)
-------------Please rate this post if it was useful.
If you don't care about the accuracy of your timers, then you can get by with a single-threaded model.

Create a sorted queue (a priority queue works as well). When you enqueue an event, set its expiry time to (currTime + duration). On each pass through the main loop, scan for all the events that have expired (event.expiryTime < currTime) and call their handlers. The sorted property (or the priority queue) ensures that you can find the expired events very efficiently.
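A rough sketch of that, assuming a millisecond clock called GetTimeMs (a made-up name) and std::function handlers:

#include <cstdint>
#include <functional>
#include <queue>
#include <vector>

uint64_t GetTimeMs();   // placeholder for whatever millisecond clock you use

struct TimerEvent
{
    uint64_t expiryTime;                  // absolute time at which the event fires
    std::function<void()> handler;
};

// Order the queue so the soonest-expiring event sits on top.
struct LaterFirst
{
    bool operator()(const TimerEvent& a, const TimerEvent& b) const
    {
        return a.expiryTime > b.expiryTime;
    }
};

std::priority_queue<TimerEvent, std::vector<TimerEvent>, LaterFirst> gEvents;

void Enqueue(uint64_t durationMs, std::function<void()> handler)
{
    gEvents.push(TimerEvent{ GetTimeMs() + durationMs, std::move(handler) });
}

// Call this once per pass through the main loop.
void DispatchExpired()
{
    const uint64_t now = GetTimeMs();
    while (!gEvents.empty() && gEvents.top().expiryTime < now)
    {
        TimerEvent ev = gEvents.top();    // copy out before popping
        gEvents.pop();
        ev.handler();
    }
}

Because the queue is ordered by expiry time, the loop only ever looks at the front; it stops at the first event that hasn't expired yet.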

This will give you the resolution of one frame for your timers, which should probably be enough for most purposes.

For high-resolution, accurate timers, locking, threading and asynchronous programming come into play.

Quote:Original post by Hnefi
Personally, I don't like the idea of multiplying everything by a time factor, because if you are running on specs below the recommended requirements anyway, that will cause the game to run even slower. Sure, things will always move at the correct speed, but your framerate will suffer more.


Are you saying that the few thousand extra multiplies that might be required would increase processing time by any significant amount? How many millions of floating-point multiplications can any modern FPU do in under a second?

