OpenGL gameloop timing with SwapBuffers



Hello, I recently started looking into timers for my game loop, using the WinAPI with performance counters, the multimedia timer, and OpenGL. Using timers seems to be a neat way to structure a game loop. My timer posts a message to my window every 1 ms. On every message, which I call a tick, I update my game logic and perform rendering or buffer swapping as needed.

The problem I am currently running into is finding the right time to swap my buffers when VSync is enabled. As I understand it so far, when VSync is enabled, SwapBuffers blocks until the hardware has done the swapping, which is when the display device is ready to receive the new video information. Therefore, I should be able to get the time right after my last call to SwapBuffers and approximate the next swap time by adding 1/60 s. I then subtract about 1-2 ms to ensure no frame is missed, and use this value to decide when to call SwapBuffers again.

This runs just fine for about 2 seconds, after which my loop starts spending more and more time in SwapBuffers, even though I am recomputing the time with every call. After a while, every tick ends up calling SwapBuffers, reducing the number of ticks per second to 60 instead of 1000. Is there something I don't know about performance counters, multimedia timers, or the SwapBuffers function that might cause this behaviour? I am quite sure my implementation of the idea is correct. Increasing the 1-2 ms margin to 3 ms makes it slow down more quickly, probably since more time is spent in the swap call. I have also tried approximating the time of the next buffer swap by simply incrementing by 1/60 s, instead of getting the time after the swap and adding 1/60 s. The behaviour is the same, although it remains stable for longer.

Any ideas are appreciated. Thank you in advance and sorry for the long post.
Ignifex

[Edited by - Ignifex on October 13, 2008 7:07:55 AM]
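The swap-scheduling arithmetic described above (take the time right after the last swap, add 1/60 s, subtract a small margin) can be sketched portably with std::chrono. This is only an illustration of the idea, not the original code; the helper name nextSwapTime, the 60 Hz period, and the 1.5 ms margin are all assumptions:

```cpp
#include <chrono>

using Clock = std::chrono::steady_clock;
using std::chrono::microseconds;

// Hypothetical helper: given the time of the last completed swap,
// predict when to issue the next SwapBuffers call. One refresh at
// 60 Hz is ~16667 us; a safety margin is subtracted so the call is
// issued slightly before the predicted vblank.
Clock::time_point nextSwapTime(Clock::time_point lastSwap,
                               microseconds refreshPeriod = microseconds(16667),
                               microseconds margin = microseconds(1500))
{
    return lastSwap + refreshPeriod - margin;
}
```

With these defaults, the next swap is scheduled 15167 µs after the previous one completed; each tick would compare the current time against this target before deciding to swap.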

I just noticed my CPU usage increases over time, until it reaches the maximum. I am guessing this is because SwapBuffers is taking more time, but I can't seem to measure how long the call takes. When timing before and after the call, the difference is less than 1 ms.
Any ideas?
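For reference, a minimal, portable way to time a single call looks like this (using std::chrono rather than QueryPerformanceCounter; the helper name timeCallMicros is made up):

```cpp
#include <chrono>
#include <functional>

// Hypothetical helper: measure how long an arbitrary call takes,
// in microseconds. If vsync blocks inside SwapBuffers, the result
// should be on the order of a refresh period (~16 ms at 60 Hz);
// a result under 1 ms suggests the blocking happens elsewhere in
// the GL pipeline rather than inside the swap call itself.
long long timeCallMicros(const std::function<void()>& fn)
{
    auto t0 = std::chrono::steady_clock::now();
    fn();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
}
```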

Performance counters take 1000-3000 cycles per call.
I guess you are busy-waiting in a loop for that 1 ms. This is completely unhealthy.
Sleep() with an argument under ~16 ms also behaves that way.

You can only get 1 ms timing precision without overloading the system by using something like an ASIO sound card issuing IRQs to fetch more sound data.

So, really, never play with 1 ms timing. Here's how to do things correctly:

static int TimePerFrame = 16; // in milliseconds

void ProcessFrame(){
	for(int i = 0; i < TimePerFrame; i++){
		// animate objects, stuff
	}
	// now draw
	// now end frame
	EndFrame();
}

void EndFrame(){
	static int time1 = 0;
	//------[ message-pump ]------------------------[
	MSG msg;
	while(PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)){
		if(msg.message == WM_QUIT) GameShouldExit = true;
		else{
			TranslateMessage(&msg);
			DispatchMessage(&msg);
		}
	}
	//---------------------------------------------/
	SwapBuffers(hdc); // Win32 SwapBuffers takes the window's HDC
	//---[ compute time ]-------------------[
	int curtime = GetTickCount();
	if(time1) TimePerFrame = curtime - time1;
	time1 = curtime;
	//--------------------------------------/
}

About SwapBuffers taking <1ms, it's likely vsync is off. Turn it on via
wglSwapIntervalEXT(1); // this proc should be dynamically loaded.

I am using a multimedia timer to get my main function to run every 1 ms. My timer successfully posts a message to my main window every ms. There, I simply use a loop with GetMessage, which is where I save my CPU time.

I am enabling vsync in my program using the wglSwapIntervalEXT function (checking with wglGetSwapIntervalEXT, so I am certain it is enabled).

I know of the other ways to construct a game loop, as this is not my first time making one. Timers are, as far as I have learned, a neater way of constructing one, because of the ability to save CPU time while maintaining timing precision. Consider this more of an experiment than a question about how game loops are usually done.

You wouldn't want to do this:

static int TimePerFrame=16;// in milliseconds
void ProcessFrame(){
for(int i=0;i<TimePerFrame;i++){
// animate objects, stuff
}

// update 16 times each frame!!!!

Here's a link to a fixed-time-step loop:
http://www.flipcode.com/archives/Main_Loop_with_Fixed_Time_Steps.shtml
(this is what I use)

The other way is a variable-time-step loop, e.g. where you scale movement depending on how long the last frame took.

>>Timers are, for as far as I have learned, a neater way of constructing it, because of the ability to save CPU time while maintaining timing precision.

I'd avoid timers;
just stick a Sleep() call in, e.g. 5 ms (when necessary, perhaps every loop), to save CPU.
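For reference, the accumulator idea behind the linked fixed-time-step article can be sketched like this. This is a minimal, platform-independent sketch of the technique, not the article's code; FixedStepper and updatesFor are hypothetical names, and the 10 ms step in the comments is an example value:

```cpp
// Sketch of a fixed-time-step loop in the spirit of the linked
// flipcode article: render as fast as possible, but advance game
// logic in fixed slices so physics stays deterministic.
// All times are in milliseconds.
struct FixedStepper {
    int stepMs;            // fixed logic step, e.g. 10 ms (100 Hz)
    int accumulator = 0;   // leftover time carried between frames

    explicit FixedStepper(int step) : stepMs(step) {}

    // Feed in the real time the last frame took; returns the number
    // of fixed logic updates to perform before rendering this frame.
    int updatesFor(int frameMs) {
        accumulator += frameMs;
        int steps = accumulator / stepMs;
        accumulator -= steps * stepMs;
        return steps;
    }
};
```

For example, a 16 ms frame with a 10 ms step yields one update and carries 6 ms over; the following 16 ms frame then yields two updates.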

I finally figured out the solution to my problem. It appears that calling glFinish before SwapBuffers is important for timing (at least on my system); otherwise the blocking may occur at another point in the program. I'm not sure whether it is guaranteed to block at the swap now, though...
My program runs steadily at 60 FPS with vsync, allowing for other logic every ms. My CPU usage is <1% (without any actual game components of course :) ).

I understand timers are not always the best choice for a game loop. Especially in FPS games and the like, framerate-independent logic and physics are preferable.
For simpler games, like the 2D action/adventure game I'm now working on, this kind of loop might be a good choice.
