// Initialise the timer
LONGLONG current_time;
DWORD time_counter;
LONGLONG QPF_count;
BOOL QPF_flag = FALSE;
LONGLONG next_time = 0;
LONGLONG last_time = 0;

while (!done)
{
    if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
    {
        if (msg.message == WM_QUIT)
        {
            done = true;
        }
        else
        {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
    }
    else
    {
        if (QueryPerformanceFrequency((LARGE_INTEGER *)&QPF_count))
        {
            QPF_flag = TRUE;
            time_counter = (DWORD)QPF_count / 60;
            QueryPerformanceCounter((LARGE_INTEGER *)&next_time);
        }
        else
        {
            QPF_flag = FALSE;
            next_time = timeGetTime();
            time_counter = 60;
        }

        if (QPF_flag)
        {
            QueryPerformanceCounter((LARGE_INTEGER *)&current_time);
        }
        else
        {
            current_time = timeGetTime();
        }

        if (current_time > next_time)
        {
            last_time = current_time;
            Game::GetInstance()->Update();
#ifndef _CONSOLEONLY_
            Game::GetInstance()->Render();
            Game::GetInstance()->Swap();
#endif
            next_time = current_time + time_counter;
        }
    }
} // Game loop
How can I add a limiter to this to lock my logical update to a fixed timestep, e.g. 40 fps?
I was thinking it would be something along the lines of if (current_time - last_time > 1/FRAME_RATE), however what units are these timings in? Milliseconds?
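For reference, the usual fixed-timestep pattern looks something along these lines. This is only a sketch using std::chrono rather than the framework's Windows timers, and FIXED_DT and StepsToRun are illustrative names, not part of the original framework:

```cpp
#include <chrono>

// Fixed logical timestep: 40 updates per second.
constexpr double FIXED_DT = 1.0 / 40.0;   // seconds per logical update

// Given the accumulated, not-yet-simulated time in seconds, return how
// many fixed-size logical updates to run, and leave the remainder behind
// in the accumulator for next frame.
int StepsToRun(double& accumulator)
{
    int steps = 0;
    while (accumulator >= FIXED_DT)
    {
        accumulator -= FIXED_DT;
        ++steps;
    }
    return steps;
}

// Sketch of how it would slot into the game loop:
//
//   auto last = std::chrono::steady_clock::now();
//   double accumulator = 0.0;
//   while (!done)
//   {
//       auto now = std::chrono::steady_clock::now();
//       accumulator += std::chrono::duration<double>(now - last).count();
//       last = now;
//       for (int i = StepsToRun(accumulator); i > 0; --i)
//           Game::GetInstance()->Update();   // always advances by FIXED_DT
//       Game::GetInstance()->Render();       // render as often as you like
//   }
```

The key point is that Update() always simulates exactly FIXED_DT seconds, which is what a Verlet integrator wants, while rendering runs as fast as the machine allows.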
I look forward to hearing from you!
Thanks
Chris
QueryPerformanceTimer & Fixed Timestep?
Hi all, I'm currently playing about with a really old framework I made at uni, using it to build a cloth simulation with Verlet integration. I have the cloth working and looking good, but I really *need* a fixed timestep to get sensible results.
The code I am using handles its timings with QueryPerformanceCounter(), which was written for us, however I have no idea what is really going on here.
The code is as follows:
This code is already set up for a limited update rate, except that one of the code blocks is in the wrong place.
The first if/else block, which tries to use the performance counter and falls back to timeGetTime, should only be entered once, before the main loop. time_counter is the number of performance counter ticks to wait between frames, or the number of milliseconds to wait if there is no performance counter. There's another bug: as currently written, with the performance counter the target framerate will be 60 fps (performance counter ticks per second divided by 60 = ticks per 1/60th of a second), but the timeGetTime path is set up to wait 60 ms between frames.
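To put numbers on that mismatch, here is the arithmetic as a sketch. The 10 MHz frequency in the test values is an assumption for illustration (the real value varies from machine to machine), and both function names are made up:

```cpp
// Performance-counter path: ticks per 1/60th of a second.
long long TicksPerFrame(long long qpf_frequency)
{
    return qpf_frequency / 60;   // truncating integer division
}

// What the timeGetTime fallback should use: whole milliseconds per
// 1/60th of a second. The original code waits 60 ms here instead,
// which would cap it at roughly 16.7 fps.
int FallbackMsPerFrame()
{
    return 1000 / 60;            // truncates to 16 ms
}
```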
Hi vorpy, thanks for the quick response! I can see the issue with the first bug, but could you possibly explain the second 'bug' you identified a little better?
Hi again vorpy, sorry, I was reading your post incorrectly; I think I understand where you are coming from now.
So if QueryPerformanceCounter is used, the code will run correctly at 60 fps (assuming I have moved the if statement :) ), however if it falls back to the other function it is working in milliseconds instead of counter ticks.
So basically I need the time_counter in this instance to be the millisecond equivalent of 60 fps, which off the top of my head is about 16.7?
Thanks :D
Yes, except that the timing is done using integer types instead of floating point types.
With the performance counter, the fps will be slightly different from 60 fps, depending on how much resolution it has, because of the truncating division.
There's also no way to check if 16.7 milliseconds have gone by with timeGetTime, since it only has millisecond resolution, so depending on how you round it you'd get somewhere around 59 or 62 fps. The performance counter will do the same thing, but if it has higher resolution (say, 10,000 ticks per second) then the error will be even smaller. For a cloth simulation I don't think a few frames per second are going to matter.
With timeGetTime, the frequency is always 1000 Hz, so I think the intended line was really 1000/60. Either that, or it was intended to drop to 16.7 fps when there's no performance counter.
If you really wanted to get fancy, you could add some code to accumulate the errors and adjust the wait time accordingly. In the case of the timeGetTime fallback, some versions of Windows might only give you 10 ms resolution, and even the performance counter can have weird issues. Timing across different versions of Windows is really screwy.
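A minimal sketch of that error-accumulation idea for the timeGetTime fallback, using integer math only (the FrameTimer name is just for illustration):

```cpp
// The ideal frame time is 1000/60 = 16.666... ms, which has to be
// rounded to whole milliseconds each frame. Tracking the leftover
// fraction in units of 1/60 ms and carrying it forward makes the
// waits average out to exactly 60 fps.
struct FrameTimer
{
    int remainder = 0;   // leftover time, in units of 1/60 ms

    // Whole milliseconds to wait before the next frame.
    int NextWaitMs()
    {
        remainder += 1000;           // one frame = 1000/60 ms = 1000 units
        int wait = remainder / 60;   // whole milliseconds this frame
        remainder -= wait * 60;      // keep the fraction for next frame
        return wait;
    }
};
```

Individual waits come out as 16 or 17 ms, but over any 60 consecutive frames they sum to exactly 1000 ms, so the long-run average is exactly 60 fps.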
Do you need to call QueryPerformanceFrequency each time through the loop? Doesn't the frequency remain constant throughout the duration of the process? Indeed, the documentation for that function says: "The frequency cannot change while the system is running."
What I wonder is if QueryPerformanceCounter/Frequency uses the APIC timer if it is available rather than the TSC register. Seems to me this would avoid all the multiprocessor/multicore issues.
The APIC timer is not universally supported, and reported to be quite buggy :) I've never seen it used. You probably mean the ACPI PM timer instead. However, RDTSC is definitely better (faster, higher resolution), so the OS is correct in choosing it. The thing is, they're just messing up the implementation (it is possible for the OS to make RDTSC perfectly safe).
See timing thread and article/code therein for details.
Indeed, I was thinking of the ACPI PM timer, not the APIC timer. I know Linux supports using it as a high-resolution timer; I'm wondering if Windows XP or Vista also supports it. If one has a CPU that correctly implements the TSC and doesn't alter its frequency when switching into a lower power mode, or otherwise randomly stall RDTSC, then the TSC is clearly superior. And that's the problem: it can be difficult for an application to determine whether that is the case, which is why I'd rather sacrifice a tiny bit of speed due to higher access time in order to ensure near-absolute accuracy and consistency. In either case you are getting a timer resolution on the order of a microsecond or better.