
ChilledOut

OpenGL frame timing


Recommended Posts

This is more of a Windows question than an OpenGL question, but here's the skinny. I am trying to animate with OpenGL rendering and I have no clue how to get access to a decent clock by which to synchronize my frames.

I tried using SetTimer(NULL, 0, 16, TimerProc);. My goal was to have my animation timing capable of ~60 fps, since that's close to the monitor refresh rate. I expected my TimerProc to be called every 60th of a second, ON the 60th of a second, like a hardware interrupt. I now realize this isn't the behavior I get at all.

So how the heck do I do high-resolution timing in Windows?! I did it in DOS using an interrupt hook, but my books say this is a no-no in Windows. Said books also offer no usable alternative. Help?!

PS. Right now I am achieving 60 fps by syncing to the monitor refresh rate and just blasting out the frames, letting OpenGL slow my program down. But this is crap because monitor refresh rates change with resolutions, and also when the frame rate dips my program goes sluggish.
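For reference, here is a minimal sketch (not from the original post; UpdateAnimation and RenderFrame are hypothetical placeholders) of the loop structure the replies below assume: measure how long the previous frame took and advance the animation by that elapsed time, so the animation speed no longer depends on the refresh rate or on how quickly frames come out. GetTickCount() is only a stand-in clock here; the replies cover higher-resolution alternatives.

#include <windows.h>

void UpdateAnimation(float dtSeconds);  // hypothetical: advance the animation state
void RenderFrame();                     // hypothetical: draw and SwapBuffers()

void RunLoop(volatile bool &running)
{
    DWORD prev = GetTickCount();
    while (running)
    {
        DWORD now = GetTickCount();
        float dt = (now - prev) / 1000.0f;  // seconds since the last frame
        prev = now;

        UpdateAnimation(dt);  // advance by elapsed time, not by frame count
        RenderFrame();
    }
}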

All right, a little addendum to my problem.

I've modified my code to use calls to GetTickCount instead of a timer callback, and it works a little better (it actually runs at the intended rate instead of at about 20% of the intended speed!).

Alas, GetTickCount just ain't accurate enough! The frames of animation are relatively even, but there is enough inaccuracy to cause a visual herky-jerky effect. Is there no way I can access the internal hardware timing interrupts a la DOS?

Use the function

DWORD timeGetTime(void);

declared in the Windows Multimedia API header mmsystem.h (remember to link with 'Winmm.lib').

The function returns the time elapsed since Windows started, in milliseconds, and it's quite accurate (about 1 ms).

See the help files for how to change the timing accuracy (but I think the default is enough for your purpose).
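As an illustration, here is a minimal sketch (assuming a hypothetical AnimateFrame function) of driving a frame loop with timeGetTime. timeBeginPeriod/timeEndPeriod are the documented calls for requesting a finer timer resolution, which is the "timing accuracy" mentioned above.

#include <windows.h>
#include <mmsystem.h>   // timeGetTime, timeBeginPeriod; link with Winmm.lib

void AnimateFrame(float dtSeconds);  // hypothetical: update and draw one frame

void Run(volatile bool &running)
{
    timeBeginPeriod(1);              // request 1 ms timer resolution
    DWORD prev = timeGetTime();
    while (running)
    {
        DWORD now = timeGetTime();
        float dt = (now - prev) / 1000.0f;  // elapsed seconds, ~1 ms precision
        prev = now;
        AnimateFrame(dt);
    }
    timeEndPeriod(1);                // restore the previous resolution
}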

You may also want to consider QueryPerformanceCounter. Though it's alleged not to *always* be available (the function can fail), in practice it never does, and it provides much higher resolution than GetTickCount.

Use this to get the number of ticks per second:
__int64 sys_tick_rate;
QueryPerformanceFrequency((LARGE_INTEGER*)&sys_tick_rate);

And use this to get the current tick:
__int64 curr_tick;
QueryPerformanceCounter((LARGE_INTEGER*)&curr_tick);
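Putting the two calls together, a minimal sketch (the helper below is an assumption, not from the post) of converting counter ticks into elapsed seconds; note that the frequency is ticks per second, so you divide the tick delta by it:

#include <windows.h>

// Returns seconds elapsed since the first call, using the performance counter.
double ElapsedSeconds()
{
    static LARGE_INTEGER freq  = { 0 };
    static LARGE_INTEGER start = { 0 };
    if (freq.QuadPart == 0)
    {
        QueryPerformanceFrequency(&freq);   // ticks per second
        QueryPerformanceCounter(&start);
    }
    LARGE_INTEGER now;
    QueryPerformanceCounter(&now);
    return (double)(now.QuadPart - start.QuadPart) / (double)freq.QuadPart;
}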

The other alternative is to use the RDTSC (read timestamp counter) instruction:

unsigned __int32 dwlow, dwhigh;
__int64 curr_tick;

__asm
{
    rdtsc              // low 32 bits of the counter -> EAX, high 32 bits -> EDX
    mov dwlow, eax
    mov dwhigh, edx
}

curr_tick = ((__int64)dwhigh << 32) | (__int64)dwlow;

And to find out what the tick frequency is...well...the only way I can think of for now is to get the current tick, delay for a millisecond, get the current tick again, then find the difference. If anyone knows a better way to do this...
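A rough sketch of that calibration idea (assumptions: 32-bit MSVC inline assembly, a ~500 ms Sleep instead of a single millisecond to reduce noise, and timeGetTime from Winmm.lib to measure how long the delay actually was):

#include <windows.h>
#include <mmsystem.h>   // timeGetTime; link with Winmm.lib

// Read the CPU timestamp counter (32-bit MSVC inline assembly).
__int64 ReadTsc()
{
    unsigned __int32 lo, hi;
    __asm
    {
        rdtsc
        mov lo, eax
        mov hi, edx
    }
    return ((__int64)hi << 32) | (__int64)lo;
}

// Estimate RDTSC ticks per second by sampling across a known-ish delay.
__int64 EstimateTicksPerSecond()
{
    __int64 t0 = ReadTsc();
    DWORD   m0 = timeGetTime();
    Sleep(500);                          // the delay itself need not be exact...
    __int64 t1 = ReadTsc();
    DWORD   m1 = timeGetTime();          // ...because we measure what it really was
    return (t1 - t0) * 1000 / (__int64)(m1 - m0);
}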

