Fast timer, NOT win32


I am programming a 3D engine to support multiple OSes. On Win32 I can use timeGetTime(), etc., but what would I use on Linux? clock() is way too coarse! Completely stumped. I'd be very grateful for some info on this. Thanks

Have you had a look at what's in the Linux time.h header? Some of the functions there might be of use to you.
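For instance (untested, and strictly speaking it lives in sys/time.h rather than time.h), gettimeofday() reports wall time with microsecond resolution:

#include <sys/time.h>
#include <stdio.h>

int main(void)
{
    struct timeval start, end;
    gettimeofday(&start, 0);   /* seconds + microseconds since the Epoch */
    /* ... do some work ... */
    gettimeofday(&end, 0);
    long usec = (end.tv_sec - start.tv_sec) * 1000000L
              + (end.tv_usec - start.tv_usec);
    printf("elapsed: %ld microseconds\n", usec);
    return 0;
}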

--
Very simple ideas lie within the reach only of complex minds.

Edited by - SabreMan on February 17, 2002 9:22:47 AM

quote:
Original post by dusik
Why, is the time.h header different for Linux than for Windows? I thought it was a standard C header.


Whoops, yes. I was working from memory, which often isn't the most accurate of techniques. Especially when I haven't used the header in question for over a year!

--
Very simple ideas lie within the reach only of complex minds.

I would create a simple function (probably inline), along the lines of this:

#include <time.h>

/* Returns 1 once the deadline held in 'timer' has passed, 0 otherwise. */
int checktimer(time_t timer) {
    if (timer <= time(NULL)) {
        return 1;
    }
    return 0;
}

Then you would use it by first creating a timer variable and then calling checktimer() to see if the deadline had occurred:

#include <stdio.h>

time_t timer = time(NULL) + 1;  /* deadline one second out (time() counts whole seconds) */

while (checktimer(timer) == 0) { }  /* busy-wait until it passes */
printf("Second is over.\n");


(http://www.ironfroggy.com/)
(http://www.ironfroggy.com/pinch)

I still don't see, though, how this would allow me to calculate, say, frames per second. Say I've got 100 FPS; that means each frame takes 10 milliseconds to render. So I would do something like:

  
void TimerTick()
{
    static time_t prevtime = 0;      /* no previous frame on the first call */
    time_t currtime = time(NULL);

    /* time() only ticks once a second, so guard against a zero delta */
    if (prevtime != 0 && currtime != prevtime)
        g_FPS = 1.0f / (float)(currtime - prevtime);

    prevtime = currtime;
}


That is similar to what I've got now (though simplified), but instead of time() I use timeGetTime() from the Win32 API. The reason is, timeGetTime() is accurate to 1 millisecond, which is what I need, whereas the functions in time.h (clock(), time()) are way too coarse.
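What I'm after is something like this wrapper (the name GetMilliseconds() is just made up for illustration; timeGetTime() on Win32 and, if I understand the suggestions here correctly, gettimeofday() elsewhere):

#ifdef _WIN32
#include <windows.h>   /* timeGetTime(); link with winmm.lib */

unsigned long GetMilliseconds(void)
{
    return timeGetTime();
}
#else
#include <sys/time.h>

unsigned long GetMilliseconds(void)
{
    struct timeval tv;
    gettimeofday(&tv, 0);
    /* wraps around eventually, like timeGetTime(), but deltas still work */
    return (unsigned long)tv.tv_sec * 1000 + tv.tv_usec / 1000;
}
#endif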

Also, if I'm not mistaken, ironfroggy, the code you wrote is a delay, not a measure of time. I need to measure how long the frame took to render, not pause the programme for any amount of time.

But thanks anyway.

If it's for PC (well, post-Pentium) then can't you just use RDTSC, the read time-stamp counter instruction? Something similar must exist on the Mac, etc., so you could wrap it in a function pointer or #define.

It'll give you resolution down to a single clock cycle (so hundreds of millions of ticks per second).


There's an Intel link here:
http://cedar.intel.com/cgi-bin/ids.dll/content/content.jsp?cntKey=Legacy::irtp_RDTSCPM1_12033&cntType=IDS_EDITORIAL
Or just search for RDTSC on google
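On Linux with gcc, reading the counter would look something like this (a sketch, untested; the function name is just illustrative):

/* Read the 64-bit timestamp counter on x86. RDTSC puts the low
   dword in EAX and the high dword in EDX. */
unsigned long long rdtsc(void)
{
    unsigned int lo, hi;
    __asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
    return ((unsigned long long)hi << 32) | lo;
}

Note it returns raw clock ticks, so you still have to divide by the CPU frequency to get seconds.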

Edited by - chrisflatley on February 17, 2002 6:43:18 PM

  
// Returns the raw timestamp counter. CPUID serializes the pipeline
// so RDTSC can't execute out of order; RDTSC leaves the 64-bit count
// in EDX:EAX, which is exactly where an __int64 is returned, so we
// can RET immediately.
__int64 __declspec(naked) __fastcall RDTSC()
{
    __asm
    {
        push ebx    // cpuid clobbers ebx, which callers expect preserved
        cpuid
        rdtsc
        pop ebx
        ret
    }
}


Declare the __int64s you stuff the result into as volatile __int64.


The problem is, you first need to time the CPU with one of the ordinary timing functions to determine the clock speed in MHz, which you need in order to convert rdtsc ticks into time. (You need the Processor Pack for MSVC6 for this to compile.)
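A rough calibration sketch (assuming the RDTSC() above; the names CalibrateRDTSC and g_TicksPerSec are made up): spin for a known wall-clock interval and count how many ticks elapse.

#include <windows.h>    // timeGetTime(); link with winmm.lib

volatile __int64 g_TicksPerSec = 0;

void CalibrateRDTSC()
{
    DWORD start = timeGetTime();
    volatile __int64 t0 = RDTSC();
    while (timeGetTime() - start < 500)
        ;                               // busy-wait roughly half a second
    volatile __int64 t1 = RDTSC();
    g_TicksPerSec = (t1 - t0) * 2;      // 500 ms worth of ticks -> ticks/sec
}

After that, elapsed seconds is just (RDTSC() - then) / (double)g_TicksPerSec.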

Magmai Kai Holmlor

"Oh, like you''ve never written buggy code" - Lee
