Archived

This topic is now archived and is closed to further replies.

SIGMA

timeGetTime scale?


SIGMA    122
Hello, I was just wondering what the timeGetTime() function returns for its time scale, i.e. how many timeGetTime() units does it take to make a second? Thanks, -Jesse

grady    122
quote:
The timeGetTime function retrieves the system time, in milliseconds. The system time is the time elapsed since Windows was started.


ekenslow    122
Actually, you should process time as integer deltas rather than convert it to floating point, to avoid floating-point precision errors (it's also much faster).

In other words, just store all your times as DWORDs.


--
Eric

SIGMA    122
Okay, if it's in milliseconds, wouldn't I divide it by 1,000? Like: DWORD time = timeGetTime()/1000 ... so if it returned 5000 milliseconds, it'd be 5 seconds? Or am I seriously confused?
-Jesse

craymail    122
Yes, you are correct, but if you are planning on doing game programming and using timeGetTime() to keep a frame-rate-independent time scale, you will not want to deal in seconds (you don't want your frame to change every 5 seconds; you want it to change every xxx milliseconds).

Maybe I am just tired and talking crap, but hell

Cray

But dividing numbers is very slow, so you might as well convert to floating point and multiply (by 0.001, which is 1/1000); together that is probably about the same cost as an integer divide. But that's just my opinion.

~CGameProgrammer( );

foobar    122
Just some suggestions you might find helpful or not:

Use the milliseconds unmultiplied and undivided;
divide only before you output anything.
BTW: use timeBeginPeriod() to set a higher resolution.
The default is 5 ms or more on NT/2000, but 1 ms is possible on nearly every system.

If you don't need a high resolution, use GetTickCount().
It's much faster.

bye,
-- foobar


Edited by - foobar on January 30, 2002 5:03:33 PM

craymail    122
CGameProgrammer:

Shouldn't any _good_ compiler convert any simple division by a constant (say /100) to its multiplication counterpart? As far as I know they generally do, because multiplication is usually faster than division.

My logic anyways

Cray

Taulin    100
Speaking of timeGetTime, I have had something weird happen to me.

Everything is working great, and the average FPS gets spit out into a debug text file at the end.

Oops, I lock up my computer, and reboot (I am using WinME).
I run the program again, but it LOOKS like I am getting about 2 FPS now. I check my text output, and it says I am still getting 120 FPS.

I then proceed to check my calculations for about an hour, then take an hour break after getting nowhere. I come back later, and everything looks fine again.

As a test, I reboot my computer, and again it looks slow, but it is actually still updating fast. My objects are updated according to time.

I am computing how much time has passed each frame, and my objects use that as the basis of their movement. So why would rebooting my computer affect anything?

Since timeGetTime gives milliseconds, and I am using only the difference, smaller return values from the function (computer just being turned on) should not make a difference.

I am evidently missing something here. Anyone have any suggestions?

S1CA    1418
If the elapsed time is being used to calculate things like rotation angles, you should be aware that some CPUs have maximum input ranges for functions like sin() and cos() - if you are doing something like:

a = time * 0.001f;
c = cos( a );

or even

a += elapsed * 0.001f;
c = cos( a );

Then after a while, the inputs to those functions will be too high.


Also, as mentioned above, don't put the value from timeGetTime() directly into a float, or even cast it (explicitly or implicitly), because the bigger the number gets, the more fractional precision you can lose. Always compute the elapsed part as an unsigned integer; if you need a float, cast the result variable to float.



--
Simon O'Connor
Creative Asylum Ltd
www.creative-asylum.com

foobar    122
resolution...
if you, say, use GetTickCount() and query it every millisecond,
you'll get values like:
xxx00
xxx00
xxx00
xxx00
xxx00
xxx00
xxx00
xxx18
xxx18
xxx18
xxx18
xxx18
xxx18


and so on. That means the smallest difference between returned values:
you would expect it to be 1, but on NT/2000 this is not even true for timeGetTime().
That's timer resolution.

quote:
Shouldn't any _good_ compiler convert any simple division by a constant (say /100) to its multiplication counterpart? As far as I know they generally do, because multiplication is usually faster than division


No. For integers this is impossible, of course, and integer division is what I was talking about. For floats, I think there will be slight precision errors, so it is left to the programmer to decide whether to multiply or divide. However, operations with constants are resolved at compile time, so you can multiply by (1.0f/1000.0f) and the compiler should convert that to 0.001f. Useful for more complex numbers.

~CGameProgrammer( );

foobar    122
So...
any good compiler will substitute "/ const" with "* const" if you tell it to;
there are optimization switches to do so.
And: if you have something like "/4.0f", it can be optimized anyway, because 1.0f/4.0f is an "exact" number.
If you have "/10.0f", things are different of course.

on the rotation-angle stuff:

yes, you should always "wrap around" angles to (0 to 2*pi) or (-pi to pi), whichever you prefer.

or, even better, use DWORDs; they have "automatic wraparound" and you can easily convert them to float/double just before calling any trigonometric functions with

  
// somewhere in your globals.h:

static const float fDWORD_Range = float(1UL<<31)*2.0f; // 2^32, for dumb compilers ;)
// static const float fDWORD_Range = float(1i64<<32); // for smart ones like MSVC++ 6.0 and up


// in your code:

DWORD dw_phi = 1234567; // or any value

float phi = float(dw_phi)*(2.0f*PI/fDWORD_Range); // this is just one fmul == really cheap



bye,
-- foobar

SIGMA    122
I found a really good solution for DirectX (or at least a really good solution for me):
float time = DXUtil_Timer(TIMER_GETABSOLUTETIME);
This for some reason all of a sudden seemed to work wonders; now my time-based kinematics work and the FPS counter works. It's great!
-Jesse
