Long uptime (float or double for time)


Does anyone have experience in writing real-time systems that are designed for long up-times? Specifically, I'm looking for advice on how to handle the timer.

I've got a mostly-finished system built on a commercial game engine, which has to ship in a few months, but only *now* have I been told that my component (the real-time graphics part) has to be able to run "for a long time", such as a week.

My problem is that after about 6 or 7 days, lots of components within the engine start getting weird delta values. Delta-time (elapsed frame time) is either measured as ~0ms or ~60ms - no other values ever show up (whereas it's usually anywhere between 10ms and 60ms).

I'm pretty sure I can fix this just by replacing all of the float time variables with doubles, but this means modifying a large portion of the game engine. If there are any other solutions, please let me know!

To check my hypothesis, I wrote this little test application to have a guess at how much time can go by before a float is no longer accurate at measuring delta-time. As expected, things really start falling apart after 6 or 7 days, but I was surprised to see that even after 1 day significant errors start showing up.
                       Days    0    1    2    3    4    5    6    7    8
                                                                        
Frame error in ms @ 15  FPS:   0    3    4    4    4    4    4    4    4
Frame error in ms @ 30  FPS:   0    2    2    2    2    2    2   29   29
Frame error in ms @ 60  FPS:   0    1    1    1   14   14   14   16   16
Frame error in ms @ 100 FPS:   0    2    5    5    9    9    9    9    9
                                                                        
Delta should be 66ms, but is: 66   70   62   62   62   62   62   62   62
Delta should be 33ms, but is: 33   31   31   31   31   31   31   62   62
Delta should be 16ms, but is: 16   15   15   15   31   31   31    0    0
Delta should be 9ms,  but is:  9    7   15   15    0    0    0    0    0
#include <cmath>
#include <iomanip>
#include <iostream>

int main()
{
      for( int test = 0; test < 4; test++ )
      {
            float fps = 0.0f;
            switch( test )
            {
            case 0: fps = 15.0f;  break;
            case 1: fps = 30.0f;  break;
            case 2: fps = 60.0f;  break;
            case 3: fps = 100.0f; break;
            }
            float frameTime = 1 / fps;
            int frameTimeMS = int(frameTime*1000);
            std::cout << "-= Testing "<<fps<<" FPS =-"<<std::endl;
            std::cout << " actual Delta = "<<frameTimeMS<<"ms"<<std::endl<<std::endl;
            for( int days = 0; days < 10; days++ )
            {
                  // Absolute time after 'days' days, stored in a float as the engine does
                  float time1 = 60*60*24*days + 0.5f;
                  float time2 = time1 + frameTime;
                  float delta = time2 - time1;
                  float error = fabsf(frameTime - delta);
                  int deltaMS = int(delta * 1000);
                  int errorMS = int(error * 1000);
                  std::cout << std::setprecision(32);
                  std::cout << "after "<<days<<" days:"<<std::endl;
                  std::cout << "measured Delta = "<<deltaMS<<"ms"<<std::endl;
                  std::cout << "error = "<<errorMS<<"ms"<<std::endl;
                  std::cout << std::endl;
            }
      }
}

If you only need time differences between one frame and the next, do the time calculations using integers, and convert the delta to a float at the end. That will always be accurate.

If you need to store several days at high precision then a double might be appropriate.
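For illustration, a minimal sketch of that integer-delta approach (std::chrono here is just a stand-in for whatever integer tick source the engine already has; the function names are made up):

#include <chrono>
#include <cstdint>

// Milliseconds since the first call, from a monotonic clock.
uint64_t GetTicksMs()
{
    using namespace std::chrono;
    static const steady_clock::time_point start = steady_clock::now();
    return (uint64_t)duration_cast<milliseconds>( steady_clock::now() - start ).count();
}

// Per-frame delta: subtract in integer space (exact), convert to float last.
float NextDeltaSeconds()
{
    static uint64_t previousMs = GetTicksMs();
    uint64_t currentMs = GetTicksMs();
    uint64_t deltaMs = currentMs - previousMs;  // exact, no matter how long the uptime
    previousMs = currentMs;
    return float(deltaMs) / 1000.0f;            // the delta is tiny, so a float is fine here
}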

Quote:
Original post by Hodgman
My problem is that after about 6 or 7 days, lots of components within the engine start getting weird delta values. Delta-time (elapsed frame time) is either measured as ~0ms or ~60ms - no other values ever show up (whereas it's usually anywhere between 10ms and 60ms).

[...]
To check my hypothesis, I wrote this little test application to have a guess at how much time can go by before a float is no longer accurate at measuring delta-time. As expected, things really start falling apart after 6 or 7 days, but I was surprised to see that even after 1 day significant errors start showing up.

Hold on a second, there are a few strange assumptions here.

Firstly, there's quite a difference between accumulating gradual errors on one hand, and seeing everything show up as zero or sixty on the other. They don't look like the same problem. Or if they are the same problem, there's an intermediate step making matters worse that has nothing really to do with the precision.

Secondly, the output you've shown is not the output from the test application you wrote! How can we know the values you show are reasonable?

Thirdly, if delta time is variable, then you have no baseline to compare against, so you can't make any assumptions about error; 7 days or 70 days down the line wouldn't matter. And if it's fixed, then you can use an error-free system such as storing the reciprocal, or just correcting periodically (e.g. doing a fixed-point calculation once to determine the amount of error and adding it on, like a leap-year mechanism).

More details please!

Quote:
I'm pretty sure I can fix this just by replacing all of the float time variables with doubles, but this means modifying a large portion of the game engine.

A little late now, but a single "typedef" would save a lot of hassle here. While going through, it might be worth it to put it in, just in case you want to switch it back later. I find it also makes the code a little more clear -- "TickTime" or something similar is more descriptive than "float", even if they both operate the same.
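For what it's worth, a sketch of that typedef idea (the names are purely illustrative, not from any particular engine):

// One central definition: switching the whole engine between float and
// double becomes a one-line change.
typedef double TickTime;        // was: typedef float TickTime;

TickTime g_absoluteTime = 0;

void Advance( TickTime dt )
{
    g_absoluteTime += dt;       // all time-related code uses TickTime, never raw float
}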


I'm inclined to agree with Adam_42 that small increments in time should be handled via an integer type: the problem you're experiencing is related to the inherent precision problems of floating point numbers. They'll be fairly accurate for nice, middle-range values, provided they aren't operated on too much, but if you get to extremely large or small values or do a lot of operations on them, they'll get farther and farther off base; your small deltas rounding to zero, for example.

If there isn't a good base unit (generally milliseconds in this case) then fine, you do what you have to, but if possible, you should really prefer to use an integer. Even if you don't have a function in your OS that returns a non-floating-point value, you could at least use it as the value passed into the subsystems for greater accuracy in those sections -- just convert the returned value to an int before passing it in. Then you'd only have to change to a double in the main loop, since the values being used by the rest of the system are still in the same general range.

Quote:
If you need to store several days at high precision then a double might be appropriate.


In one week there are 7*24*60*60*1000 milliseconds = 604,800,000.
The maximum value of an unsigned 64-bit integer is 18,446,744,073,709,551,615 (2^64 - 1).

In other words, a 64-bit integer can store time at millisecond resolution for about 585 million years. Unless you can think of a scenario where your uptime would be higher(!), a uint64_t would be a good choice.
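A quick throwaway check of that arithmetic (nothing engine-specific, just printing the figures):

#include <cstdint>
#include <cstdio>

int main()
{
    const uint64_t msPerWeek = 7ULL * 24 * 60 * 60 * 1000;          // 604,800,000
    const double   msPerYear = 365.25 * 24.0 * 60.0 * 60.0 * 1000.0;

    printf( "ms in a week: %llu\n", (unsigned long long)msPerWeek );
    printf( "years of uptime at ms resolution: %.0f\n",
            (double)UINT64_MAX / msPerYear );                       // ~585 million
    return 0;
}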

Quote:
Original post by Nitage
Quote:
If you need to store several days at high precision then a double might be appropriate.


In one week there are 7*24*60*60*1000 milliseconds = 604,800,000.
The maximum value of an unsigned 64-bit integer is 18,446,744,073,709,551,615 (2^64 - 1).

In other words, a 64-bit integer can store time at millisecond resolution for about 585 million years. Unless you can think of a scenario where your uptime would be higher(!), a uint64_t would be a good choice.


It's that sort of thinking that caused the Y2K bug. Banks are slow to change their software, who's to say they won't be using the same computers in half a billion years? To be really safe, you should go for the 128 bit unsigned integer. That way you can outlast the lifespan of the universe. By then, those banks should be upgraded to at least some Pentium 4 mainframes.

Boost's posix_time ptime is 64-bit on my 32-bit machine; it might be worth looking at if you need the extra helper functionality. This is some random testing code I wrote which gives a sense of the API flavour.

#include <ctime>
#include <iostream>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <boost/date_time/gregorian/gregorian.hpp>
using namespace boost::posix_time;
using namespace boost::gregorian;
using std::cout; using std::endl;

int main()
{
    cout << "now utc (micro) " << microsec_clock::universal_time() << endl;
    cout << "now utc " << second_clock::universal_time() << endl;
    cout << "now local " << microsec_clock::local_time() << endl;
    cout << "size of ptime " << sizeof( ptime ) << endl;
    ptime t2( date( 2001, 8, 21), hours( 11) + minutes( 57) + seconds( 1) );
    cout << "t2 " << t2 << endl;
    cout << "t2 + 1 min " << (t2 + minutes( 1)) << endl;
    tm td_tm = to_tm( t2 );             // convert to a C struct tm...
    time_t hh = mktime( &td_tm );       // ...and on to time_t
    time_duration td = hours(1) + seconds(10);
    td = milliseconds( 500 );           // reassignment (the original redefined td)
}

Quote:
Original post by Kylotan
Firstly, there's quite a difference between accumulating gradual errors on one hand, and seeing everything show up as zero or sixty on the other. They don't look like the same problem. Or if they are the same problem, there's an intermediate step making matters worse that has nothing really to do with the precision.
My test program demonstrates the problem -
1) store a large number in a float #1
2) store #1 plus a small number in float #2
3) measure the difference between #2 and #1 - it does not match the original small number.

It seems the larger the number that you put in #1, the less accurate these differences become. Eventually, when #1 gets big enough, every small number ends up being measured as the same small number.

Quote:
Secondly, the output you've shown is not the output from the test application you wrote! How can we know the values you show are reasonable?
Yeah, I couldn't be bothered putting nice formatting code into the test, so I used Excel to re-format it before posting here (so you could read it). I double-checked this data against the original output - it's the same, just more readable. If you still don't trust my reformatting skills, you could run the code yourself.

Quote:
Thirdly, if delta time is variable, then you have no baseline to compare against, so you can't make any assumptions about error; 7 days or 70 days down the line wouldn't matter.

Delta time is variable - it's the amount of time taken to perform logic updates and scene rendering. We don't use a fixed time-step.
If I graph the delta time, it usually looks like it's constantly varying by a small amount (like a Windows Task Manager graph) - a jagged line where every frame's value is slightly different. But after 6/7 days it begins to look like a square wave: the graph sits flat on one value, then jumps to the other and sits flat again.

In other words - my baseline is that under normal conditions, each frame takes a slightly different amount of time. In the error case, each frame takes one of only two possible time values. My test program provides an explanation for this which lines up with my observations.

Quote:
Original post by Adam_42
If you only need time differences between one frame and the next, do the time calculations using integers, and convert the delta to a float at the end. That will always be accurate.
This would be a good solution, but the amount of work required to redesign the game engine around such a different approach is prohibitive.
At the moment, everything is designed to work with absolute time values - if a particular component requires a delta value, it currently calculates it itself by storing and subtracting the previous absolute value. This means I'd have to change and re-test scores of classes.

Quote:
Original post by Victarus
A little late now, but a single "typedef" would save a lot of hassle here. While going through, it might be worth it to put it in, just in case you want to switch it back later. I find it also makes the code a little more clear -- "TickTime" or something similar is more descriptive than "float", even if they both operate the same.
Yeah, this is what I'm currently doing. It's still a *lot* of work to refactor all of the time-related code to use my new type though. I'm mostly done, but now I've got hundreds of compile warnings (for things like a float-based Vec3 being multiplied by a delta time), and I've broken the serialisation routines for a few classes (they're reading too many bits from the data streams).

Quote:
Even if you don't have a function in your OS that returns a non-floating-point value, you could at least use it as the value passed into the subsystems for greater accuracy in those sections -- just convert the returned value to an int before passing it in. Then you'd only have to change to a double in the main loop, since the values being used by the rest of the system are still in the same general range.
That could work... There are certain components that are more sensitive to incorrect timing than others, which I could dedicate my time to updating in this way.

[Edited by - Hodgman on August 11, 2008 7:15:00 PM]

Quote:
Original post by Hodgman
Quote:
Original post by Kylotan
Firstly, there's quite a difference between accumulating gradual errors on one hand, and seeing everything show up as zero or sixty on the other. They don't look like the same problem. Or if they are the same problem, there's an intermediate step making matters worse that has nothing really to do with the precision.
My test program demonstrates the problem -
1) store a large number in a float #1
2) store #1 plus a small number in float #2
3) measure the difference between #2 and #1 - it does not match the original small number.


Yes, that describes exactly the way floating point numbers work. Read The C++ FAQ, 29.16-18.

I've also found an annoying "feature" of DirectX. When creating a D3D object, it sets some kind of flag in the FPU forcing all floating-point calculations to be single-precision (unless you specifically tell D3D *not* to do this).
This meant that even after modifying the game engine to use doubles, everything still behaved as if it were using floats!

Does anyone have any info on this magic FPU flag?

Quote:
Original post by Numsgil
Yes, that describes exactly the way floating point numbers work. Read The C++ FAQ, 29.16-18.

Normally this isn't a problem though, because the imprecision is usually very small. It only becomes a problem when calculating the difference between two very large but slightly different numbers. In that case, the difference seems to converge to a small set of possible values.

Quote:

Normally this isn't a problem though, because the imprecision is usually very small. It only becomes a problem when calculating the difference between two very large but slightly different numbers. In that case, the difference seems to converge to a small set of possible values.

Or when comparing very small numbers. The great thing about ints is that they have the same precision/resolution over their entire range.

Quote:
Original post by Hodgman
Quote:
Secondly, the output you've shown is not the output from the test application you wrote! How can we know the values you show are reasonable?
Yeah, I couldn't be bothered putting nice formatting code into the test, so I used Excel to re-format it before posting here (so you could read it). I double-checked this data against the original output - it's the same, just more readable. If you still don't trust my reformatting skills, you could run the code yourself.

It's not that I don't trust your reformatting skills, just that it would be preferable to have data which can be compared directly to your test case to see if there's anything you overlooked.

But yeah, apparently once you approach 7 significant decimal digits, floats become less useful. That may be your problem, depending on how you're actually using the values. Abandoning the use of float-based absolute time when calculating your deltas seems the best idea.

As for the DirectX flag mangling thing, you might be after this function.
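For anyone hitting the same thing, a rough sketch of two common fixes, assuming a Direct3D 9 device and the MSVC runtime (the device-creation parameters below are placeholders):

#include <float.h>   // _controlfp_s, _PC_53, _MCW_PC (MSVC-specific)

// Option 1: stop D3D9 dropping the FPU to single precision in the first place,
// by adding D3DCREATE_FPU_PRESERVE to the behaviour flags at device creation:
//
//   d3d->CreateDevice( D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd,
//                      D3DCREATE_HARDWARE_VERTEXPROCESSING | D3DCREATE_FPU_PRESERVE,
//                      &presentParams, &device );
//
// Option 2: put the x87 control word back to double precision afterwards.
void RestoreDoublePrecision()
{
    unsigned int previousControl = 0;
    _controlfp_s( &previousControl, _PC_53, _MCW_PC );   // 53-bit significand = double
}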

Create some kind of time-value type that you typedef to double for now. Later, you can rework your system by creating a class that stores time values as fixed-point numbers of whatever precision you desire (or, alternatively, stores absolute time values as integers and delta time values as doubles); that way you can completely eliminate any precision problems. You can fix the warnings by defining custom operators for your time-value class that correctly handle multiplication with float vectors (by casting the results to floats, etc.).

And yes, floats (when stored in IEEE single-precision 32-bit format) have 23 bits of mantissa, so they can store almost 7 significant digits, since log10(2^23) ~= 6.92
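A rough sketch of the kind of wrapper being described (the name TimeValue and the choice of microseconds are purely illustrative):

#include <cstdint>

// Absolute time stored as integer microseconds: uniform resolution over the
// whole range, so precision never degrades with uptime.
class TimeValue
{
public:
    explicit TimeValue( uint64_t microseconds = 0 ) : us_( microseconds ) {}

    TimeValue& operator+=( const TimeValue& rhs ) { us_ += rhs.us_; return *this; }

    // Difference of two absolute times, handed back as float seconds.
    // The delta is small, so converting to float only at this point is safe.
    friend float operator-( const TimeValue& later, const TimeValue& earlier )
    {
        return float( later.us_ - earlier.us_ ) * 1e-6f;
    }

private:
    uint64_t us_;
};

The custom operators for the Vec3-times-delta case would then take the float delta, which keeps the conversion warnings away.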

You should be storing absolute time in an integer, whether it's in ms or ns.

Windows has the high-performance counter API, which uses a LARGE_INTEGER (64-bit) to store the tick count at whatever resolution the timer provides. I'm not sure if or how UNIX operating systems implement high-performance counters though.

Even if it's in counts of 10ns, a 64-bit counter gives you roughly 5,850 years of range - more than enough for your program...

It's the delta that you should be storing in floats. Unless you understand the accuracy of floats properly, your timer will never work. Using doubles will help, but when it's working properly, floats should still be fine.

Note: this code works on Windows with a VC++ compiler. It calculates delta-time in seconds (delta-count / counter-frequency).

#include <windows.h>
#include <cstdio>

int main()
{
    // Declare and initialise counters
    LARGE_INTEGER freq, old, current;
    QueryPerformanceFrequency( &freq );
    QueryPerformanceCounter( &current );
    old.QuadPart = current.QuadPart; // QuadPart is the whole int64

    bool app_running = true;

    // Application loop
    while (app_running)
    {
        /* perform frame code */

        // update counter
        QueryPerformanceCounter( &current );

        // get the delta in ticks (exact integer subtraction), then divide by the frequency as a double
        double dt = double (current.QuadPart - old.QuadPart) / double (freq.QuadPart);

        // copy current counter to old for next frame
        old.QuadPart = current.QuadPart;

        // print fps (1/dt)
        printf ("FPS: %.2f\n", 1 / dt);
    }
    return 0;
}
