Questions about storage types

Started by Mr Lane
10 comments, last by XTC 18 years, 8 months ago
Hi, I am writing a CSG map editor and I have come to a dilemma. All my CSG brushes are stored with int coordinates. I have been trying to avoid floating-point coordinates for all my world objects as I am always suspicious of rounding errors, but now I am implementing a sprite which represents where the 3D camera is in the world on the 2D views. The problem is, the camera position is determined by how the user is moving through the world with the mouse, and this needs to be smooth, not integer (at this point I actually don't keep track of the camera's ("the OpenGL camera") coordinates in the world, but I will need to when I want to put in a camera path system). I have a base class of *all* entities, with 3 int values for world coordinates, but now I need to change this to double.

My question is, is it possible for rounding errors to creep into my data if I have all the geometry stored as doubles, but everything is snapped to an integer grid? I guess what I am saying is that, if I have an object at x=0.0, y=0.0, z=0.0, and only move objects by *whole values*, but perform all computations in double, e.g. 0.0+16.0, is it possible that rounding errors may still creep in because of the way CPUs work out floating-point maths, and I end up with something like 16.000000000000001? Maybe this is a stupid question, but I am always very wary of using doubles or floats.

Also, at university we pretty much only ever used int, float and occasionally double in projects, either signed or unsigned. Are there any compelling reasons to use short and long? I know what they are, but should I bother to think about whether I should use a long or short instead of just using int?

Thanks in advance.
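To make the concern concrete, here is the sort of toy test I have in mind (just an illustration; the loop count and step value are made up):

#include <cstdio>

int main () {
    // Start at the origin and move by whole values only, the same way
    // my grid-snapped brushes would move, then check for drift.
    double x = 0.0;
    for ( int i = 0; i < 1000000; ++i ) {
        x += 16.0; // move by a whole value
        x -= 16.0; // and back again
    }
    std::printf( "x = %.17f, exactly zero: %d\n", x, x == 0.0 );
    return 0;
}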
Quote:My question is, is it possible for rounding errors to creep into my data if I have all the geometry stored as doubles, but everything is snapped to an integer grid? I guess what I am saying is that, if I have an object at x=0.0, y=0.0, z=0.0, and only move objects by *whole values*, but perform all computations in double, e.g. 0.0+16.0, is it possible that rounding errors may still creep in because of the way CPUs work out floating-point maths, and I end up with something like 16.000000000000001?


Yes, but would it matter? If yes, would an acceptable solution be to "correct" the value? E.g.:

#include <cmath>
using std::round;

int main () {
    double z = 12.0;
    z += 1.0;
    // round() pulls 13.1 or 12.9 back to exactly 13.0. Even if some
    // intermediate math picks up a tiny rounding error, it won't add up:
    // z won't drift towards something like 12.5 if we keep doing this.
    z = round( z );
}
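In your editor that might end up as a little snap-to-grid helper applied after each drag (just a sketch; snapToGrid and the grid size are made up, not from your code):

#include <cmath>

// Hypothetical helper: pull a coordinate back onto the integer grid
// after whatever double math the mouse movement produced.
double snapToGrid( double value, double gridSize ) {
    return std::round( value / gridSize ) * gridSize;
}

// e.g. brushes get re-snapped after a drag, the camera sprite doesn't:
// brush.x = snapToGrid( brush.x + dragDelta, 16.0 );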

Quote:Also, at university we pretty much only ever used int, float and occasionally double in projects, either signed or unsigned. Are there any compelling reasons to use short and long?


On occasion, when you're strapped for memory, using short (2 bytes on most systems, versus the 4 bytes most systems use for an int) will reduce the memory overhead in some situations.

Long on 32-bit systems is usually the same size as an int, so using it is not terribly common. That said, some 64-bit compilers make long an 8-byte type while int stays at 4 bytes, so using long instead may be worthwhile there...
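If you ever need to know the sizes for sure on a given compiler, a quick sizeof dump settles it (trivial sketch; the standard only guarantees the relative ordering, not the exact byte counts):

#include <cstdio>

int main () {
    // Guaranteed: sizeof(short) <= sizeof(int) <= sizeof(long).
    // The exact values depend on the compiler and platform.
    std::printf( "short: %u bytes\n", (unsigned)sizeof( short ) );
    std::printf( "int:   %u bytes\n", (unsigned)sizeof( int ) );
    std::printf( "long:  %u bytes\n", (unsigned)sizeof( long ) );
    return 0;
}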

Quote:I know what they are, but should I bother to think about whether I should use a long or short instead of just using int?


In most cases? My two cents: nah. I use unsigned int for i = [0..3) in loops :-). It'll end up in a register often enough anyways...

Quote:Thanks in advance.


No problem :-).
Quote:Is it possible that rounding errors may still creep in because of the way CPUs work out floating-point maths, and I end up with something like 16.000000000000001?

As far as I know: no. If you must be sure then do:
float fa = 1.0f;
float fb = 16.0f;
float fc = (float)((int)fa + (int)fb);

But to cure a bit of your fear of rounding errors: would it hurt so much if the object turned out to be at 16.000000000000001?

Quote:Should I bother to think about whether I should use a long or short instead of just using int?

If there are no bells ringing in your head, just use int. There are of course cases where the other types apply (e.g. memory alignment, other size-related issues, reading from files), but you will learn to identify those cases. Perhaps one thing to consider is what size of type you need -- but this is more a question of what is optimal.

I do, though, feel you should think about signed/unsigned. Using a signed number to reflect a quantity, for example, seems very strange to me (and it wastes half of the bit space).

Greetz,

Illco
Quote:Original post by MaulingMonkey
Yes, but would it matter? If yes, would an acceptable solution be to "correct" the value? E.g.:

#include <cmath>
using std::round;

int main () {
    double z = 12.0;
    z += 1.0;
    // round() pulls 13.1 or 12.9 back to exactly 13.0. Even if some
    // intermediate math picks up a tiny rounding error, it won't add up:
    // z won't drift towards something like 12.5 if we keep doing this.
    z = round( z );
}
Yeah, it's the drift that I am worried about, but this will stop it from happening. I should take the time to learn more of the stuff in the std libs.

Thanks a bunch. :)

Also, note that even if you end up with 16.0000001 from 16.0, it will be a uniform discrepancy, so everything set to 16.0 will equal 16.0000001. Clearly not a real big problem, is it? I assume you were worried about cracks and stuff between geometry and things like that?
Free speech for the living, dead men tell no tales, Your laughing finger will never point again... Omerta! Sing for me now!
That sort of thing, yes. Also, when doing CSG cuts, slight errors can really screw things up if you don't protect against them. That's why I used int, but I guess now I have to just be careful and round away the errors.
Quote:Original post by Mr Lane
Yeah, it's the drift that I am worried about, but this will stop it from happening. I should take the time to learn more of the stuff in the std libs.

Thanks a bunch. :)


Glad to be of help :-). TBH I had to check myself: I looked at the man page for "floor" (the only double-rounding function I could remember off the top of my head), then found round under "See Also..." :-).

I did just notice, however, that it's part of C99 (from my manpage's "Conforms to..." section), whereas C++ branched off of C89 AFAIK... so I'm not sure if it's part of the standard C++ libs or not :-(. If it isn't, you can make your own:

double foo = 3.1;

// note: IIRC, conversion to int truncates towards zero, so just adding 0.5
// would break with negatives...
foo = double(int(foo + 0.5));

// ...to handle negatives as well you'd use:
foo = double(int(foo + ( (foo > 0.0) ? (+0.5) : (-0.5) ) ));

There may be a more efficient version of the above; I'm not sure myself.
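Wrapped up as a function, it would look something like this (my own sketch; only valid for values that fit in an int):

// Hand-rolled round-to-nearest that also handles negatives,
// since plain int() conversion truncates towards zero.
double myRound( double value ) {
    return ( value > 0.0 ) ? double( int( value + 0.5 ) )
                           : double( int( value - 0.5 ) );
}

// myRound(  12.9 ) ->  13.0
// myRound( -12.9 ) -> -13.0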
I personally use double instead of float in general, except when performance suffers too much from it or when the extra precision is clearly unneeded.

Since long is usually the same size as int on 32-bit systems, I rarely use long, apart from using it as a mental marker that "this will/may contain a LARGE number".

I usually only use short in order to save bytes in structs intended for disc or network IO, or as bitfields in special circumstances.
"Using a signed number to reflect a quantity, for example, seems very strange to
me (and it wastes half of the bit space)."

half a bit? doesnt it waste a whole bit? whatever.

anyways,

Doubles can represent all 32-bit integer values exactly, as far as I know (a single-precision float only has about 24 bits of mantissa, so it loses exact integers around 2^24). Even if rounding errors happen, they shouldn't be important unless they add up into a problem (e.g. you multiply your camera's matrix by a rotation matrix as the player looks around, instead of recalculating it every frame).
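If you want to see where a float runs out, try something like this (2^24 + 1 is the first integer a 32-bit float can't store exactly):

#include <cstdio>

int main () {
    // 16777217 = 2^24 + 1: a float's ~24-bit mantissa can't hold it,
    // a double's 53-bit mantissa can.
    float  f = 16777217.0f;
    double d = 16777217.0;
    std::printf( "float : %.1f\n", (double)f ); // prints 16777216.0
    std::printf( "double: %.1f\n", d );         // prints 16777217.0
    return 0;
}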

In most hardware-accelerated OpenGL implementations everything is converted to floats anyway. My GeForce's pixel shaders use 16-bit floats instead of the normal 32-bit floats (a double is 64-bit).

Using shorts and chars saves memory, and less memory means less bandwidth consumption.
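For example, with a made-up point struct for file or network IO:

#include <cstdio>

// Made-up example: a grid-snapped point written to disc or the network.
struct PointInt   { int   x, y, z; }; // typically 12 bytes
struct PointShort { short x, y, z; }; // typically 6 bytes

int main () {
    std::printf( "int point:   %u bytes\n", (unsigned)sizeof( PointInt ) );
    std::printf( "short point: %u bytes\n", (unsigned)sizeof( PointShort ) );
    return 0;
}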
Quote:
"Using a signed number to reflect a quantity, for example, seems very strange to me (and it wastes half of the bit space)."

half a bit? doesn't it waste a whole bit? whatever.

You are indeed only discarding the one sign bit, but omitting one bit divides your value space in half: every bit pattern possible with the sign bit set to one has a counterpart with the sign bit set to zero. Hence half the space is wasted.
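Concretely, on a typical system where short is 16 bits, the same bits give you these ranges (limits from <climits>):

#include <climits>
#include <cstdio>

int main () {
    // Signed: half the patterns are spent on negative values.
    std::printf( "short:          %d .. %d\n", SHRT_MIN, SHRT_MAX );   // -32768 .. 32767
    std::printf( "unsigned short: 0 .. %u\n", (unsigned)USHRT_MAX );   // 0 .. 65535
    return 0;
}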

This topic is closed to new replies.
