Int vs. Float Graphics Programming

Started by
7 comments, last by arnero 5 years, 6 months ago

Hello everyone,

While I do have a B.S. in Game Development, I am currently unable to answer a very basic programming question. In the eyes of OpenGL, does it make any difference if the program uses integers or floats? More specifically, characters, models, and other items have coordinates. Right now, I am very tempted to use integers for the coordinates. The two reasons for this are accuracy and perhaps optimized calculations. If multiplying two floats is more expensive for the CPU than multiplying two integers, then that is a strong reason not to use floats for the positions or vectors of game objects.

Please forgive my naiveté and my ignorance of the preferences of the GPU. I hope this thread becomes a way for us to learn how to program better as a community.

 

-Kevin

Warm regards,

Perez the Programmer


Then floats are the clear winner when the program communicates with OpenGL. Thank you for opening my eyes to this fact.

Warm regards,

Perez the Programmer

36 minutes ago, PerezPrograms said:

Right now, I am very tempted to use integers for the coordinates

For internal usage as well... this is not a good idea in many cases, due to e.g. movement speed and timesteps. You might convert/snap positions to be aligned to a grid for display or logical checks, but it's useful to keep the raw position free of more rounding and inaccuracy than needed.
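A minimal sketch of that idea (the function name and `grid_size` parameter are made up for illustration): keep the raw position as a float and only snap a derived copy for display or logic checks.

```cpp
#include <cmath>

// Keep the raw float position untouched; snap only a derived copy.
// grid_size is the spacing of the logical grid.
float snap_to_grid(float raw_position, float grid_size) {
    return std::round(raw_position / grid_size) * grid_size;
}
```

The raw position keeps accumulating fine-grained movement each timestep; the snapped value is recomputed from it on demand, so rounding never feeds back into the simulation.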

Hello to all my stalkers.

GPUs traditionally couldn't do integer math at all - only floating-point and some fixed-function fixed-point HW!

It's only recently that GPUs have become competent at integer math. In some situations, fixed-point math on 32-bit integers can be more precise than 32-bit float math, so some engines actually do use 4x4 transformation matrices of integers instead of floats... But this is definitely a "weird", out-of-the-box approach and not something that's normal.

Incidentally I thought about the same question for the last few days. The following answer is what I came up with:

2 hours ago, fleabay said:

OpenGL only knows values between -1 and 1 in coordinate space so that gives you 3 ints to work with.

 

While it is true that there are only 3 integer values between -1 and 1, it is still possible to get a much higher resolution. Just think of UNORM and SNORM: UNORM means integers are interpreted as normalized values in the range [0, 1], and SNORM in the range [-1, 1]. Using this, you can subdivide the range into 2^32 parts with a 32-bit integer, not just 3.
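A small sketch of the UNORM idea, using 16 bits rather than 32 so the float conversion stays exact (the function names are made up):

```cpp
#include <cstdint>

// UNORM16: map [0, 1] onto 2^16 integer steps.
uint16_t float_to_unorm16(float x) {
    if (x < 0.0f) x = 0.0f;   // clamp: UNORM cannot represent
    if (x > 1.0f) x = 1.0f;   // anything outside [0, 1]
    return (uint16_t)(x * 65535.0f + 0.5f);
}

float unorm16_to_float(uint16_t u) {
    return u / 65535.0f;
}
```

The clamp in the encoder is exactly the limitation discussed next: everything outside the normalized range collapses onto its endpoints.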

The big problem is that with this you cannot represent values outside of the range. Therefore, representing positions with UNORM/SNORM integers does not make much sense, as you are restricted to a very narrow range.

Another more sensible way would be to split one distance unit into smaller ones. For example, you can split one meter into 1 million parts. That creates a very fine-grained grid of possible positions; snapping to the grid will not really be noticeable. With 32-bit numbers you get a grid of only about 4.3 km per axis (2^32 millionths of a meter), but with 64-bit integers it is enormous (about 10^10 km per axis). At first glance that sounds fine.
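A minimal sketch of that fixed-point convention (the constant and function names are hypothetical):

```cpp
#include <cstdint>

// Hypothetical fixed-point position unit: 1 meter = 1,000,000 units.
const int64_t UNITS_PER_METER = 1000000;

int64_t meters_to_units(double meters) {
    return (int64_t)(meters * (double)UNITS_PER_METER);
}

double units_to_meters(int64_t units) {
    return (double)units / (double)UNITS_PER_METER;
}
```

With int64 storage the representable span is 2^64 units, which is the roughly 10^10 km per axis mentioned above.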

The problem with integers is that you want them to be fine-grained, but you still want to be able to calculate with them, and the bigger the numbers, the bigger the problems. Take floating-point numbers as an example: adding a very small number to a very large number may not change the large number at all. Nonetheless, floating-point numbers work well if you are careful.
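The "addition changes nothing" effect is easy to demonstrate: 2^24 is the last integer from which a 32-bit float can still step by exactly 1.

```cpp
// Returns true when adding `small` to `large` is lost entirely
// to float rounding.
bool small_add_is_lost(float large, float small) {
    return (large + small) == large;
}
```

For example, 16777216.0f (2^24) plus 1.0f rounds straight back to 16777216.0f, while 16777215.0f plus 1.0f is still exact.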

While floating-point numbers merely get inaccurate if there is a large discrepancy between the two operands, integers fail completely if you multiply two large numbers.

Let's explain with a little example: with the previously defined grid, let us calculate the distance between two points. The difference vector is (1*10^6, 2*10^6, 0), which corresponds to (1 meter, 2 meters, 0 meters). We now have to calculate the square root of (10^6)^2 + (2*10^6)^2 + 0, which is 5*10^12. That is approximately 2^42, which cannot be represented in a 32-bit number; the intermediate result wraps around after 32 bits, and the resulting square root is some completely unpredictable value. This effectively limits us to 16-bit coordinate values, which are not very accurate. If there is ever the need to multiply 3 numbers of that scale, we already need about 60 bits to represent the result, which nearly overflows a 64-bit integer, and our points are just a few meters apart, nearly at the center of the world. With bigger numbers the problem only gets worse: a 1 km distance means a squared value of approximately 2^60, and 10 km means about 2^66.
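The wraparound can be sidestepped by widening the operands before multiplying; a minimal sketch (the function name is made up):

```cpp
#include <cstdint>

// Squaring 10^6 (one meter in the micro-unit grid) overflows int32:
// the product needs ~40 bits. Widening to int64 before multiplying
// keeps the squared length exact.
int64_t squared_length_2d(int32_t dx, int32_t dy) {
    return (int64_t)dx * dx + (int64_t)dy * dy;
}
```

For the vector above this yields 5,000,000,000,000 exactly, instead of a wrapped-around 32-bit garbage value.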

Switching to 64-bit values helps a bit, but that only gives us safe operands of up to about 32 bits. Another possibility is to widen to 128-bit numbers for each calculation and narrow back down to 64 bits at the end. This can work; x86-64 does in fact have a full 64-bit multiply (the one-operand MUL and IMUL instructions return the 128-bit product in the RDX:RAX register pair), but general 128-bit arithmetic is more complicated, especially on 32-bit systems. A 128-bit multiplication might still be competitive with a double-precision floating-point multiplication, but every multiplication might overflow our value, which might spell disaster.
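A minimal sketch of the widen-then-narrow idea, assuming a GCC/Clang-style compiler that provides the nonstandard `__int128` type (the function name is made up):

```cpp
#include <cstdint>

// Compute (a * b) / d without overflowing the intermediate product,
// by widening to 128 bits for the multiply. Requires GCC/Clang's
// nonstandard __int128; the final result must still fit in int64.
int64_t mul_div(int64_t a, int64_t b, int64_t d) {
    return (int64_t)(((__int128)a * b) / d);
}
```

For example, 4*10^9 times 4*10^9 is 1.6*10^19, which overflows int64, yet dividing the 128-bit intermediate by 10^9 brings the result safely back into range.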

TLDR: I would say the problem in floating-point calculations is a lack of precision, which is not so bad.
The problem in integer calculations is completely wrong results because of overflows, which is very bad.

7 hours ago, PerezPrograms said:

In the eyes of OpenGL, does it make any difference if the program uses integers or floats?

Older hardware (the very old hardware in use at the beginning of GL and before it) had very slow floating-point processing, or no hardware floating-point support at all. So for graphics applications, fixed point, represented under the hood by integers, was widely used. Also, fixed point has an absolute error while floating point has a relative error. So ints better fit the needs of precise calculations with a limited maximum magnitude, because all bits store the value, while floats spend part of their bits on an exponent (which is pointless for calculations with a fixed absolute error), leaving fewer mantissa bits than an int of the same size has. For example, if you need to move something strictly by 1 mm using an int32, you get 2^32 precise steps, but 2^32 + 1 steps causes an overflow. A 32-bit float, which uses 8 bits for the exponent and 24 bits for the mantissa, obviously cannot exceed 2^24 precise steps, while its total range, at reduced precision, is much larger; and the acceptable absolute error usually grows with the magnitude of the measured value. For example, when you measure a 10 mm distance, a 0.1 mm error can be significant, but when you measure a 10 km distance, even a 10 mm error is usually insignificant, and you can store both values as floats. With fixed point you would need different storage formats, processed with bookkeeping of where the fixed point is placed, and an operation on two fixed-point values cannot be performed with a better absolute error than the worse of the operands' absolute errors without increasing the storage size. Floating point simply stores the position of the point for each value and, on every operation, uses the absolute error that fits the current storage, which makes the error relative to the magnitude of the result. Really old hardware used an 80-bit internal representation for all interoperations on 32-, 64-, or 80-bit operands and then cut values down to the required storage precision (done by fast bitwise and shift operations).
Some modern hardware can additionally operate on 128-bit floats, while the result of a 64-bit integer multiplication does not fit into a 64-bit register, so it has to be returned in a pair of registers and may overflow 64-bit storage (on 8-, 16-, and 32-bit architectures it was the same problem with integer products).
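The absolute-versus-relative-error contrast above can be sketched in a few lines (the function name is made up): int32 millimeters step exactly by 1 mm over their whole range of about +/- 2147 km, whereas a 32-bit float loses exact 1 mm steps beyond 2^24 mm, roughly 16.8 km.

```cpp
// How much does the stored value actually change when we try to
// add 1 mm at this magnitude? Returns 0 when the step is lost.
float relative_step(float value_mm) {
    float next = value_mm + 1.0f;
    return next - value_mm;
}
```

At 1 m (1000 mm) the step is still exactly 1; at 20 km (2*10^7 mm) the addition of 1 mm rounds away completely.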

2 hours ago, Cararasu said:

I would say the problems in floating point calculations is the lack of precision which is not so bad.
Problems in integer calculations are completely wrong results because of overflows which is very bad.

Integers (and the fixed point built on them) are intended for calculations with a limited absolute error and low limits on value magnitude, while floating point is intended for much larger magnitude ranges where a relative error is acceptable. Problems usually begin when something is used for purposes other than what it was intended for.

#define if(a) if((a) && rand()%100)

Rotation => sine and cosine => rounding errors
Regardless of fixed or floating point.

Multiplication: the calculations of the exponent and the mantissa run in parallel.

Addition:

This seems so messy for floating point that I would implement a deferred sum, where all summands are sorted first.
Propagation of the carry bit: ADD after ADD is no problem, but CMP stalls the pipeline.

The z-buffer is integer-based. The near and far clipping planes are needed to clip off values beyond the integer range.
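A hedged sketch of that integer quantization, assuming a 24-bit depth buffer (a common but by no means universal format; the function name is made up):

```cpp
#include <cstdint>

// Map a normalized depth in [0, 1] onto the 2^24 integer levels of
// a 24-bit depth buffer; values outside [0, 1] must be clipped
// (here: clamped) or they would wrap the integer range.
uint32_t quantize_depth24(double z01) {
    if (z01 < 0.0) z01 = 0.0;
    if (z01 > 1.0) z01 = 1.0;
    return (uint32_t)(z01 * 16777215.0 + 0.5);
}
```

The clamp stands in for what the near and far planes do geometrically: they guarantee no surviving fragment produces a depth outside the representable integer range.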

I still wonder if the guard band is needed due to rounding errors. Using float64 and those near and far clipping planes, it should be no problem to cut every tri in the space between the outermost pixel of the screen and the first pixel outside of the screen. Using a guard band is like going from QuickSort to BubbleSort, from O(log n) to O(n). With good hardware this should never give a benefit (assuming the switch costs more than just doing the logarithmic work). With ISA limitations on the CPU there may be a threshold of around 10 pixels where one would switch.

>some engines actually do use 4x4 transformation matrices of integers
I like this.
Also, artificial neural network matrix hardware did this with much larger matrices. Did that stuff not just have a comeback?

This topic is closed to new replies.
