Data type and performance

Started by
5 comments, last by _the_phantom_ 15 years, 12 months ago
As is well known, we can use different data types (int, float, double) to define vertex coordinates. The OpenGL driver/implementation keeps the scene description as a vertex/primitive list in its memory. The question is: do the memory consumption and performance of the OpenGL driver depend on the data type used (especially float vs. double)? Thanks in advance.
Hi,

I asked the same question a while ago while looking to reduce the memory footprint of my OpenGL engine:

Internally, the OpenGL driver converts all your data types into the type used by the GPU to perform T&L computations before sending the data to the GPU's VRAM. This means that if you use short integers for vertices and the GPU internally uses floats, the driver will convert your shorts into floats.
The most common example is colors: independently of how you store them, the driver will convert the R, G, B and alpha components to some kind of float.

As is common in the OpenGL architecture, the real behavior depends on the driver implementation and on the GPU architecture; however, it seems that the most common practice is the one described above.

I hope this can help.
Bye and thanks!
SiS. Professional Software Developer.
An addition to LordOrion's statements:

If the driver stores values as floats and converts other data types to floats (which it is likely to do), then providing the values as floats would save you some conversion overhead.
If I was helpful, feel free to rate me up ;) If I wasn't and you feel like rating me down, please let me know why!
If you want the fast path: as of a couple of years ago on NVIDIA, use GL_FLOAT for all data. For GL_COLOR you can also use unsigned bytes, as they're optimized too; ints/doubles etc. are discouraged.
Doubles are very much out; so much so that the GL 3.0 spec (assuming it ever appears) will be dropping them, as no hardware supports them.

For everything else, think alignment.

For example, if you only have RGB for colour, you either want to pad with an extra byte so that the data is properly aligned (32 bits in length vs. 24 bits) or expand up to 4 floats (generally, leaving it as padded bytes is the better solution).

IIRC, ints can be used on GF8 hardware, as it now supports ints natively.

However, in general, you'll want to stick to floats and bytes for data, using the smallest where you can.
Thank you for the answers, but they are more theoretical than practical.

Let me refine my question.

I have an OpenGL application that uses floats for internal data representation. Vertex coordinates are passed to OpenGL as floats too.

But the precision of the float data type is not enough for my data.
I see two ways of solving this problem:
(1) The simplest: just use the double data type, both for internal data and for OpenGL.
(2) The complex one: use some tricks to fit the data into the float data type.


In case (1), memory consumption and performance are not a problem for my internal data, but what about OpenGL?

Which of (1) and (2) is more acceptable for OpenGL?
As previously mentioned, OpenGL hardware does not deal in doubles. Any double data submitted will be converted to floating point before it hits the hardware.

This topic is closed to new replies.
