airatsa

OpenGL Data type and performance


As is well known, we can use different data types (int, float, double) to define vertex coordinates. The OpenGL driver/implementation keeps the scene description as a vertex/primitive list in its memory. The question is: do the memory consumption and performance of the OpenGL driver depend on the data type used (especially float vs. double)? Thanks in advance.

Hi,

I asked the same question some time ago while looking to reduce the memory footprint of my OpenGL engine:

Internally, the OpenGL driver converts all your data types into the type the GPU uses for T&L computations before sending them to the GPU's VRAM. This means that if you use short integers for vertices and the GPU internally uses floats, the driver will convert your shorts into floats.
The most common example is colors: independently of how you store them, the driver will convert the R, G, B and alpha components to some kind of float.

As is common in OpenGL, the real behavior depends on the driver implementation and on the GPU architecture; however, the practice described above seems to be the most common one.
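To make this concrete, here is a minimal sketch (my own example, assuming a classic GL 1.x/2.x client-side array setup). Both variants end up as floating-point colors on the GPU; the unsigned-byte version just uses a quarter of the client-side memory in exchange for a small conversion:

#include <GL/gl.h>

/* Four RGBA colors as unsigned bytes: 4 bytes per color. */
static const GLubyte colors_u8[4 * 4] = {
    255,   0,   0, 255,    /* red   */
      0, 255,   0, 255,    /* green */
      0,   0, 255, 255,    /* blue  */
    255, 255, 255, 255     /* white */
};

/* The same colors as floats: 16 bytes per color. */
static const GLfloat colors_f32[4 * 4] = {
    1.0f, 0.0f, 0.0f, 1.0f,
    0.0f, 1.0f, 0.0f, 1.0f,
    0.0f, 0.0f, 1.0f, 1.0f,
    1.0f, 1.0f, 1.0f, 1.0f
};

void set_color_array(int use_bytes)
{
    glEnableClientState(GL_COLOR_ARRAY);
    if (use_bytes)
        /* The driver normalizes 0..255 to 0.0..1.0 floats internally. */
        glColorPointer(4, GL_UNSIGNED_BYTE, 0, colors_u8);
    else
        glColorPointer(4, GL_FLOAT, 0, colors_f32);
}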

I hope this can help.

An addition to LordOrion's statements:

If the driver stores values as floats and converts other data types to floats (which it is likely to do), then providing the values as floats would save you some conversion overhead.
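A minimal sketch of that point (my example, assuming an old-style fixed-function context): because the positions are already GLfloat, the driver can take them as-is, with no per-vertex conversion on submit:

#include <GL/gl.h>

/* A triangle with positions stored as GLfloat, the type the driver
   most likely uses internally, so nothing needs converting. */
static const GLfloat positions[3 * 3] = {
    -0.5f, -0.5f, 0.0f,
     0.5f, -0.5f, 0.0f,
     0.0f,  0.5f, 0.0f
};

void draw_triangle(void)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, positions);   /* GL_FLOAT: no conversion */
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableClientState(GL_VERTEX_ARRAY);
}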

If you want the fast path, then (as of a couple of years ago, on NVIDIA hardware) use GL_FLOAT for all data. For colors you can also use unsigned bytes, as they are optimized too; ints, doubles, etc. are discouraged.

Doubles are very much out; so much so that the GL3.0 spec (assuming it ever appears) will be dropping them, as no hardware supports them.

For everything else, think alignment.

For example, if you only have RGB for colour, you either want to pad with an extra byte so that the data is properly aligned (32 bits in length vs. 24 bits) or expand up to four floats (generally, leaving it as bytes is the better solution; see the sketch at the end of this post).

IIRC, ints can be used on GF8 hardware, as it now supports ints natively.

However, in general, you'll want to stick to floats and bytes for data, using the smallest where you can.
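To illustrate the alignment advice, here is a hypothetical interleaved layout (my sketch, not anything mandated by the spec): three floats for position plus four unsigned bytes for colour gives a 16-byte vertex, so every attribute stays 4-byte aligned:

#include <GL/gl.h>

/* 12 bytes of position + 4 bytes of colour = 16 bytes per vertex.
   The fourth colour byte doubles as the padding byte if you only
   actually need RGB. */
typedef struct Vertex {
    GLfloat x, y, z;       /* 12 bytes */
    GLubyte r, g, b, a;    /*  4 bytes (a = alpha, or just padding) */
} Vertex;

void set_pointers(const Vertex *verts)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glVertexPointer(3, GL_FLOAT, sizeof(Vertex), &verts[0].x);
    glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(Vertex), &verts[0].r);
}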

Thank you for the answers, but they are more theoretical than practical.

Let me refine my question.

I have an OpenGL application that uses floats for internal data representation. Vertex coordinates are passed to OpenGL as floats too.

But the precision of the float data type is not enough for my data.
I see two ways of solving this problem:
(1) The simplest: just use the double data type, both for internal data and for OpenGL.
(2) The complex one: use some tricks to fit the data into the float data type.


In case (1), memory consumption and performance are not a problem for the internal data, but what about OpenGL?

Which of (1) and (2) is more acceptable for OpenGL?
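For what it's worth, one common flavour of option (2) (my own sketch, not taken from the replies above) is to keep full double precision on the CPU and upload positions relative to a nearby local origin, so the floats only have to cover a small range:

#include <GL/gl.h>

/* CPU-side data kept in full double precision. */
typedef struct { double x, y, z; } Vec3d;

/* Rebase double-precision points around a local origin and write them
   out as floats. Precision survives because the offsets stay small
   even when the absolute coordinates are huge. */
void rebase_to_floats(const Vec3d *in, int count,
                      Vec3d origin, GLfloat *out)
{
    for (int i = 0; i < count; ++i) {
        out[i * 3 + 0] = (GLfloat)(in[i].x - origin.x);
        out[i * 3 + 1] = (GLfloat)(in[i].y - origin.y);
        out[i * 3 + 2] = (GLfloat)(in[i].z - origin.z);
    }
    /* At draw time, put the origin back via the modelview matrix,
       e.g. with glTranslated(), computing the camera-relative offset
       in doubles before it is truncated to floats. */
}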

As previously mentioned: OpenGL hardware does not deal in doubles. Any 'double' data submitted will be converted to single-precision floating point before it hits the hardware.
