glColor3ub, who uses it?

Until recently I always used floats for colors, but I've now started using GLubyte for them instead. This will naturally save me some space if I have to store a color per vertex. Unless you're doing HDR, ubytes should be enough. What are you guys using, float or ubyte?
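For reference, a minimal sketch of the two alternatives being discussed (the color value is just an illustration):

#include <GL/gl.h>

/* Roughly the same orange submitted two ways; the values are illustrative only. */
void color_as_floats(void) { glColor3f(1.0f, 0.5f, 0.0f); }
void color_as_ubytes(void) { glColor3ub(255, 128, 0); }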
I use floats when using OpenGL. Everybody uses floats, so it's sort of a standard. I don't know how the GPU handles color data, but I think it's floats. So, when using ubytes, the GPU (or the OpenGL lib) first has to translate the ubytes into floats, and that costs time. I am not sure, but you could write a benchmark program that uses ubytes and floats and then compare the speed. You should try 16-bit, 24-bit and 32-bit resolutions to get the big picture.
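A rough sketch of the kind of benchmark being suggested (it assumes a GL context is already current; the iteration count and function names are made up):

#include <GL/gl.h>
#include <time.h>

#define ITERATIONS 1000000  /* arbitrary count, for illustration only */

/* Time N immediate-mode color submissions as floats. A GL context must be current. */
double time_float_colors(void)
{
    clock_t start = clock();
    glBegin(GL_POINTS);
    for (int i = 0; i < ITERATIONS; ++i) {
        glColor3f((float)(i & 0xFF) / 255.0f, 0.5f, 0.25f);
        glVertex2f(0.0f, 0.0f);
    }
    glEnd();
    glFinish();  /* let the driver/GPU finish before stopping the clock */
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}

/* Same loop, but submitting the colors as unsigned bytes. */
double time_ubyte_colors(void)
{
    clock_t start = clock();
    glBegin(GL_POINTS);
    for (int i = 0; i < ITERATIONS; ++i) {
        glColor3ub((GLubyte)(i & 0xFF), 128, 64);
        glVertex2f(0.0f, 0.0f);
    }
    glEnd();
    glFinish();
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}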

The memory savings you get from using ubytes are negligible compared to the (assumed) loss of speed in translating ubytes to floats.

I go with the flow and use floats. You should not try to fix something that isn't broken.

-richardo
As RichardoX hinted at, the GPU ONLY understands floats when it comes to data. I think indices are the only exception to this rule; everything else is technically a 32-bit float.
Using something other than floats loses you speed, probably more than you'll gain memory-wise from packing into bytes.

At the end of the day, it's up to you what you use; just don't come crying to us if it goes slow [wink]
Unsigned bytes for colors are supported without a speed hit.

Also, since the GF1 you can use shorts (16-bit ints) for all components without a speed hit. The current terrain renderer uses shorts for the vertex position and the normals; the rest gets calculated in a simple vertex shader. This brings the size down to a comfy 16 bytes per vertex, with no speed hit versus the full float precision I used before.
(Not sure when the Radeons started supporting it, but it's there in the R300 and up; dunno about the R200 and earlier.)
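One way such a 16-byte, short-based vertex could be laid out (this layout is my guess, not the poster's actual format):

#include <GL/gl.h>

/* Hypothetical 16-byte terrain vertex built from shorts, as described above.
 * The exact fields are an assumption; the real layout isn't shown in the post. */
typedef struct {
    GLshort x, y, z, w;        /* position: 8 bytes (w unused, or a scale factor) */
    GLshort nx, ny, nz, pad;   /* normal:   8 bytes, padded to stay 4-byte aligned */
} TerrainVertex;               /* 16 bytes per vertex */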
Converting an unsigned byte to a float is a simple array lookup. Hardly a speed hit. Make a 256-entry array of floats and initialize it with Array[i] = (float)i / 255.0f.
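A minimal sketch of that table, assuming the usual 0..255 to 0.0..1.0 mapping:

/* Lookup table mapping unsigned-byte color values to floats. */
static float ubyte_to_float[256];

void init_color_lut(void)
{
    for (int i = 0; i < 256; ++i)
        ubyte_to_float[i] = (float)i / 255.0f;
}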

I use unsigned bytes for most everything because I am very bandwidth-limited, and the less I have to send to the card every frame, the better.
Waramp. Before you insult a man, walk a mile in his shoes. That way, when you do insult him, you'll be a mile away, and you'll have his shoes.
For colours ubytes are sufficient, and they take less memory, save bandwidth and don't incur a performance penalty, contrary to what some people have said. You really should use FOUR ubytes (add alpha, in other words) though; that's what the hardware supports natively in most cases, and the extra byte is normally free due to padding.
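A sketch of what that four-ubyte layout could look like (field names are mine, not from the post):

#include <GL/gl.h>

/* Interleaved vertex with a four-ubyte RGBA color; the alpha byte keeps the
 * struct 4-byte aligned, so it costs nothing extra. */
typedef struct {
    GLfloat x, y, z;      /* position: 12 bytes */
    GLubyte r, g, b, a;   /* color:     4 bytes */
} ColoredVertex;          /* 16 bytes total */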

In the future, before making performance claims, PROFILE.
*shrugs*
The info I got was from the OpenGL performance tuning PDF (see sig), page 36.

But hey, if you think you know more than NV/ATI, carry on [smile]

Edit:
As a side note, I guess you could say that if you are using glColor3ub then data transfer size is the least of your worries, as you are starving the GPU of data anyway, so you might as well send floats; the performance difference between hand-feeding a float and hand-feeding ubytes is... errm... well... non-existent, as people have already said...

Certainly true. However, I might convert my pipeline to use ubytes. I assume you can use them with vertex arrays and VBOs as well.
Yes, you can, but doing so is a "VBO Don't" for performance reasons (see my message above pointing to the PDF and page).
Unsigned bytes are a fast path for colors only, and this also includes VBOs. Using them for anything other than the primary/secondary color attribute will slow things down.
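For illustration, a sketch of ubyte colors going down that path in a VBO (the struct and function names are assumptions, not from the thread):

#include <GL/gl.h>

/* Assumed interleaved layout: float position + four-ubyte RGBA color. */
typedef struct {
    GLfloat x, y, z;
    GLubyte r, g, b, a;
} ColoredVertex;

/* Point the fixed-function arrays at an already-filled VBO. GL 1.5 entry points
 * are assumed; on older drivers the ARB-suffixed equivalents would be used. */
void setup_arrays(GLuint vbo)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glVertexPointer(3, GL_FLOAT, sizeof(ColoredVertex), (const void *)0);
    /* Color components are normalized automatically: 0..255 maps to 0.0..1.0. */
    glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(ColoredVertex),
                   (const void *)(3 * sizeof(GLfloat)));
}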

P.S. The PDF you link to says not to use non-native types like doubles or (u)bytes for normals. It never mentions using ubytes for colors as a bad thing ;)
