rick_appleton

glColor3ub, who uses it?


Until recently I always used floats for colors. However, I recently started using GLubyte for them. This will naturally save me some space if I have to store a color per vertex. Unless you're doing HDR, the ubyte should be enough. What are you guys using, float or ubyte?

I use floats when using OpenGL. Everybody uses floats, so it's something of a standard. I don't know how the GPU handles color data, but I think it's floats, so when you use ubytes the GPU (or the OpenGL library) first has to translate the ubytes into floats, and that costs time. I'm not sure, though; you could write a benchmark program that uses ubytes and floats and compare the speed. You should try 16-bit, 24-bit and 32-bit display resolutions to get the big picture.

The memory saving you get from ubytes is meaningless compared to the (assumed) loss of speed in translating ubytes to floats.

I go with the flow and use floats. You should not try to fix something that isn't broken.

-richardo

As RichardoX hinted at, the GPU ONLY understands floats when it comes to data. I think indices are the only exception to this rule; everything else is technically a 32-bit float.
Using something other than floats loses you speed, probably more than you'll gain memory-wise from packing into bytes.

At the end of the day it's up to you what you use, just don't come crying to us if it goes slow [wink]

Guest Anonymous Poster
Unsigned bytes for colors are supported without a speed hit.

Also, since the GF1 you can use shorts (16-bit ints) for all components without a speed hit. The current terrain renderer uses shorts for the vertex position and the normals; the rest gets calculated in a simple vertex shader. This brings the size down to a comfy 16 bytes per vertex, without a speed hit versus the full float precision I used before.
(Not sure when the Radeons started supporting it, but it's there in the R300 and up; I don't know about the R200 and earlier.)

Converting an unsigned byte to a float is a simple array lookup, hardly a speed hit. Make a 256-entry array of floats and initialize it like Array[i] = (float)i / 255.0f.

I use unsigned bytes for almost everything because I am very bandwidth-limited, and the less I have to send to the card every frame, the better.

For colours, ubytes are sufficient: they take less memory, save bandwidth, and don't incur a performance penalty, contrary to what some people have said. You really should use FOUR ubytes (add alpha, in other words), though; that's what the hardware supports natively in most cases, and the extra byte is normally free due to padding.

In the future, before making performance claims, PROFILE.

*shrugs*
the info I got was from the OpenGL performance tuning pdf (see sig) on page 36.

But hey, if you think you know more than NV/ATI, carry on [smile]

Edit:
As a side note, I guess you could say that if you are using glColor3ub then data transfer size is the least of your worries, as you are starving the GPU of data anyway, so you might as well send floats; the performance difference between hand-feeding a float and hand-feeding ubytes is... errm... well... non-existent, as people have already said...

[Edited by - _the_phantom_ on September 2, 2004 3:55:11 PM]

Guest Anonymous Poster
Unsigned bytes are a fast path for colors only; this also includes VBOs. Using them for anything other than the primary/secondary color attribute will slow things down.

P.S. The PDF you link to says not to use non-native types like doubles or (u)bytes for normals. It never mentions the use of ubytes for colors as a bad thing ;)

Guest Anonymous Poster
Quote:
Original post by _the_phantom_
Quote:
VBO Don'ts:
- Do not use non-native types
-- double, int, RGB color in ubyte format


Of course you shouldn't use RGB color as ubytes; that's 3 bytes per element, which is about as unaligned as you can possibly get. RGBA as ubytes is aligned to 32 bits and perfectly fine performance-wise. Current GPUs can read ubytes, shorts and floats natively, and that's valid for VBOs, VAR and standard vertex arrays. Storing colors as floats is not very smart: it takes four times the storage (if you have an alpha channel) and you get absolutely nothing in return.

