jorgander

Maintaining 32 byte vertex size


Is there any way to keep my vertex size to 32 bytes but support positions, normals, 2D texture coordinates, and color? Assuming the usual data types, that would total 36 bytes (with color stored as 4 unsigned bytes). And if I cannot do that, is it more efficient to add filler to pad the size to a multiple of 32?

You can compress your tex coords so that each axis only takes up one unsigned short, and decompress it in the vertex shader. In this case, that's all you need to do to fit into 32 bytes.

Yeah, 36 is a really bad size. Although I think the worst size is 44, because a 44-byte vertex can straddle three separate cache lines in the hardware :-(

You can use any of the base types to represent your data. Signed short is very popular; in DirectX it's SHORT4/SHORT4N and SHORT2/SHORT2N. "Unpacking" typically just means scaling by some factor (which you can hard-code into your engine, even).

For example, this is 32 bytes:

position: SHORT4 (w is 1)
texcoord: SHORT2
normal: SHORT4N (w is 0)
tangent: SHORT4N (w is 0)
color: UBYTE4

You can reconstruct the binormal in the shader with a cross product of the normal and tangent.

Thanks for the reply. I had thought of that, but using that method would require me to use a vertex shader. I pretty much suspected that, aside from a shader, there would be no way to make 36 bytes equal 32 bytes. And if I recall correctly, glTexCoordPointer maps integer values directly.

Edit: Thanks also to hplus0603, I think my reply addresses your post as well.

Assuming that your normals are unit length, you can take advantage of the fact that the sum of the squares of the components equals one to store the normal as just two components, and then reconstruct the third in the vertex shader as sqrt(1 - x² - y²) (the sign has to be stored separately or assumed by convention). I haven't actually tried this, so no idea how it works in practice.
You could also store the normals as unsigned shorts, just as Promit suggested for the texture coords.

Quote:
Original post by jorgander
Thanks for the reply. I had thought of that, but using that method would require me to use a vertex shader. I pretty much suspected that, aside from a shader, there would be no way to make 36 bytes equal 32 bytes. And if I recall correctly, glTexCoordPointer maps integer values directly.

glTexCoordPointer does, but glNormalPointer works just fine with integers.

Quote:
Original post by swiftcoder
glTexCoordPointer does, but glNormalPointer works just fine with integers.


Looking at the docs for glNormalPointer, it says "Byte, short, or integer arguments are converted to floating-point with a linear mapping that maps the most positive representable integer value to 1.0, and the most negative representable integer value to -1.0." I suppose the visual quality would suffer too much if bytes were used, but I think shorts might be OK. Does anyone know whether 'most positive/negative representable integer value' is irrespective of the data type? I.e., on a 32-bit system, does it always map 0x7FFFFFFF to +1.0 even if bytes or shorts are specified, or does it use 127 for bytes, 32767 for shorts, and so on?

Quote:
Original post by swiftcoder
You could also store the normals as unsigned shorts, just as Promit suggested for the texture coords.
It turns out that's bigger than necessary. You can collapse each of the three components of the normal into a single unsigned byte without any noticeable loss of visual quality.

Quote:
Original post by jorgander
Looking at the docs for glNormalPointer, it says "Byte, short, or integer arguments are converted to floating-point with a linear mapping that maps the most positive representable integer value to 1.0, and the most negative representable integer value to -1.0." I suppose the visual quality would suffer too much if bytes were used, but I think shorts might be OK. Does anyone know whether 'most positive/negative representable integer value' is irrespective of the data type? I.e., on a 32-bit system, does it always map 0x7FFFFFFF to +1.0 even if bytes or shorts are specified, or does it use 127 for bytes, 32767 for shorts, and so on?

If the mapping depended only on the largest integer type, accepting shorts and bytes would be quite useless, since you could never reach the full [-1, 1] range with anything but ints. The conversion is based on the type used: the maximum value of the specified type is mapped to 1.0.

