Maintaining a 32-byte vertex size

Started by
8 comments, last by Brother Bob 17 years, 7 months ago
Is there any way to keep my vertex size to 32 bytes but still support positions, normals, 2D texture coordinates, and color? With the usual data types that totals 36 bytes (3 floats each for position and normal, 2 floats for the texture coordinates, and the color as 4 unsigned bytes). And if I cannot get down to 32, is it more efficient to add padding to make the size a multiple of 32?
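Roughly what I have, as a sketch (made-up field names; assuming plain floats for position, normal, and texture coordinates):

// Naive layout: 12 + 12 + 8 + 4 = 36 bytes, just over the 32-byte target.
struct Vertex
{
    float         position[3]; // 12 bytes
    float         normal[3];   // 12 bytes
    float         texcoord[2]; //  8 bytes
    unsigned char color[4];    //  4 bytes (RGBA)
};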
You can compress your tex coords so that each axis only takes up one unsigned short, and decompress it in the vertex shader. In this case, that's all you need to do to slide into 32 bytes.
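On the CPU side that packing could look something like this (just a sketch; it assumes your texture coordinates stay in the 0..1 range, and the shader multiplies by 1.0/65535.0 to decompress):

// Quantize a 0..1 texture coordinate into an unsigned short (0..65535).
inline unsigned short PackTexCoord(float t)
{
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;
    return (unsigned short)(t * 65535.0f + 0.5f);
}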
SlimDX | Ventspace Blog | Twitter | Diverse teams make better games. I am currently hiring capable C++ engine developers in Baltimore, MD.
Yeah, 36 is a really bad size. I think the worst size is 44, though, because a vertex that size can straddle three separate cache lines in the hardware :-(

You can use any of the base types to represent your data. Signed short is very popular; in DirectX it's SHORT4/SHORT4N and SHORT2/SHORT2N. "Unpacking" typically just means scaling by some factor (which you can hard-code into your engine, even).

For example, this is 32 bytes:

position: SHORT4 (w is 1)
texcoord: SHORT2
normal: SHORT4N (w is 0)
tangent: SHORT4N (w is 0)
color: UBYTE4

You can reconstruct the binormal in the shader with a cross product of the normal and tangent.
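In plain C++ terms that layout could look something like this (a sketch; names and field order are arbitrary):

// 8 + 4 + 8 + 8 + 4 = 32 bytes.
struct PackedVertex
{
    short         position[4]; // SHORT4,  w stored as 1
    short         texcoord[2]; // SHORT2
    short         normal[4];   // SHORT4N, w stored as 0
    short         tangent[4];  // SHORT4N, w stored as 0
    unsigned char color[4];    // UBYTE4
};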
enum Bool { True, False, FileNotFound };
Thanks for the reply. I had thought of that, but using that method would require me to use a vertex shader. I pretty much suspected that, aside from a shader, there would be no way to make 36 bytes equal 32 bytes. And if I recall correctly, glTexCoordPointer maps integer values directly.

Edit: Thanks also to hplus0603, I think my reply addresses your post as well.
Assuming that your normals are unit length, you can take advantage of the fact that the sum of the squares of the components equals one: store just two components, and reconstruct the third in the vertex shader as sqrt(1 - x² - y²). Note that this loses the sign of the third component. I haven't actually tried this, so no idea how it works in practice.
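Something along these lines, as a sketch (since the sign of the third component is lost, you'd need to be able to assume it, e.g. always-positive z in tangent space):

#include <cmath>

// Given x and y of a unit-length normal, recover |z| from x*x + y*y + z*z = 1.
inline float ReconstructNormalZ(float x, float y)
{
    float zz = 1.0f - x * x - y * y;
    return zz > 0.0f ? std::sqrt(zz) : 0.0f;
}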
You could also store the normals as unsigned shorts, just as Promit suggested for the texture coords.

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

Quote:Original post by jorgander
Thanks for the reply. I had thought of that, but using that method would require me to use a vertex shader. I pretty much suspected that, aside from a shader, there would be no way to make 36 bytes equal 32 bytes. And if I recall correctly, glTexCoordPointer maps integer values directly.

glTexCoordPointer does, but glNormalPointer works just fine with integers.
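For example (a sketch, assuming a tightly packed array of three GLshorts per normal):

#include <GL/gl.h>

// Three vertices' worth of packed normals; GL maps [-32768, 32767]
// back to approximately [-1, 1] for you.
static const GLshort normals[3 * 3] = {
        0,     0, 32767,  // +Z
        0, 32767,     0,  // +Y
    32767,     0,     0,  // +X
};

void SetNormalArray()
{
    glEnableClientState(GL_NORMAL_ARRAY);
    glNormalPointer(GL_SHORT, 0 /* tightly packed */, normals);
}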

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

Quote:Original post by swiftcoder
glTexCoordPointer does, but glNormalPointer works just fine with integers.


Looking at the docs for glNormalPointer, it says "Byte, short, or integer arguments are converted to floating-point with a linear mapping that maps the most positive representable integer value to 1.0, and the most negative representable integer value to -1.0." I suppose the visual quality would suffer too much if bytes were used, but I think shorts might be OK. Does anyone know whether 'most positive/negative representable integer value' is irrespective of the data type? I.e. on a 32-bit system, does it always map +1.0 to 0x7FFFFFFF even if bytes or shorts are specified, or does it use 127 for bytes, 32767 for shorts, and so on?
Quote:Original post by swiftcoder
You could also store the normals as unsigned shorts, just as Promit suggested for the texture coords.
It turns out that's bigger than necessary. You can collapse each of the three elements of the normal into a single unsigned byte without any loss of visual quality.
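One common way to do that, as a sketch (this assumes a 0..255 bias that the shader undoes with n = value / 255.0 * 2.0 - 1.0):

// Map a normal component from [-1, 1] into an unsigned byte [0, 255].
inline unsigned char PackNormalComponent(float n)
{
    if (n < -1.0f) n = -1.0f;
    if (n >  1.0f) n =  1.0f;
    return (unsigned char)((n * 0.5f + 0.5f) * 255.0f + 0.5f);
}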
SlimDX | Ventspace Blog | Twitter | Diverse teams make better games. I am currently hiring capable C++ engine developers in Baltimore, MD.
I have 56 bytes in my vertex struct and I don't see any slowdown vs. 44 bytes... I added tangent and bitangent vectors to my struct and there was no FPS drop...
Quote:Original post by jorgander
Looking at the docs for glNormalPointer, it says "Byte, short, or integer arguments are converted to floating-point with a linear mapping that maps the most positive representable integer value to 1.0, and the most negative representable integer value to -1.0." I suppose the visual quality would suffer too much if bytes were used, but I think shorts might be OK. Does anyone know whether 'most positive/negative representable integer value' is irrespective of the data type? I.e. on a 32-bit system, does it always map +1.0 to 0x7FFFFFFF even if bytes or shorts are specified, or does it use 127 for bytes, 32767 for shorts, and so on?

If the mapping depended only on the largest integer type, accepting shorts and bytes would be quite useless, since you could never utilize the full range with anything but ints. The conversion is based on the type you specify: the maximum value of that type is mapped to 1.
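In other words, when you pack the data you scale by that type's own maximum, something like this (a rough sketch; the exact rounding/conversion formula differs slightly between GL spec versions):

// Quantize a [-1, 1] value so that the type's maximum maps back to 1.0.
inline signed char PackToByte(float v)  { return (signed char)(v * 127.0f); }
inline short       PackToShort(float v) { return (short)(v * 32767.0f); }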

This topic is closed to new replies.
