reducing vertex data size

3 comments, last by joe1024 15 years, 11 months ago
Right now I'm sending vertices that take up 92 bytes each (23 floats, in vectors of 3 and 4 floats). I'd like to make these vertices smaller. Most of these floats don't need 32 bits of precision; 8-bit fixed point from 0.0 to 1.0 is enough (i.e. glColor4ub format). I'm passing all this data to the vertex shader through texture coordinates.

According to the OpenGL reference pages, glTexCoordPointer will only take GL_SHORT, GL_INT, GL_FLOAT, or GL_DOUBLE. Is there some extension that lets me pass 8-bit fixed point or unsigned bytes? I could also use glColorPointer, but that only gives me one (maybe two) colors.

As for Direct3D, I should be able to specify D3DDECLUSAGE_TEXCOORD with D3DDECLTYPE_UBYTE4N without a problem, right? Any ideas?
Just use generic vertex attributes (glVertexAttribPointer and friends). They let you pass pretty much every imaginable format. They're part of ARB_vertex_program and/or ARB_vertex_shader.
How does one map the generic vertex attributes to Cg shaders?
Be aware that, depending on the card/driver, not all formats are optimized equally, so you might get smaller memory usage at the cost of speed.
While I'm not sure whether you can use the D3DDECLTYPE_UBYTE4N flag for D3DDECLUSAGE_TEXCOORD, I'm pretty sure you can decompress the UV coords in a shader practically for free, since that's what I'm doing all the time.
Compress them to 16-bit values (range 0-65535), since that's FAR more precision than you'll need, and decompress when writing the data to the oT0 register.

If you know that specific objects get small textures (256/512), you could use free bytes (8-bit precision) in other parts of the vertex format and thus get the UVs into the vertex format for free.

If 8-bit precision is not enough for you, you'll definitely find usable bits in other parts of the vertex format and can combine them for 9-bit (0-511) or 10-bit (0-1023) precision. But that requires a few more instructions to compose the value correctly.

Having a few more instructions in the shader should be worth it if it means not spending 8 bytes on UV coordinates. Remember that these days cards have around 200 stream processors, so there's no harm in spending a few extra instructions if you can save a few MB in the end.

Plus, if saving those 8 bytes gets your vertex size down to a cache-friendly stride, performance could actually rise despite the longer shader.

And, of course, if you're mostly pixel-shader bound anyway, none of the above will make any measurable difference in performance ;-)

This topic is closed to new replies.
