DXGI_FORMAT_R8G8B8_UNORM?

4 comments, last by Jason Z 9 years, 9 months ago

I am working with DirectX for the first time in a long time. I am generating input layouts and noticed that DXGI_FORMAT_R8G8B8_UNORM doesn't exist. R8, R8G8, and R8G8B8A8 all exist, but for some reason R8G8B8 doesn't. Why is that? Why add support for 1, 2, and 4 normalized byte components but leave out 3? Storing an RGB colour as three bytes seems like a common use case, yet it's left out.


Alignment values for any kind of structure are usually a power of two: 1 byte, 2 bytes, 4 bytes, 8 bytes, etc.

3-byte alignment actually causes a lot of complications for the memory hardware in the GPU. Apparently the hardware designers decided that possibly wasting up to 25% of memory is better than overcomplicating their memory systems. In practice the byte is rarely wasted anyway -- most games need translucency, specular, roughness, or normal data as well as colours, and that extra data can go into the "spare" channel.
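For example (a minimal sketch, assuming a typical position-plus-colour D3D11 vertex), you simply declare the colour as R8G8B8A8_UNORM and let the fourth byte ride along:

```cpp
#include <cstdint>
#include <d3d11.h>

// 16-byte vertex: the colour is padded to 4 bytes so every attribute
// starts on a 4-byte boundary.
struct Vertex
{
    float   position[3]; // offset 0, 12 bytes
    uint8_t colour[4];   // offset 12: R, G, B plus one spare byte
};

static const D3D11_INPUT_ELEMENT_DESC kLayout[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0,  0,
      D3D11_INPUT_PER_VERTEX_DATA, 0 },
    // There is no R8G8B8_UNORM, so the three colour bytes are declared
    // as R8G8B8A8_UNORM and the A channel is simply ignored (or reused).
    { "COLOR",    0, DXGI_FORMAT_R8G8B8A8_UNORM,  0, 12,
      D3D11_INPUT_PER_VERTEX_DATA, 0 },
};
```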

However, some of the compressed formats such as BC1 do support plain RGB without the wasted A channel.

In OpenGL, the API lets you create an RGB8 texture, although it's effectively lying -- internally the driver will typically create an RGBA8 texture and insert an extra padding byte into every pixel that you give it...
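For illustration, a minimal GL sketch (names and dimensions are placeholders): the call below requests a tightly packed RGB8 texture, and the driver is free to expand it to RGBA8 behind the scenes:

```cpp
#include <GL/gl.h>

void UploadRgbTexture(GLuint tex, int width, int height,
                      const unsigned char* rgbPixels) // 3 bytes per pixel
{
    glBindTexture(GL_TEXTURE_2D, tex);
    // Rows of 3-byte pixels are rarely 4-byte aligned, so relax the default.
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    // GL_RGB8 is only a *request*; the driver may store RGBA8 internally.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, rgbPixels);
}
```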

He wants to use it for input layouts though, but texture formats and vertex formats have been more or less unified since D3D10. I once tried to "alias" an RGBA8 by using overlapping elements to regain that byte, but that doesn't work either, since element offsets need to be 4-byte aligned too (Edit: correction -- the alignment depends on the format used, e.g. RG8 can be 2-byte aligned).

Conclusion:
  • Don't worry about wasting that byte. (Edit: or use it for something else in your vertex.)
  • Use other 4-byte types with higher precision instead, e.g. R10G10B10A2_UNorm or R11G11B10_Float -- the latter is quite peculiar, since it's a float format without a sign bit. (A packing sketch for the former follows below.)
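As a rough sketch of the CPU-side packing for R10G10B10A2_UNorm (PackR10G10B10A2 is a hypothetical helper, not from any D3D header; red sits in the least-significant bits):

```cpp
#include <algorithm>
#include <cstdint>

// Quantise [0,1] floats into the R10G10B10A2_UNORM bit layout:
// R = bits 0-9, G = bits 10-19, B = bits 20-29, A = bits 30-31.
uint32_t PackR10G10B10A2(float r, float g, float b, float a)
{
    auto q10 = [](float v) { return uint32_t(std::clamp(v, 0.0f, 1.0f) * 1023.0f + 0.5f); };
    auto q2  = [](float v) { return uint32_t(std::clamp(v, 0.0f, 1.0f) * 3.0f + 0.5f); };
    return q10(r) | (q10(g) << 10) | (q10(b) << 20) | (q2(a) << 30);
}
```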


Quote: "He wants to use it for input layouts though, but texture formats and vertex formats have been more or less unified since D3D10."

You can't use 3-byte-aligned vertex formats in D3D9 or GL either. D3D9/10/11 just don't have the formats available, and GL will either return an error or lie to you and insert the padding behind your back.

You can always create your vertex buffer as a structured buffer or byte address buffer, and then use SV_VertexID to manually unpack your data in the shader. However, it's almost certainly not going to save you any performance.
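As a rough sketch of that idea, assuming a hypothetical tightly packed 15-byte vertex (a float3 position plus three colour bytes) in a ByteAddressBuffer -- note how the 4-byte-aligned Load() requirement forces exactly the kind of manual shifting that the fixed formats spare you:

```hlsl
ByteAddressBuffer Vertices : register(t0);

struct VSOut
{
    float4 pos    : SV_Position;
    float3 colour : COLOR;
};

// Reads a dword from an arbitrary byte offset using two aligned Load()s.
// (Out-of-bounds reads on a ByteAddressBuffer return zero in D3D11, so the
// trailing read past the final vertex is harmless.)
uint LoadUnalignedDword(uint addr)
{
    uint lo    = Vertices.Load(addr & ~3u);
    uint hi    = Vertices.Load((addr & ~3u) + 4u);
    uint shift = (addr & 3u) * 8u;
    return shift == 0u ? lo : (lo >> shift) | (hi << (32u - shift));
}

VSOut main(uint id : SV_VertexID)
{
    uint base = id * 15u; // 15-byte stride: 12 bytes position + 3 bytes RGB

    float3 pos = float3(asfloat(LoadUnalignedDword(base)),
                        asfloat(LoadUnalignedDword(base + 4u)),
                        asfloat(LoadUnalignedDword(base + 8u)));

    uint rgb = LoadUnalignedDword(base + 12u); // top byte belongs to the next vertex
    float3 colour = float3(rgb & 0xFFu,
                           (rgb >> 8u) & 0xFFu,
                           (rgb >> 16u) & 0xFFu) / 255.0;

    VSOut o;
    o.pos    = float4(pos, 1.0); // transforms omitted for brevity
    o.colour = colour;
    return o;
}
```

The draw side just binds the buffer as an SRV and issues a plain Draw() with no input layout bound.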

Quote: "You can always create your vertex buffer as a structured buffer or byte address buffer, and then use SV_VertexID to manually unpack your data in the shader. However, it's almost certainly not going to save you any performance."

It won't save performance, for sure, but it opens a lot of doors for you. Indirect draw calls, for example, are much easier to do with a structured buffer as the data storage.
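For example (a rough sketch; error handling omitted), the indirect-args buffer is just four UINTs created with the DRAWINDIRECT_ARGS misc flag, which a compute shader can later rewrite in place:

```cpp
#include <d3d11.h>

ID3D11Buffer* CreateIndirectArgs(ID3D11Device* device)
{
    // { VertexCountPerInstance, InstanceCount, StartVertexLocation, StartInstanceLocation }
    const UINT args[4] = { 36, 1, 0, 0 }; // placeholder values, e.g. one 36-vertex mesh

    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth = sizeof(args);
    desc.Usage     = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_UNORDERED_ACCESS; // lets a compute shader update the counts
    desc.MiscFlags = D3D11_RESOURCE_MISC_DRAWINDIRECT_ARGS;

    D3D11_SUBRESOURCE_DATA init = { args, 0, 0 };
    ID3D11Buffer* buffer = nullptr;
    device->CreateBuffer(&desc, &init, &buffer);
    return buffer;
}

// Later, once a compute shader has written the real counts:
//   context->DrawInstancedIndirect(argsBuffer, 0);
```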

