DXGI_FORMAT_R8G8B8_UNORM?


#1 HappyCoder   Members   -  Reputation: 2365


Posted 28 June 2014 - 02:44 AM

I am working with DirectX for the first time in a long time. I am generating input layouts and noticed that DXGI_FORMAT_R8G8B8_UNORM doesn't exist. R8, R8G8, and R8G8B8A8 all exist, but for some reason R8G8B8 doesn't. Why is that? Why add support for 1, 2, and 4 normalized byte components but leave out 3? Storing an RGB colour as bytes seems like a common use case, yet it is left out.
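For concreteness, here's roughly what I'd like to be able to declare (a minimal sketch; the names are just for illustration):

    #include <cstdint>
    #include <d3d11.h>

    // A vertex with a position and a tightly packed 3-byte RGB colour.
    struct Vertex
    {
        float   position[3]; // POSITION: DXGI_FORMAT_R32G32B32_FLOAT, offset 0
        uint8_t color[3];    // COLOR: three normalized bytes at offset 12 -- but there
                             // is no DXGI_FORMAT_R8G8B8_UNORM to describe this element
    };

    const D3D11_INPUT_ELEMENT_DESC layout[] =
    {
        { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,
          D3D11_INPUT_PER_VERTEX_DATA, 0 },
        // { "COLOR", 0, DXGI_FORMAT_R8G8B8_UNORM, 0, 12, ... }  // <-- this format doesn't exist
    };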


Edited by HappyCoder, 28 June 2014 - 02:45 AM.



#2 Hodgman   Moderators   -  Reputation: 28476


Posted 28 June 2014 - 03:53 AM

Alignment values for any kind of structure are usually a power of two: 1 byte, 2 bytes, 4 bytes, 8 bytes, etc.

 

3-byte alignment actually causes a lot of complications for the memory hardware in the GPU. Apparently the hardware designers have decided that possibly wasting up to 25% of memory is better than overcomplicating their memory systems. It's not actually that common a use case, though -- most games need translucency, specular, roughness, or normal data as well as colour, and that extra data can go into the "spare" channel.

 

However, some of the compressed formats such as BC1 do support plain RGB without the wasted A channel.
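For example, a texture created along these lines (a minimal sketch; device creation and the pre-compressed BC1 data are assumed) stores RGB in 8 bytes per 4x4 block, with no per-pixel alpha byte to pay for:

    #include <d3d11.h>

    // Minimal sketch: describe an immutable BC1 (DXT1) texture. 'blocks' points
    // at pre-compressed BC1 data (8 bytes per 4x4 block).
    ID3D11Texture2D* CreateBc1Texture(ID3D11Device* device, const void* blocks,
                                      UINT width, UINT height)
    {
        D3D11_TEXTURE2D_DESC desc = {};
        desc.Width            = width;   // BC formats want block-aligned dimensions
        desc.Height           = height;
        desc.MipLevels        = 1;
        desc.ArraySize        = 1;
        desc.Format           = DXGI_FORMAT_BC1_UNORM;
        desc.SampleDesc.Count = 1;
        desc.Usage            = D3D11_USAGE_IMMUTABLE;
        desc.BindFlags        = D3D11_BIND_SHADER_RESOURCE;

        // BC1 row pitch is counted in 4x4 blocks: (width / 4) blocks * 8 bytes each.
        D3D11_SUBRESOURCE_DATA init = {};
        init.pSysMem     = blocks;
        init.SysMemPitch = (width / 4) * 8;

        ID3D11Texture2D* texture = nullptr;
        device->CreateTexture2D(&desc, &init, &texture);
        return texture;
    }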

 

In OpenGL, the API lets you create an RGB8 texture, although it's just lying -- internally it will create an RGBA8 texture and insert an extra padding byte into every pixel that you give it...
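In code that looks something like this (a minimal sketch, error handling omitted); GL accepts the tightly packed RGB data, but the driver is typically free to expand it to RGBA8 behind the scenes:

    #include <GL/gl.h>

    // Minimal sketch: upload tightly packed 3-byte RGB pixels as a GL_RGB8 texture.
    GLuint CreateRgb8Texture(GLsizei width, GLsizei height, const unsigned char* rgbPixels)
    {
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // rows of 3-byte pixels aren't 4-byte aligned
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0,
                     GL_RGB, GL_UNSIGNED_BYTE, rgbPixels);
        return tex;
    }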



#3 unbird   Crossbones+   -  Reputation: 4968


Posted 28 June 2014 - 04:15 AM

He wants to use it for input layouts, though. That said, texture formats and vertex types were somewhat unified from DX10 onwards, so much of the same reasoning applies. I once tried to "alias" an RGBA8 by using overlapping elements to regain that byte, but that doesn't work either, since element offsets need to be 4-byte aligned too (Edit: correction -- the required alignment depends on the format used, e.g. RG8 can be 2-byte aligned).

Conclusion:
  • Don't bother trying to save that byte. (Edit: Or use it for something else in your vertex -- see the sketch below.)
  • Or use another 4-byte type with higher precision, e.g. R10G10B10A2_UNorm or R11G11B10_Float. The latter is quite peculiar, since it's a float type without a sign.
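
To illustrate the first option, a minimal sketch (illustrative names) that declares the colour as four bytes and either ignores or reuses the fourth one:

    #include <cstdint>
    #include <d3d11.h>

    struct Vertex
    {
        float   position[3]; // 12 bytes
        uint8_t color[4];    // RGB + one spare byte (padding, or e.g. a per-vertex flag)
    };

    const D3D11_INPUT_ELEMENT_DESC layout[] =
    {
        { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0,  0,
          D3D11_INPUT_PER_VERTEX_DATA, 0 },
        { "COLOR",    0, DXGI_FORMAT_R8G8B8A8_UNORM,  0, 12,
          D3D11_INPUT_PER_VERTEX_DATA, 0 },
        // Higher-precision alternative in the same 4 bytes:
        // { "COLOR", 0, DXGI_FORMAT_R10G10B10A2_UNORM, 0, 12,
        //   D3D11_INPUT_PER_VERTEX_DATA, 0 },
    };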


#4 Hodgman   Moderators   -  Reputation: 28476


Posted 29 June 2014 - 08:57 AM


He wants to use it for input layouts, though. That said, texture formats and vertex types were somewhat unified from DX10 onwards, so much of the same reasoning applies.
You can't use 3-byte aligned vertex formats in D3D9 or GL either. D3D9/10/11 just don't have the formats available, and GL will either return an error, or lie to you and insert the padding behind your back.

#5 MJP   Moderators   -  Reputation: 10547


Posted 29 June 2014 - 04:06 PM

You can always create your vertex buffer as a structured buffer or byte address buffer, and then use SV_VertexID to manually unpack your data in the shader. However, it's almost certainly not going to save you any performance.
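
A minimal sketch of that approach in HLSL (hypothetical names, and assuming a 16-byte vertex: a float3 position plus four colour bytes). With more bit twiddling you could pack the vertices tighter, since the 4-byte alignment of the raw loads is the only real constraint:

    // Vertex data bound as a raw buffer SRV instead of a vertex buffer.
    ByteAddressBuffer gVertices : register(t0);

    struct VSOutput
    {
        float4 position : SV_Position;
        float4 color    : COLOR0;
    };

    VSOutput VSMain(uint vertexId : SV_VertexID)
    {
        const uint stride = 16;                     // 12 bytes position + 4 bytes colour
        uint base = vertexId * stride;

        float3 position = asfloat(gVertices.Load3(base));
        uint   packed   = gVertices.Load(base + 12);

        // Manually unpack four UNORM bytes (R in the lowest byte).
        float4 color = float4((packed >>  0) & 0xFF,
                              (packed >>  8) & 0xFF,
                              (packed >> 16) & 0xFF,
                              (packed >> 24) & 0xFF) / 255.0;

        VSOutput output;
        output.position = float4(position, 1.0);    // transforms omitted
        output.color    = color;
        return output;
    }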



#6 Jason Z   Crossbones+   -  Reputation: 4846


Posted 29 June 2014 - 05:40 PM

You can always create your vertex buffer as a structured buffer or byte address buffer, and then use SV_VertexID to manually unpack your data in the shader. However, it's almost certainly not going to save you any performance.

 

It won't save performance, for sure, but it opens lots of doors for you. Indirect draw calls are much easier to do with a structured buffer as the data storage system, for example.
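
For instance, the arguments for an indirect draw are just four UINTs in a GPU buffer, which a compute shader can fill in, while the vertex shader pulls its data from a structured or raw buffer as in the previous post (a minimal sketch, hypothetical names, error handling omitted):

    #include <d3d11.h>

    // Minimal sketch: create an argument buffer for DrawInstancedIndirect and issue the draw.
    // Args layout: { VertexCountPerInstance, InstanceCount, StartVertexLocation, StartInstanceLocation }
    void IssueIndirectDraw(ID3D11Device* device, ID3D11DeviceContext* context, UINT vertexCount)
    {
        UINT args[4] = { vertexCount, 1, 0, 0 };

        D3D11_BUFFER_DESC desc = {};
        desc.ByteWidth = sizeof(args);
        desc.Usage     = D3D11_USAGE_DEFAULT;
        desc.BindFlags = D3D11_BIND_UNORDERED_ACCESS;           // lets a compute shader rewrite the args
        desc.MiscFlags = D3D11_RESOURCE_MISC_DRAWINDIRECT_ARGS;

        D3D11_SUBRESOURCE_DATA init = { args, 0, 0 };
        ID3D11Buffer* argsBuffer = nullptr;
        device->CreateBuffer(&desc, &init, &argsBuffer);

        // Vertex data comes from a buffer SRV fetched via SV_VertexID, so no input layout is needed.
        context->IASetInputLayout(nullptr);
        context->DrawInstancedIndirect(argsBuffer, 0);

        argsBuffer->Release();
    }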






