[D3D11] Why A8R8G8B8 turns into DXGI_FORMAT_B8G8R8A8_UNORM


Recommended Posts

Hi all,

I'm playing around with D3D11 for Windows 8 Metro. I'm loading a DDS into a staging ID3D11Resource and mapping it to a D3D11_MAPPED_SUBRESOURCE so I can access the color information. The D3D11_TEXTURE2D_DESC reports the Format as DXGI_FORMAT_B8G8R8A8_UNORM, but when I created the DDS it was in A8R8G8B8.

Can someone explain why it changes, and whether it's guaranteed to always change consistently to B8G8R8A8?

Thanks,

Share on other sites
Could you please attach such a DDS file? I'd like to verify this bug. Do you think it has something to do with the new SDK?

Share on other sites
Sure, I've attached it. It's just a file containing 2 pixels, saved as A8R8G8B8, that I was using to figure out how the color information is surfaced. Based on the DDSTextureLoader.cpp they've provided, the DXGI_FORMAT seems to be determined from the DDS_PIXELFORMAT. Here is the snippet of code that returns the DXGI_FORMAT:

    static DXGI_FORMAT GetDXGIFormat( const DDS_PIXELFORMAT& ddpf )
    {
        if (ddpf.flags & DDS_RGB)
        {
            // Note that sRGB formats are written using the "DX10" extended header
            switch (ddpf.RGBBitCount)
            {
            case 32:
                if (ISBITMASK(0x000000ff,0x0000ff00,0x00ff0000,0xff000000))
                {
                    return DXGI_FORMAT_R8G8B8A8_UNORM;
                }
                if (ISBITMASK(0x00ff0000,0x0000ff00,0x000000ff,0xff000000))
                {
                    return DXGI_FORMAT_B8G8R8A8_UNORM;
                }
                if (ISBITMASK(0x00ff0000,0x0000ff00,0x000000ff,0x00000000))
                {
                    return DXGI_FORMAT_B8G8R8X8_UNORM;
                }
                // ...

Here is the DDS_PIXELFORMAT for the attached .dds:

ddpf =
size: 32
flags: 65
fourCC: 0
RGBBitCount: 32

When I step through the code, it's the second if that returns the format. This is all a little over my head since I'm not really sure what it's even checking for.

What I assume, though, is that because I'm using their code, I'm at least guaranteed that the .dds I've attached will always come back with that format... please correct me if I'm wrong.

Thanks,

Share on other sites
I think A8R8G8B8 is just the name of the format, while B8G8R8A8 describes how the bytes are actually stored in memory.

That's not an established fact, though, just my own conclusion; I've noticed the same thing myself.

Share on other sites
This one's easy. See: http://en.wikipedia....GBA_color_space

"The terms ARGB, ARGB32, or ARGB8888 are often used, for example, in Macromedia products terminology or the Silverlight framework. The term BGRA may also be seen. This represents the order that the bytes for each channel are actually in memory when ARGB32 data is stored by a little-endian CPU (such as Intel processors)."

With DDS files, the A8R8G8B8 format is just a hangover from D3D9 and earlier days where that form of the terminology was used (and where, if you poked into the actual data, you'd find that it too was stored in BGRA order).

Edited by mhagain

Thanks, guys!
