DXGI_FORMAT_R8G8B8A8_UNORM

Started by Quat
1 comment, last by Quat 13 years ago
I was using D3DXCOLOR for the color member of my vertex format, but decided to try a 32-bit color. So I specified DXGI_FORMAT_R8G8B8A8_UNORM in the input description and used a DWORD for the color member in the vertex structure. However, the colors were off compared to when I was using D3DXCOLOR. After some time I found out I could not use the ARGB byte order (which D3D9 used); instead, I had to use ABGR. Can somebody explain this? From the name DXGI_FORMAT_R8G8B8A8_UNORM, I would have thought I needed RGBA order, but for some reason I have to use the reverse.
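
Roughly, the setup looks like this (just a sketch to show what I mean; the struct layout and offsets are made up for illustration, not my exact code):

struct Vertex
{
    D3DXVECTOR3 Pos;     // 12 bytes
    UINT        Color;   // packed 32-bit color, one byte per channel
};

D3D10_INPUT_ELEMENT_DESC layout[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,  D3D10_INPUT_PER_VERTEX_DATA, 0 },
    { "COLOR",    0, DXGI_FORMAT_R8G8B8A8_UNORM,  0, 12, D3D10_INPUT_PER_VERTEX_DATA, 0 },
};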
-----Quat
Direct3D 10 defines its formats in the order the color channels appear in memory.

D3DXCOLOR generates a DWORD. If you write that DWORD to memory, the bytes get swizzled because x86 CPUs are little endian. That's the reason the D3D9 macro produces the wrong result for D3D10.
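
You can see the swizzle with a tiny standalone test (the color value below is just an example):

#include <cstdio>
#include <cstring>

int main()
{
    // D3D9-style packed color: 0xAARRGGBB, alpha in the most significant byte.
    unsigned int argb = 0xFF204060;   // A=FF, R=20, G=40, B=60

    unsigned char bytes[4];
    std::memcpy(bytes, &argb, sizeof(argb));

    // A little-endian x86 CPU stores the least significant byte first, so this
    // prints "60 40 20 FF": the bytes sit in memory as B, G, R, A, while
    // DXGI_FORMAT_R8G8B8A8_UNORM reads them as R, G, B, A.
    std::printf("%02X %02X %02X %02X\n", bytes[0], bytes[1], bytes[2], bytes[3]);
    return 0;
}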

So if I have this kind of function:

UINT ARGB2ABGR(UINT argb)
{
    // Pull out the four channels of a D3D9-style 0xAARRGGBB color...
    BYTE A = (argb >> 24) & 0xff;
    BYTE R = (argb >> 16) & 0xff;
    BYTE G = (argb >> 8) & 0xff;
    BYTE B = (argb >> 0) & 0xff;

    // ...and repack them with R and B swapped: 0xAABBGGRR.
    return (A << 24) | (B << 16) | (G << 8) | (R << 0);
}

In big endian, the returned color's byte order is the way we would write it in English: A, B, G, R;
but in little endian, the bytes are actually stored in memory as R, G, B, A?
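
To double-check that, here is a self-contained test along those lines (it just repeats the ARGB2ABGR function from above; the sample color is made up):

#include <cstdio>
#include <cstring>

typedef unsigned int  UINT;
typedef unsigned char BYTE;

UINT ARGB2ABGR(UINT argb)
{
    BYTE A = (argb >> 24) & 0xff;
    BYTE R = (argb >> 16) & 0xff;
    BYTE G = (argb >> 8) & 0xff;
    BYTE B = (argb >> 0) & 0xff;
    return (A << 24) | (B << 16) | (G << 8) | (R << 0);
}

int main()
{
    UINT abgr = ARGB2ABGR(0xFF204060);   // 0xAARRGGBB -> 0xAABBGGRR = 0xFF604020

    BYTE bytes[4];
    std::memcpy(bytes, &abgr, sizeof(abgr));

    // On a little-endian machine this prints "20 40 60 FF": R, G, B, A in
    // memory, which is the order DXGI_FORMAT_R8G8B8A8_UNORM expects.
    std::printf("%02X %02X %02X %02X\n", bytes[0], bytes[1], bytes[2], bytes[3]);
    return 0;
}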
-----Quat

