Few questions about DXGI formats

Started by
4 comments, last by Hodgman 10 years, 6 months ago

I've been looking through the DXGI_FORMAT enumeration so I can familiarise myself with everything D3D11 has to offer. Now I have a few questions about some of the formats I didn't understand...

1. What the hell does XR_BIAS mean? There was a wealth of Google results, none of which I could even begin to understand. Could someone explain this in small words?

2. Why are there no formats with 16 or 8 bits per channel (BPC) and 3 channels? Eg, R32G32B32_FLOAT exists but R16G16B16 does not.

Seems like they'd be very useful for G-buffer organisation alone.

3. Why do some formats have 2 identical components? Eg R8G8_B8G8?

4. Some formats have a lot of wasted space marked as X - what's the point of this?

I think I know the answer to question 4 though. The X32 and X24 formats accompany the depth formats, so you can create a Texture2D with the R24G8_TYPELESS format, then create a DSV for it with D24_UNORM_S8_UINT to use it as a depth buffer, and an SRV with R24_UNORM_X8_TYPELESS to read the depth back in another shader (e.g. an SSAO shader). Am I right?
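That typeless/view pattern can be sketched as descriptor setup like the following (a minimal sketch with error handling omitted; `device` is assumed to be an existing ID3D11Device, and the 1280x720 size is just an example):

```cpp
#include <d3d11.h>

// Typeless resource, so the same memory can be viewed as both
// depth-stencil and shader input.
D3D11_TEXTURE2D_DESC texDesc = {};
texDesc.Width            = 1280;
texDesc.Height           = 720;
texDesc.MipLevels        = 1;
texDesc.ArraySize        = 1;
texDesc.Format           = DXGI_FORMAT_R24G8_TYPELESS;
texDesc.SampleDesc.Count = 1;
texDesc.Usage            = D3D11_USAGE_DEFAULT;
texDesc.BindFlags        = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;

ID3D11Texture2D* depthTex = nullptr;
device->CreateTexture2D(&texDesc, nullptr, &depthTex);

// DSV: interpret the 32 bits as 24-bit depth + 8-bit stencil.
D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc = {};
dsvDesc.Format        = DXGI_FORMAT_D24_UNORM_S8_UINT;
dsvDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
ID3D11DepthStencilView* dsv = nullptr;
device->CreateDepthStencilView(depthTex, &dsvDesc, &dsv);

// SRV: read the 24 depth bits back in a later pass (e.g. SSAO).
// The X8 stencil bits still occupy memory but are inaccessible here.
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
srvDesc.Format              = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;
srvDesc.ViewDimension       = D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MipLevels = 1;
ID3D11ShaderResourceView* srv = nullptr;
device->CreateShaderResourceView(depthTex, &srvDesc, &srv);
```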


2. Why are there no formats with 16 or 8 bits per channel (BPC) and 3 channels? Eg, R32G32B32_FLOAT exists but R16G16B16 does not.

Seems like they'd be very useful for G-buffer organisation alone.

Well, I don't know if you have worked with DX9 before, but I can tell you that even there, there is no 48-bit format like the one you describe. I think the reason is that the GPU is better at working with texel sizes that are a power of two. So while losing e.g. 8 bits from a 32-bit format would mean less memory used, the processor wouldn't be able to access the texture efficiently. AFAIK this has something to do with alignment, which would mean that in order to keep proper alignment you'd have to add 8 more bits back, and there you have your 32 again, even if you aren't using e.g. the alpha channel. One thing that makes me a bit unsure is that there are in fact 96-bit formats in DXGI, so I might well be mistaken. I'm still learning most of the "tricks" about GPUs myself, so don't hit me if I'm wrong, pls!

1. It specifies some weird rules for converting from floating point (which your pixel shader outputs) to integer. The rules for converting with XR_BIAS are specified here, and the normal conversion rules are listed here. I honestly have no idea what the XR_BIAS format is useful for.

2. Because no hardware out there supports the formats that you describe. GPUs still rely on fixed-function hardware for decoding and encoding textures, and they don't just work with arbitrary bit layouts.

3. I'm not sure about these.

4. An X is there to indicate that the memory footprint of that component exists in the resource, but you can't actually use it.

XR is extended range, to support a wider range of colors by sacrificing some precision. This page has a comparison table: http://www.techarp.com/showarticle.aspx?artno=645&pgno=6

3: The human eye is more sensitive to green than to red or blue, so sometimes it makes sense to allocate more bits to green.

Also, in YUV (and similar) color space you may want to use more bits for the Y channel (luma) than the U and V channels.

Niko Suni

3. Why do some formats have 2 identical components? Eg R8G8_B8G8?

To quote the MSDN, it's a kind of hardware-demosaiced format. I've never seen it used, but it's probably useful in hardware-accelerated video applications, where you use luminance-colour coordinates instead of RGB, and you're OK with having lower resolution/accuracy in the colour components as long as the luminance is full resolution.

A 16-bit packed RGB format that is analogous to UYVY (U0Y0, V0Y1, U2Y2, etc.). It requires a pixel pair to properly represent the color value. The first pixel in the pair contains 8 bits of green (in the low 8 bits) and 8 bits of red (in the high 8 bits). The second pixel contains 8 bits of green (in the low 8 bits) and 8 bits of blue (in the high 8 bits). The two pixels share the red and blue components, while each has a unique green component (R0G0, B0G1, R2G2, etc.).
When sampling this format in a pixel shader, the texture sampler does not normalize the colors; they remain in the range of 0.0f to 255.0f.



1. It specifies some weird rules for converting from floating point (which your pixel shader outputs) to integer. The rules for converting with XR_BIAS are specified here, and the normal conversion rules are listed here. I honestly have no idea what the XR_BIAS format is useful for.

Maybe something to do with TV-safe luminance ranges?

This topic is closed to new replies.
