hupsilardee

Few questions about DXGI formats


I've been looking through the DXGI_FORMAT enumeration so I can familiarise myself with everything D3D11 has to offer. Now I have a few questions about some of the formats I didn't understand...

 

1. What the hell does XR_BIAS mean? Googling turned up a wealth of results, none of which I could even begin to understand. Could someone explain this in small words?

 

2. Why are there no formats with 16 or 8 bits per channel (BPC) and 3 channels? Eg, R32G32B32_FLOAT exists but R16G16B16 does not.

Seems like they'd be very useful for G-buffer organisation alone.

 

3. Why do some formats have 2 identical components? Eg R8G8_B8G8?

 

4. Some formats have a lot of wasted space marked as X - what's the point of this?

 

I think I know the answer to question 4 though. The X32 or X24 formats accompany the depth formats, so you can create a Texture2D with the R24G8_TYPELESS format, then create a DSV for it with D24_UNORM_S8_UINT to use as a depth buffer, and an SRV with R24_UNORM_X8_TYPELESS to read the depth back in another shader (e.g. an SSAO shader). Am I right?
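
For what it's worth, here's roughly what I mean in code (untested sketch; device, width and height are assumed to already exist):

    D3D11_TEXTURE2D_DESC td = {};
    td.Width = width;
    td.Height = height;
    td.MipLevels = 1;
    td.ArraySize = 1;
    td.Format = DXGI_FORMAT_R24G8_TYPELESS;   // typeless storage
    td.SampleDesc.Count = 1;
    td.Usage = D3D11_USAGE_DEFAULT;
    td.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
    ID3D11Texture2D* tex = nullptr;
    device->CreateTexture2D(&td, nullptr, &tex);

    // DSV: interpret the 24 bits as depth and the 8 bits as stencil
    D3D11_DEPTH_STENCIL_VIEW_DESC dsvd = {};
    dsvd.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
    dsvd.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
    ID3D11DepthStencilView* dsv = nullptr;
    device->CreateDepthStencilView(tex, &dsvd, &dsv);

    // SRV: read the same 24 bits as UNORM depth; the X8 part is unused
    D3D11_SHADER_RESOURCE_VIEW_DESC srvd = {};
    srvd.Format = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;
    srvd.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
    srvd.Texture2D.MipLevels = 1;
    ID3D11ShaderResourceView* srv = nullptr;
    device->CreateShaderResourceView(tex, &srvd, &srv);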

Edited by hupsilardee


2. Why are there no formats with 16 or 8 bits per channel (BPC) and 3 channels? Eg, R32G32B32_FLOAT exists but R16G16B16 does not.

Seems like they'd be very useful for G-buffer organisation alone.

 

Well, I don't know if you have worked with DX9 before, but I can tell you that even there, there was no 48-bit format like the one you describe. I think the reason is that the GPU works best with texel sizes that are a power of two. Dropping e.g. the 8 bits of alpha from a 32-bit format would mean less memory, but the GPU wouldn't be able to sample that texture efficiently. AFAIK this has something to do with alignment: to keep texels properly aligned you'd have to pad those 8 bits back in, and then you're at 32 bits again, even if you aren't using e.g. the alpha channel. One thing that makes me a bit unsure, though, is that there are in fact 96-bit formats in DXGI, so I might well be mistaken. I'm still learning most of the "tricks" about GPUs myself, so don't hit me if I'm wrong pls
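
To illustrate the alignment point with the R8G8B8 case (just my own back-of-the-envelope sketch, not from any docs):

    // Byte offsets of tightly packed 24-bit (R8G8B8) texels vs padded
    // 32-bit (R8G8B8A8 / R8G8B8X8-style) texels
    #include <cstdio>

    int main()
    {
        for (int i = 0; i < 4; ++i)
            std::printf("texel %d: 24-bit offset %2d, 32-bit offset %2d\n",
                        i, i * 3, i * 4);
        // 24-bit: 0, 3, 6, 9  -> texels straddle 4-byte boundaries
        // 32-bit: 0, 4, 8, 12 -> every texel starts on a 4-byte boundary
    }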

Edited by Juliean


1. It specifies some weird rules for converting from floating point (which your pixel shader outputs) to integer. The rules for converting with XR_BIAS are specified here, and the normal conversion rules are listed here. I honestly have no idea what the XR_BIAS format is useful for. (There's a rough sketch of the conversion at the end of this post.)

2. Because no hardware out there supports the formats that you describe. GPUs still rely on fixed-function hardware for decoding and encoding textures, and they don't just work with arbitrary bit formats.

 

3. I'm not sure about these.

4. An X is there to indicate that the memory footprint of that component exists in the resource, but you can't actually use it.
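
For anyone curious, here's roughly what the XR_BIAS conversion rules boil down to, as far as I can tell. This is only a sketch, and the exact constants are my reading of the spec, so double-check them before relying on this. Each 10-bit channel of R10G10B10_XR_BIAS_A2_UNORM covers about [-0.753, 1.253] instead of the usual [0, 1]:

    #include <algorithm>

    // 10-bit XR_BIAS channel -> float: subtract the 0x180 (384) bias,
    // then scale by 510. NOTE: constants are from my reading of the spec.
    float XrBiasToFloat(unsigned v)
    {
        return (int(v & 0x3FF) - 0x180) / 510.0f;
    }

    // float -> 10-bit XR_BIAS channel: clamp to the representable range,
    // scale, re-apply the bias, round to nearest
    unsigned FloatToXrBias(float f)
    {
        float c = std::min(std::max(f, -384.0f / 510.0f), 639.0f / 510.0f);
        return unsigned(c * 510.0f + 384.0f + 0.5f) & 0x3FF;
    }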


3: The human eye is more sensitive to green than to red or blue, so sometimes it makes sense to allocate more bits to green.

 

Also, in YUV (and similar) color space you may want to use more bits for the Y channel (luma) than the U and V channels. 
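
A concrete example of giving green the extra bit is B5G6R5_UNORM. Here's a by-hand decode (illustrative sketch only; the sampler normally does this for you):

    // DXGI_FORMAT_B5G6R5_UNORM: blue in bits 4:0, green in bits 10:5,
    // red in bits 15:11 -- green gets 6 bits, red and blue get 5
    void DecodeB5G6R5(unsigned short v, float& r, float& g, float& b)
    {
        b = float(v & 0x1F) / 31.0f;
        g = float((v >> 5) & 0x3F) / 63.0f;
        r = float((v >> 11) & 0x1F) / 31.0f;
    }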

Edited by Nik02


3. Why do some formats have 2 identical components? Eg R8G8_B8G8?

To quote the MSDN, it's a kind of hardware-demosaiced format. I've never seen it used, but it's probably useful in hardware-accelerated video applications, where you use luminance-colour coordinates instead of RGB, and you're OK with having lower resolution/accuracy in the colour components as long as the luminance is at full resolution.

A 16-bit packed RGB format that is analogous to UYVY (U0Y0, V0Y1, U2Y2, etc.). It requires a pixel pair in order to properly represent the color value. The first pixel in the pair contains 8 bits of green (in the low 8 bits) and 8 bits of red (in the high 8 bits). The second pixel contains 8 bits of green (in the low 8 bits) and 8 bits of blue (in the high 8 bits). The two pixels share the red and blue components, while each has a unique green component (R0G0, B0G1, R2G2, etc.).
When looking up into a pixel shader, the texture sampler does not normalize the colors; they remain in the range of 0.0f to 255.0f.
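
In other words, the sampler is effectively doing something like this for you (my own sketch, following the byte order described in the quote):

    // One 32-bit R8G8_B8G8 word encodes two pixels: byte 0 = G0,
    // byte 1 = R (shared), byte 2 = G1, byte 3 = B (shared)
    struct Pixel { unsigned char r, g, b; };

    void UnpackR8G8B8G8(unsigned word, Pixel& p0, Pixel& p1)
    {
        unsigned char r = (word >> 8)  & 0xFF;   // shared red
        unsigned char b = (word >> 24) & 0xFF;   // shared blue
        p0 = { r, (unsigned char)(word & 0xFF),         b };  // pixel 0 keeps G0
        p1 = { r, (unsigned char)((word >> 16) & 0xFF), b };  // pixel 1 keeps G1
    }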



1. It specifies some weird rules for converting from floating point (which your pixel shader outputs) to integer. The rules for converting with XR_BIAS are specified here, and the normal conversion rules are listed here. I honestly have no idea what the XR_BIAS format is useful for.

Maybe something to do with TV-safe luminance ranges?

Edited by Hodgman
