Sampling Normal from GBuffer


Using Direct3D 11.

I'm doing a full-screen pass for my directional sun light (with the sky background stenciled out). Since this is a full-screen pass, I should be able to sample the gbuffers with point sampling and grab the normal vector. My normal gbuffer is in UNORM format, so I do the usual decompression back to [-1, 1].

I am getting specular light aliasing artifacts. If I normalize the vector they go away. However, I already normalize the normal vector before writing it during the gbuffer pass, so why do I need to normalize it again after decompression? Is it because the accuracy lost to 8-bit precision made it non-unit length?
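For reference, a minimal HLSL sketch of the decode path being described; the function, texture, and sampler names are illustrative, not taken from the actual shader:

float3 SampleGBufferNormal(Texture2D normalGB, SamplerState pointSampler, float2 uv)
{
    // After the 8-bit UNORM round trip each channel has only 256 possible values,
    // so the reconstructed vector is generally slightly off unit length.
    float3 n = normalGB.SampleLevel(pointSampler, uv, 0.0f).xyz; // stored in [0, 1]
    n = 2.0f * n - 1.0f;                                         // back to [-1, 1]
    return normalize(n);                                         // undo the quantization drift
}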

The lack of precision shouldn't take it too far away from unit length. How far out does PIX tell you the values being read back are?

There's no harm in testing with a floating-point format to rule out precision issues. There's also the option of DXGI_FORMAT_R10G10B10A2_UNORM, which will give you a couple of extra bits of precision without eating more memory, assuming you don't need the alpha channel.

You should also double-check that you're reading back from the correct pixel of the gbuffer, and that texture filtering is set to point.

Note that you don't need the half-pixel offset that you would need under D3D9. See http://msdn.microsof...v=vs.85%29.aspx
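If the gbuffer is read in a full-screen pass, one way to sidestep filtering and UV questions entirely is to fetch the texel with Load, using the pixel coordinate from SV_Position. A sketch with assumed resource names:

Texture2D NormalMap : register(t0);

// svPos is the SV_Position input of the pixel shader.
float3 LoadGBufferNormal(float4 svPos)
{
    int3 texel = int3(svPos.xy, 0);                  // SV_Position.xy already sits at pixel centers
    return NormalMap.Load(texel).xyz * 2.0f - 1.0f;  // UNORM [0, 1] -> [-1, 1]
}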


Is it because the accuracy lost to 8-bit precision made it non-unit length?


Can you show your compression/decompression code? 8-bit accuracy may not be enough for storing normals (check the Crysis 3 implementation if you want to stick with an 8-bit buffer). The inaccuracy of the normals will show up especially with specular highlights.

Check http://aras-p.info/t...malStorage.html for different techniques for storing normals.

First I used "stereographic projection" with RGB10A2-bit buffer, but it had artifacts. With 16-bit floating point buffer the quality is excellent.

Cheers!

Compression:

[size="2"][color="#008000"][size="2"][color="#008000"]// Compress to UNORM.
[size="2"]pixel.Normal = float4(0.5f*normalV + 0.5f, 0.0f);

[size="2"]Decompression
[size="2"][size="2"]float3 normalV = 2.0f*NormalMap.SampleLevel(samPoint, screenUV, 0.0f).xyz - 1.0f;

[size="2"]Using DXGI_FORMAT_R16G16B16A16_UNORM fixes the problem. I also tried
[size="2"][size="2"]DXGI_FORMAT_R10G10B10A2_UNORM and I could probably get away with it. However, since I'm storing view space normals I'm tempted to just use R16G16 floating-point.

Storing normals the way you showed in an 8-bit target will give you artifacts, as you have noticed. Check the link I provided; it compares different ways of storing normals. With view-space normals you can store them in two components instead of three to save some memory.
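A minimal sketch of that two-component idea for a floating-point target such as R16G16; the sign convention and helper names are assumptions rather than anything from this thread. Note that under perspective the true view-space z can take the opposite sign at grazing angles, which is one reason the linked page compares several alternatives:

float2 EncodeViewNormalXY(float3 n)
{
    return n.xy;                                        // z is implied by unit length
}

float3 DecodeViewNormalXY(float2 enc)
{
    // Assumes visible normals point toward the camera (negative z in this convention);
    // flip the sign if your view space uses the opposite handedness.
    float z = -sqrt(saturate(1.0f - dot(enc, enc)));
    return float3(enc, z);
}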

Cheers!
