G-Buffer and Render Target format for Normals

3 comments, last by Matias Goldberg 8 years, 3 months ago

I've created a texture for the normals render target in my deferred renderer:


D3D11_TEXTURE2D_DESC dtd{ width, height,
                          1, 1,                        // MipLevels, ArraySize
                          DXGI_FORMAT_R11G11B10_FLOAT,
                          1, 0,                        // SampleDesc: Count, Quality (no MSAA)
                          D3D11_USAGE_DEFAULT,
                          D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE,
                          0, 0 };                      // CPUAccessFlags, MiscFlags

_check_hr(device->CreateTexture2D(&dtd, nullptr, &m_rtNormal));

It seems that the normals have some artifacts:

[attachment=30093:Normals.png]

For now, the other render targets use the 32-bit DXGI_FORMAT_R8G8B8A8_UNORM format.

I can only think of three ways to improve the normals:
1) Use 1.5 RTs for normals: DXGI_FORMAT_R32G32_FLOAT or DXGI_FORMAT_R16G16_FLOAT.
X and Y would go in one RT, and Z in the other.

2) Use 64-bit render targets and store the normals as 16-bit floats: DXGI_FORMAT_R16G16B16A16_FLOAT.

3) Use some math to reconstruct the Z value from the X and Y components. I would like to avoid this approach.

So here are the questions:
Are there other, better solutions for floating-point normals with more than the 11/10 bits per component I have now?

In cases 1 and 2, the second RT would have an unused 16- or 32-bit float channel.
Is there a way to reinterpret it as a UINT for my own use? How can that be done?
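(Aside: on the shader side, HLSL's asuint() / asfloat() intrinsics reinterpret the raw bits of a 32-bit value without any numeric conversion, which is the usual way to stash integer data in a spare float channel. Below is a rough CPU-side sketch of the same idea; the helper names are made up for illustration. Note that this only round-trips reliably through a full 32-bit FLOAT channel: with R16G16_FLOAT the value would be converted to half precision on write and the bits lost, and a _UINT or _TYPELESS format is generally the safer home for integer data.)

#include <cstdint>
#include <cstring>

// Hypothetical helpers: reinterpret the raw bits of a 32-bit value so a UINT
// can ride along in an otherwise unused 32-bit float channel.
inline float PackUintAsFloat(std::uint32_t bits)
{
    float f;
    std::memcpy(&f, &bits, sizeof(f)); // bit copy, no numeric conversion
    return f;
}

inline std::uint32_t UnpackUintFromFloat(float f)
{
    std::uint32_t bits;
    std::memcpy(&bits, &f, sizeof(bits));
    return bits;
}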

Thanks in advance.


Check out this thread:

http://www.gamedev.net/topic/673308-pbr-precision-issue-using-gbuffer-rgba16f/

-potential energy is easily made kinetic-

DXGI_FORMAT_R11G11B10_FLOAT

Three partial-precision floating-point numbers encoded into a single 32-bit value (a variant of s10e5, which is sign bit, 10-bit mantissa, and 5-bit biased (15) exponent). There are no sign bits, and there is a 5-bit biased (15) exponent for each channel, a 6-bit mantissa for R and G, and a 5-bit mantissa for B.

First, there is no sign bit, so I suppose negative values either become positive or get clamped to 0. You definitely don't want that.
Second, normals are in the [-1; 1] range. You will get much better precision from DXGI_FORMAT_R10G10B10A2_UNORM, which gives you 9 bits for the value and 1 bit for the sign, versus this float format, which only has 6 bits of mantissa (5 for blue) and 5 bits of exponent per channel.

Looks like you made a poor choice of format.

3) Use some math to reconstruct the Z value from the X and Y components. I would like to avoid this approach.

Why? GPUs have plenty of ALU to spare but bandwidth is precious.
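(For reference, the reconstruction being referred to usually amounts to the minimal sketch below, written in C++ just to show the math; in the G-buffer read path it would be the same few lines of shader code. It assumes the stored normal was unit length and that the sign of Z is known or stored in a spare bit; spheremap or octahedral encodings sidestep the sign issue entirely.)

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

// Rebuild the Z component of a unit normal from its stored X and Y.
// Assumes |n| == 1 when the normal was written, and that zSign (+1 or -1)
// is known or was stored separately.
inline Vec3 ReconstructNormal(float x, float y, float zSign)
{
    float z2 = 1.0f - x * x - y * y;
    float z  = std::sqrt(std::max(z2, 0.0f)) * zSign; // max() guards against rounding error
    return { x, y, z };
}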

Btw, there's also Crytek's best fit normals, which get impressive quality results on just RGB888 RTs.


Thank you, Matias!

DXGI_FORMAT_R10G10B10A2_UNORM removed all the artifacts.


I'm glad it worked for you. Just remember that UNORM stores values in the [0; 1] range, so you need to convert your [-1; 1] normals to [0; 1] by hand with rtt = normal * 0.5f + 0.5f (and do the opposite when reading).
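(A minimal sketch of that remap, in plain C++ just to show the arithmetic; in practice it is one line in the shader that writes the G-buffer and one line in the shader that reads it.)

struct Vec3 { float x, y, z; };

// Write path: map a unit normal from [-1, 1] into the [0, 1] range the UNORM target stores.
inline Vec3 EncodeNormal(Vec3 n)
{
    return { n.x * 0.5f + 0.5f, n.y * 0.5f + 0.5f, n.z * 0.5f + 0.5f };
}

// Read path: undo the mapping when sampling the G-buffer.
// (Renormalizing afterwards compensates for the 10-bit quantization.)
inline Vec3 DecodeNormal(Vec3 stored)
{
    return { stored.x * 2.0f - 1.0f, stored.y * 2.0f - 1.0f, stored.z * 2.0f - 1.0f };
}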

