I’ve created a texture for the normals render target in my deferred renderer:
D3D11_TEXTURE2D_DESC dtd {width, height, 1, 1,   // MipLevels = 1, ArraySize = 1
                          DXGI_FORMAT_R11G11B10_FLOAT,
                          {1, 0},                // SampleDesc: no MSAA
                          D3D11_USAGE_DEFAULT,
                          D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE,
                          0, 0};                 // CPUAccessFlags, MiscFlags
_check_hr(device->CreateTexture2D(&dtd, nullptr, &m_rtNormal));
It seems that the normals have some artifacts:
[attachment=30093:Normals.png]
The other render targets currently use the 32-bit format DXGI_FORMAT_R8G8B8A8_UNORM.
I can only think of three ways to improve the normals:
1) Use 1.5 RTs for the normals: DXGI_FORMAT_R32G32_FLOAT or DXGI_FORMAT_R16G16_FLOAT.
X and Y would go in one RT, and Z in the other.
2) Use 64-bit render targets and store the normals as 16-bit floats: DXGI_FORMAT_R16G16B16A16_FLOAT.
3) Use some math to reconstruct the Z value from X and Y (see the sketch after this list). But I want to avoid this approach.
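For reference, option 3 relies on the unit-length constraint x^2 + y^2 + z^2 = 1, so |z| = sqrt(1 - x^2 - y^2), and the sign of Z has to be recovered some other way. A minimal HLSL sketch of what I mean, assuming the XY pair was stored in a hypothetical two-channel G-buffer texture gNormalXY:

// Option 3 sketch: reconstruct Z from X and Y via the unit-length
// constraint. gNormalXY is a hypothetical R16G16_FLOAT G-buffer texture.
Texture2D<float2> gNormalXY : register(t0);

float3 ReconstructNormal(uint2 pixel)
{
    float2 xy = gNormalXY[pixel];
    // saturate() guards against tiny negative values from precision loss;
    // the sign of z is lost and must be assumed (e.g. view-space z >= 0).
    float z = sqrt(saturate(1.0 - dot(xy, xy)));
    return float3(xy, z);
}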
So here are the questions:
Are there other, better solutions for storing floating-point normals with more precision than the 11/11/10 bits per channel I have now?
In cases 1 and 2 the second RT would have an unused 16- or 32-bit float channel.
Is there a way to reinterpret that channel as a UINT for my own purposes? How can that be done (see the sketch below for what I have in mind)?
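What I have in mind is something like HLSL's asuint/asfloat bit casts, which reinterpret the raw bits without any numeric conversion. A rough sketch, assuming the spare channel is the second component of an R32G32_FLOAT target (I'm not sure the bit pattern always survives a float render target untouched, e.g. for NaN patterns):

// Writing, in the G-buffer pixel shader: smuggle a uint payload
// through the unused float channel via a bit cast.
float2 PackSpare(float x, uint payload)
{
    return float2(x, asfloat(payload)); // reinterprets bits, no conversion
}

// Reading, in the lighting pass:
uint UnpackSpare(float2 gbufferSample)
{
    return asuint(gbufferSample.y);
}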
Thanks in advance.