PBR precision issue using GBuffer RGBA16F

15 comments, last by JohnnyCode 8 years, 4 months ago


You don't generally put HDR colours into a gbuffer.

Why not?

edit2- because I remember reading that in a 'true' HDR pipeline even the textures are in an HDR format.

Anything to do with lighting, such as a cubemap or anything else you are doing for direct/indirect lighting, should in some way take HDR into account. But textures, or rather albedo, don't, as they only depend on the colorspace you're working in. Which right now is sRGB, though hypothetically one could prepare for DCI/Rec. 2020 if one wanted to.

Regardless, HDR is about lighting your scene in a higher range than your colorspace/spec can go, then tonemapping/etc. back down into the output range. Theoretically, if we were working in some crazy future colorspace/spec where we had an output of a hundred thousand nits instead of the sRGB output of 100 nits, we wouldn't need "HDR", because the output spec would be enough in and of itself to display whatever range of brightnesses we wanted.
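For concreteness, here is a minimal C++ sketch of that last step, using a simple Reinhard-style operator as one common choice; the exposure constant and test values are purely illustrative, not anything from the thread.

#include <cstdio>

// Compress an unbounded scene-referred value back into the [0,1) display range.
// Reinhard is only one of many tonemapping curves; the exposure here is arbitrary.
float tonemapReinhard(float hdr, float exposure)
{
    float x = hdr * exposure;   // scene-referred, may be far above 1.0
    return x / (1.0f + x);      // always lands in [0, 1)
}

int main()
{
    float samples[] = { 0.1f, 1.0f, 10.0f, 1000.0f };
    for (float hdr : samples)
        std::printf("HDR %8.1f -> LDR %.3f\n", hdr, tonemapReinhard(hdr, 1.0f));
    return 0;
}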


Why not?
edit2- because I remember reading that in a 'true' HDR pipeline even the textures are in an HDR format.

When dealing with light energy, you need HDR formats because energy is unbounded. Emissive maps, light maps, light accumulation buffers, environment maps, etc. all store light energy values from zero to infinity, measured at the red/green/blue wavelengths.
Albedo maps and specular maps store reflection coefficients that are bounded from 0.0 to 1.0. There's no need to use "HDR formats" for those - sRGB 8bit is ideal.
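A small self-contained C++ sketch of that distinction: a reflection coefficient round-trips through an 8-bit sRGB channel almost losslessly, while a light-energy value above 1.0 is simply clamped if forced into the same format. The test values here are arbitrary, chosen only for illustration.

#include <algorithm>
#include <cmath>
#include <cstdio>

// Standard sRGB transfer functions (scalar form).
float linearToSrgb(float x) { return x <= 0.0031308f ? 12.92f * x : 1.055f * std::pow(x, 1.0f / 2.4f) - 0.055f; }
float srgbToLinear(float x) { return x <= 0.04045f ? x / 12.92f : std::pow((x + 0.055f) / 1.055f, 2.4f); }

// Round-trip a linear value through an 8-bit sRGB channel.
float roundTrip8bitSrgb(float linear)
{
    float clamped = std::clamp(linear, 0.0f, 1.0f);               // the format cannot hold > 1.0
    int   stored  = (int)std::round(linearToSrgb(clamped) * 255.0f);
    return srgbToLinear(stored / 255.0f);
}

int main()
{
    std::printf("albedo   0.18 -> %.4f (survives fine)\n", roundTrip8bitSrgb(0.18f));
    std::printf("radiance 50.0 -> %.4f (clamped away)\n",  roundTrip8bitSrgb(50.0f));
    return 0;
}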

edit - also, why would spherical encoding have an uneven distribution? I'd think by definition it would be evenly distributed...?

Picture the Earth with lat/lon lines on it. At any latitude, there's an east/west line circling the globe. At the equator, those circles are large, but as you approach the poles, they get smaller. At the pole itself the circle becomes a point, meaning your longitude coordinate is useless there!
If those are your two coordinates, then the amount of distance covered by the longitude coordinate depends on the latitude coordinate. You can see this circular banding at the poles in the image.
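Here is a tiny C++ sketch of that effect: decode an 8-bit latitude/longitude pair and measure how far one longitude step moves the normal at the equator versus near the pole. The particular byte values are arbitrary and only serve to illustrate the point.

#include <cmath>
#include <cstdio>

const float PI = 3.14159265f;

// Decode an 8-bit latitude/longitude pair back into a unit normal.
void decodeLatLon(unsigned char latByte, unsigned char lonByte, float n[3])
{
    float lat = latByte / 255.0f * PI;                 // 0..pi
    float lon = lonByte / 255.0f * 2.0f * PI - PI;     // -pi..pi
    n[0] = std::sin(lat) * std::cos(lon);
    n[1] = std::sin(lat) * std::sin(lon);
    n[2] = std::cos(lat);
}

// Angle in degrees between two unit vectors.
float angleDeg(const float a[3], const float b[3])
{
    float d = a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    return std::acos(std::fmin(d, 1.0f)) * 180.0f / PI;
}

int main()
{
    float a[3], b[3];

    decodeLatLon(128, 100, a);   // near the equator
    decodeLatLon(128, 101, b);
    std::printf("1 longitude step at the equator: %.4f degrees\n", angleDeg(a, b));

    decodeLatLon(2, 100, a);     // near the pole
    decodeLatLon(2, 101, b);
    std::printf("1 longitude step near the pole : %.4f degrees\n", angleDeg(a, b));
    return 0;
}

One longitude step covers roughly a degree and a half at the equator but only a few hundredths of a degree near the pole, so precision is wasted at the poles while the equator stays comparatively coarse.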


I'm using a GBuffer of type RGBA16F; normals and other information are stored directly without encoding.

A single byte per component is perfectly sufficient for normals; you just need to not approach it as a 1-byte floating-point conversion. Use the raw byte and simply rescale from [0, 255] to [-1.0, 1.0], or use a fixed-point format if you are sure it performs well.
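A minimal C++ sketch of that mapping (the conventional n * 0.5 + 0.5 packing); exact rounding conventions vary between APIs and engines, so treat the details as illustrative.

#include <cmath>
#include <cstdio>

// Pack one normal component from [-1, 1] into a raw byte.
unsigned char packComponent(float n)
{
    return (unsigned char)std::round((n * 0.5f + 0.5f) * 255.0f);
}

// Unpack the raw byte back to [-1, 1] with a straight rescale, not a float conversion.
float unpackComponent(unsigned char b)
{
    return b / 255.0f * 2.0f - 1.0f;
}

int main()
{
    float n = 0.7071f;
    unsigned char b = packComponent(n);
    std::printf("%.4f -> %d -> %.4f\n", n, (int)b, unpackComponent(b));
    return 0;
}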

But 16-bit precision for the depth/position of a pixel may not be enough; it is still less than the real depth, which will be at least 24-bit, and that can lead to random artifacts when moving. I have set up my G-buffer as a multiple render target, such as 4x8-bit for color, 4x8-bit for normals and a specular sample, and 1x32F for position.

A single byte per component is perfectly sufficient for normals; you just need to not approach it as a 1-byte floating-point conversion. Use the raw byte and simply rescale from [0, 255] to [-1.0, 1.0], or use a fixed-point format if you are sure it performs well.

Check out the gif I posted earlier -- all 8-bit-per-component encodings have really obvious banding for mirror-like reflections, except for the Crytek BFN encoding.

I have set up my G-buffer as a multiple render target, such as 4x8-bit for color, 4x8-bit for normals and a specular sample, and 1x32F for position.

You can use a 32F depth-stencil target directly, instead of doubling up by also storing depth in the gbuffer.
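For reference, a minimal C++ sketch of why the extra position target is redundant: the view-space distance can be recovered from the stored depth value. This assumes a standard (non-reversed) perspective projection writing depth into [0, 1]; the near/far values are only examples.

#include <cstdio>

// Convert a non-linear [0,1] depth-buffer value back to a view-space distance.
float linearizeDepth(float d, float zNear, float zFar)
{
    return zNear * zFar / (zFar - d * (zFar - zNear));
}

int main()
{
    const float zNear = 0.1f, zFar = 1000.0f;
    float samples[] = { 0.0f, 0.5f, 0.9f, 0.99f, 1.0f };
    for (float d : samples)
        std::printf("stored depth %.2f -> view-space z %.3f\n", d, linearizeDepth(d, zNear, zFar));
    return 0;
}

The full view-space position then follows from scaling a per-pixel view ray by this value, so nothing beyond the depth-stencil target needs to be stored.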


Check out the gif I posted earlier -- all 8-bit-per-component encodings have really obvious banding for mirror-like reflections, except for the Crytek BFN encoding.

Hmm, but it seems to me that the banding doesn't have much to do with the light factor (is it a reflection environment?); it could be some special condition: shader, large mapping, filtering, etc. all together.

In my experience, a byte per channel is perfectly satisfactory for shading light (not only tangent-space normals), indistinguishable from more precise values even at large zooms. In the end I even have the source normals defined like that in the normal maps themselves.

You can use a 32F depth-stencil target directly, instead of doubling up by also storing depth in the gbuffer.

I attached another target instead; in my case, using the depth/stencil as an input to shaders at some stages would underperform.

Hmm, but it seems to me that the banding doesn't have much to do with the light factor (is it a reflection environment?); it could be some special condition: shader, large mapping, filtering, etc. all together.
In my experience, a byte per channel is perfectly satisfactory for shading light (not only tangent-space normals), indistinguishable from more precise values even at large zooms. In the end I even have the source normals defined like that in the normal maps themselves.


It's not really a special condition; the precision is just plain bad. The paper that Hodgman linked explains it quite well: for normals, you really only care about the set of values that exist on the surface of a unit sphere. But if you're storing the normal XYZ values directly in fixed point, then your encoding actually covers the entire set of values that lie inside a unit cube. So essentially a huge range of values is totally useless, thus limiting the effective precision of your normal storage.
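A brute-force C++ sketch of that argument: count how many of the 256^3 possible byte triplets decode to a vector whose length is within 1% of unit length. The tolerance is arbitrary; the point is that the usable fraction comes out to only a few percent.

#include <cmath>
#include <cstdio>

int main()
{
    long long usable = 0;
    for (int x = 0; x < 256; ++x)
    for (int y = 0; y < 256; ++y)
    for (int z = 0; z < 256; ++z)
    {
        // Decode the byte triplet exactly as a fixed-point XYZ normal would be.
        float nx = x / 255.0f * 2.0f - 1.0f;
        float ny = y / 255.0f * 2.0f - 1.0f;
        float nz = z / 255.0f * 2.0f - 1.0f;
        float len = std::sqrt(nx * nx + ny * ny + nz * nz);
        if (std::fabs(len - 1.0f) < 0.01f)   // "close enough" to the unit sphere
            ++usable;
    }
    const long long total = 256LL * 256LL * 256LL;
    std::printf("%lld of %lld codes lie near the unit sphere (%.2f%%)\n",
                usable, total, 100.0 * usable / total);
    return 0;
}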


So essentially a huge range of values is totally useless, thus limiting the effective precision of your normal storage.

Yes, the cube/sphere precision distribution when a linear scale is used. I stuck with the straight linear scale though; quality and definition are great, and a spherical encoding is not that computationally cheap anyway.

