PBR precision issue using GBuffer RGBA16F


Hi,

I'm using a GBuffer of type RGBA16F; normals and other information are stored directly, without encoding.

Here is the issue, made more visible because a low roughness is used: http://www.zupimages.net/up/15/47/racd.png

Is there a good way to avoid this issue?

Thanks

It looks like you're getting precision issues from the normals in your G-Buffer. Fixed point formats will work much better than floating point formats for normals, so you should use them if you can (16-bit SNORM formats are convenient for normals, since they store values in the [-1, 1] range). You can also get better precision with less storage by encoding the normals in a special format.
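
To make the fixed-point vs floating-point difference concrete, here's a minimal C++ sketch of the SNORM16 mapping (the function names are just for illustration -- the GPU does this conversion for you when you write to an RGBA16_SNORM render target):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Minimal illustration of 16-bit SNORM storage: [-1, 1] maps onto the
// integers [-32767, 32767], giving a uniform step of ~1/32767 everywhere.
int16_t encode_snorm16(float x)
{
    x = std::clamp(x, -1.0f, 1.0f);
    return static_cast<int16_t>(std::lround(x * 32767.0f));
}

float decode_snorm16(int16_t v)
{
    // -32768 is clamped so that both -32767 and -32768 decode to exactly -1.
    return std::max(static_cast<float>(v) / 32767.0f, -1.0f);
}
```

A half float, by contrast, only has an 11-bit significand, so around magnitude 1 its step is on the order of 1/1000 -- roughly 30x coarser than SNORM16 -- which is consistent with the banding in the screenshot.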

16F is overkill for almost every attribute.
8-bit sRGB is fine for colours, and 8-bit fixed point is fine for most other attributes -- surely your source textures are already in these 8-bit encodings?

8-bit fixed point is enough for normals if you use the best-fit lookup table encoding: http://sebh-blog.blogspot.com.au/2010/08/cryteks-best-fit-normals.html
Otherwise, 10-bit fixed point is good, and 16-bit fixed point should be more than enough for smooth normals.

Octahedral encoding is a favorite of mine at the moment, and I found that dithering the normals (adding noise before quantizing) actually goes a long way toward fighting these banding artifacts.
I'm trying to store the normal and the tangent in the gbuffer (which favors quaternion-based encodings), but here's a dump of my experiments in finding a decent quality level: http://pastebin.com/6FTSPmJ3

Here's a GIF of the 8-bit encodings I was testing -- the Crytek BFN function looks almost identical to the true (unquantized) normal: http://i.imgur.com/HO1fS5Z.gif
The function names shown in the GIF are for compressing a full normal+tangent pair, so:
TBN_4x8_2 is actually 2x 8-bit channels for the normal, plus a sign bit.
TBN_6x8 is actually 3x 8-bit channels for the normal, no special encoding.
TBN_6x8BFN is actually 3x 8-bit channels for the normal, using the Crytek BFN 1024x1024 LUT for encoding.
TBN_6x8Noise is actually 3x 8-bit channels for the normal, scaling by a random number in [0.8, 1] before quantizing to 8 bits -- a crude attempt to exploit the extra "BFN" precision through chance.
TBN_Octahedron4x8 is actually 2x 8-bit channels for the normal, using the octahedral encoding.
On the larger side, any 32-bit normal encoding looks great -- e.g. 2x 16-bit octahedral normals are perfect.
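
For anyone wanting to try it, here's a minimal C++ sketch of the octahedral mapping being discussed -- essentially the "oct" encoding from the JCGT paper linked further down the thread. In practice this would live in your gbuffer shaders, and the two encoded values would be quantized to 8- or 16-bit channels (the struct and function names here are just for illustration):

```cpp
#include <cmath>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

static float sign_not_zero(float v) { return v >= 0.0f ? 1.0f : -1.0f; }

// Encode a unit vector into two values in [-1, 1] (quantize these for storage).
Vec2 oct_encode(Vec3 n)
{
    const float inv_l1 = 1.0f / (std::fabs(n.x) + std::fabs(n.y) + std::fabs(n.z));
    float px = n.x * inv_l1;
    float py = n.y * inv_l1;
    if (n.z < 0.0f)  // fold the lower hemisphere over the diagonals
    {
        const float fx = (1.0f - std::fabs(py)) * sign_not_zero(px);
        const float fy = (1.0f - std::fabs(px)) * sign_not_zero(py);
        px = fx;
        py = fy;
    }
    return { px, py };
}

Vec3 oct_decode(Vec2 e)
{
    Vec3 n = { e.x, e.y, 1.0f - std::fabs(e.x) - std::fabs(e.y) };
    if (n.z < 0.0f)  // undo the fold
    {
        const float fx = (1.0f - std::fabs(n.y)) * sign_not_zero(n.x);
        const float fy = (1.0f - std::fabs(n.x)) * sign_not_zero(n.y);
        n.x = fx;
        n.y = fy;
    }
    const float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return { n.x / len, n.y / len, n.z / len };
}
```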

The RGBA16_SNORM method works well: http://zupimages.net/up/15/47/ftzh.png

Normals are stored in RGB; the alpha is used as a decal mask.

Octahedral encoding is a favorite of mine at the moment, and I found that dithering the normals (adding noise before quantizing) actually goes a long way toward fighting these banding artifacts.

Interesting! It would be nice to test that too.
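
A minimal sketch of what that dithering might look like, assuming the usual add-noise-then-quantize approach (the function name and parameters are made up for illustration; in a shader you'd replace the RNG with a cheap per-pixel hash or a blue-noise texture, and the input would be a normal component remapped from [-1, 1] to [0, 1]):

```cpp
#include <algorithm>
#include <cmath>
#include <random>

// Dithered quantization: add noise on the order of one quantization step
// before rounding, so smooth gradients turn into fine grain instead of
// visible bands.
float quantize_dithered(float x01, int bits, std::mt19937& rng)
{
    const float levels = static_cast<float>((1 << bits) - 1);
    std::uniform_real_distribution<float> noise(-0.5f, 0.5f);
    float q = std::round(x01 * levels + noise(rng));
    q = std::min(std::max(q, 0.0f), levels);
    return q / levels;  // back to [0, 1], now quantized to 'bits' bits
}
```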

Out of curiosity, how come no one suggests using spherical coordinates for G-buffer normals?


Out of curiosity, how come no one suggests using spherical coordinates for G-buffer normals?

If you click MJP's link, and then the links on that page, you end up here: http://aras-p.info/texts/CompactNormalStorage.html#method03spherical
To go full circle, there's a link on that page pointing back to MJP's blog.
Short answer is the transformation between Cartesian and spherical is slow (relative to other transforms), and the quality isn't as good as other approaches either. It's definitely a feasible approach though, and I wouldn't be surprised if there are games that have shipped using that technique.
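
For comparison with the octahedral sketch above, this is roughly what the spherical option looks like (again just an illustrative C++ sketch, not anyone's shipping code):

```cpp
#include <cmath>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

// Spherical encoding: two angles per unit normal.  Note the transcendentals
// on both ends (atan2/acos to encode, sin/cos to decode), which is the main
// cost compared to the octahedral mapping.  The angles would still need
// remapping to [0, 1] before being quantized for storage.
Vec2 spherical_encode(Vec3 n)  // n assumed to be unit length
{
    return { std::atan2(n.y, n.x), std::acos(n.z) };  // (phi, theta)
}

Vec3 spherical_decode(Vec2 a)
{
    const float s = std::sin(a.y);
    return { std::cos(a.x) * s, std::sin(a.x) * s, std::cos(a.y) };
}
```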

Here's a comparison between spherical and octahedral: https://knarkowicz.wordpress.com/2014/04/16/octahedron-normal-vector-encoding/

Note that if you want good quality normals, you shouldn't use 8-bit channels like in those examples, though.

As mentioned at the end of that link, this is a good paper to read: http://jcgt.org/published/0003/02/01/


If you click MJP's link, and then the links on that page, you end up here: http://aras-p.info/texts/CompactNormalStorage.html#method03spherical
To go full circle, there's a link on that page pointing back to MJP's blog

I don't know how to read Hodgman, haven't you realized this yet?


Short answer is the transformation between Cartesian and spherical is slow (relative to other transforms)

But is it faster than the (IIRC) cube map lookup for Crytek's BFNs?


and the quality isn't as good as other approaches either.

Why? I would think spherical, using the same number of bits as a competing method, would be superior.

edit - also


8-bit sRGB is fine for colours

What about HDR?


Why? I would think spherical, using the same number of bits as a competing method, would be superior.

8-bit-per-channel images from the knarkowicz link, spherical and octahedral:
[Images from that post: spherical1.jpg and octahedron1.jpg]
Spherical bunches up precision at certain parts of the sphere. E.g. look closely at the top of both of them -- the first has circular banding, which is very obvious in some cases. Octahedral more evenly distributes its precision across the entire sphere, so you get the same quality in all directions (instead of some directions being great and some being shite).
At 16 bits per channel, you can use almost any encoding you like and get good results, so just pick a cheap one.
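
To put a rough number on that "bunching up", here's a back-of-the-envelope sketch: with phi and theta each quantized to 8 bits (the bit counts are just an example), the solid angle covered by one quantization cell varies enormously across the sphere, so some directions get far finer precision than others:

```cpp
#include <cmath>
#include <cstdio>

// With phi in [0, 2*pi) on 8 bits and theta in [0, pi] on 8 bits, the solid
// angle covered by one (phi, theta) quantization cell is approximately
// dPhi * sin(theta) * dTheta -- tiny near the poles, largest near the
// equator, i.e. the precision is not spread evenly over the sphere.
int main()
{
    const float pi      = 3.14159265f;
    const float d_phi   = 2.0f * pi / 256.0f;
    const float d_theta = pi / 256.0f;

    const float thetas[] = { 0.02f, 0.5f, 1.0f, 1.55f };
    for (float theta : thetas)
    {
        const float cell = d_phi * std::sin(theta) * d_theta;
        std::printf("theta = %.2f rad -> cell covers ~%.3g steradians\n",
                    theta, cell);
    }
}
```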

But is it faster than the (IIRC) cube map lookup for Crytek's BFNs?

BFN has a super-cheap decode function, but a moderate encode function (which yep, involves a texture fetch -- I'm currently using this with a 1024*1024 2D lookup texture). The relative costs will depend on the GPU model, and what else your gbuffer shaders are doing at the time...

What about HDR?

You don't generally put HDR colours into a gbuffer.
11_11_10 / 10_10_10 fixed point in linear colour space is about equivalent to 8_8_8 in sRGB space, so IMHO it's not usable for HDR, unless you've got a very small range and are OK with banding artifacts. I've shipped one game where we used 10_10_10 fixed point with a pow2 gamma space to minimize banding, but it wasn't good.
11_11_10 / 10_10_10 floating point is just barely enough to get HDR working, maybe (if your API/GPU supports it).
16_16_16 fixed or floating point is good. Floating point wastes a bit on storing the sign, plus you have to deal with NaNs and infinities, but it gives you a range from 0 to ~60k with logarithmic precision distribution, which is actually good for HDR data (as your tone-mapper probably has logarithmic weighting). Fixed point means you get that extra bit and don't have to worry about inf/NaN, and you get to choose your own maximum scale value (which is a pro and a con).
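
To make that 16F vs 16-bit fixed point trade-off a bit more concrete, here's a tiny sketch comparing the quantization step of a half float at a few magnitudes against a 16-bit UNORM channel with an arbitrarily chosen maximum (the 8.0 scale is an assumption for illustration, not anything from this thread):

```cpp
#include <cmath>
#include <cstdio>

// Rough illustration only (normalized half-float range, denormals ignored).
// A half float has an 11-bit significand, so the spacing between adjacent
// representable values grows with magnitude; 16-bit fixed point has one
// constant step over whatever maximum you choose.
static float half_step_at(float v)
{
    const int e = static_cast<int>(std::floor(std::log2(v)));
    return std::ldexp(1.0f, e - 10);  // 2^(exponent - 10)
}

int main()
{
    const float fixed_max  = 8.0f;                 // example chosen scale
    const float fixed_step = fixed_max / 65535.0f; // constant everywhere

    const float values[] = { 0.01f, 1.0f, 100.0f, 10000.0f };
    for (float v : values)
        std::printf("value %8.2f : half step %.3g, fixed16 step %.3g\n",
                    v, half_step_at(v), fixed_step);
}
```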


You can do 11/11/10 or 10/10/10/2 for less "important" things that need HDR, like cube maps and water reflections -- e.g. stuff where you can just hope banding won't be noticeable anyway.


You don't generally put HDR colours into a gbuffer.

Why not?

edit2 - because I remember reading that in a "true" HDR pipeline even the textures are in an HDR format.

edit - also, why would spherical encoding have an uneven distribution? I'd think by definition it would be evenly distributed...?

