Material parameters in PBR pipeline

20 comments, last by matt77hias 6 years, 6 months ago
29 minutes ago, Hodgman said:

Ugh, a headache. Still 0-1, but either 8, 10 or 12 bits per channel (and using different RGB wavelengths than normal displays... Requiring a colour rotation if you've generated normal content... And there's also a curve that needs to be applied, similar to sRGB but different).

However, each individual HDR display will map 1.0 to somewhere from 1k nits to 10k nits, which is painfully bright.

You want "traditional white" (aka "paper white") -- e.g. the intensity of a white unlit UI element/background -- to be displayed at about 300 nits, which might be ~0.3 on one HDR display, or 0.03 on another HDR display...

So, you need to query the TV to ask what its max nits level is, and then configure your tonemapper accordingly, so that it will result in "white" coming out at 300 nits and everything "brighter than white" going into the (300, 1000] to (300, 10000] range...

But... HDR displays are allowed to respond to the max-nits query with a value of 0, meaning "don't know", in which case you need to display a calibration UI to the user to empirically discover a suitable gamma adjustment (usually around 1 to 1.1), a suitable value for "paper white", and also what the saturation point / max nits is, so you can put an appropriate soft shoulder in your tonemapper...

I honestly expect a lot of HDR-TV compatible games that ship this year to do a pretty bad job of supporting all this.

This is not exactly true, and the 0.3 vs 0.03 part is plainly wrong; there is some guarantee here!

HDR-capable monitors and TVs do so according to the HDR10 standard. Under HDR10, the display buffer is at least 10 bits with PQ ST 2084 encoding, which maps a range of 0-10000 nits. If you want your paper white, just write the equivalent of 100 nits, and all TVs should be pretty close to it. The same image in the 0-300 nit range will look pretty close on all hardware; that is the point of HDR10. Then, as you reach more saturated colors and higher brightness, you enter the black-box enigma of the tone mapping each vendor implements.

While it is true that the max brightness is unknown (at least on console we are denied the value; DXGI can report it on PC, but that is to be taken with a pinch of salt), the low-brightness range should be close to what you are asking for, unless some larger bright area is pulling down the rest of the monitor (so as not to blow a fuse). What we do in our game is behave linearly up to 800-900 nits and add a soft shoulder, in order to retain colors/intent over the TV's tone map/clamp in the range it would not support well.

The problem with pre-HDR-era monitors and TVs is that they already push more than paper white, around 300-400 nits, and people are used to it; having Windows in HDR with a 100-nit white feels very dim (it is a stupid mistake of Microsoft not to have a brightness slider for people working in bright environments). But in a game you do not want paper white at 300-400 nits: you would lose two stops of dynamic range just to start with, which would be quite stupid, and your picture would no longer match what art has designed.


2 hours ago, galop1n said:

This is not exactly true, and the 0.3 vs 0.03 part is plainly wrong; there is some guarantee here!

HDR-capable monitors and TVs do so according to the HDR10 standard. Under HDR10, the display buffer is at least 10 bits with PQ ST 2084 encoding, which maps a range of 0-10000 nits. If you want your paper white, just write the equivalent of 100 nits, and all TVs should be pretty close to it. The same image in the 0-300 nit range will look pretty close on all hardware; that is the point of HDR10. Then, as you reach more saturated colors and higher brightness, you enter the black-box enigma of the tone mapping each vendor implements.

While it is true that the max brightness is unknown (at least on console we are denied the value; DXGI can report it on PC, but that is to be taken with a pinch of salt), the low-brightness range should be close to what you are asking for, unless some larger bright area is pulling down the rest of the monitor (so as not to blow a fuse). What we do in our game is behave linearly up to 800-900 nits and add a soft shoulder, in order to retain colors/intent over the TV's tone map/clamp in the range it would not support well.

The problem with pre-HDR-era monitors and TVs is that they already push more than paper white, around 300-400 nits, and people are used to it; having Windows in HDR with a 100-nit white feels very dim (it is a stupid mistake of Microsoft not to have a brightness slider for people working in bright environments). But in a game you do not want paper white at 300-400 nits: you would lose two stops of dynamic range just to start with, which would be quite stupid, and your picture would no longer match what art has designed.


This is the ideal; unfortunately IHVs, bastards that they are, don't necessarily adhere to any spec while advertising "HDR!" and accepting input as such anyway. The primary example I can think of is Samsung's CHG70 monitors, which don't formally follow the HDR10 spec AFAIK. Fortunately FreeSync 2 is available there, so it'll tonemap directly into the monitor's space. But it's an example that IHVs don't necessarily give a shit about technical specs or following them at all, especially when marketing gets their hands on a product (just look at Dell's absolute bullshit "HDR" monitor from earlier this year).

On 10/3/2017 at 1:05 AM, MJP said:

With a metallic workflow the specular reflectance is fixed for dielectrics (constant IOR), or for metals the specular reflectance is equal to the base color.

If I understand Disney's presentation and code correctly, the BRDF they use (diffuse and specular components) is something like this (after stripping it down a bit):

BRDF := F_diffuse  * BRDF_diffuse + F_specular * BRDF_specular

F_diffuse := (1-metallic) * lerp(1, FD90, Fresnel(n_dot_l)) * lerp(1, FD90, Fresnel(n_dot_v))

BRDF_diffuse := base_color/pi

F_specular := lerp(lerp(???, base_color, metallic), (1,1,1), Fresnel(n_dot_h))

BRDF_specular := (G * D) / (4 * n_dot_v * n_dot_l) // Fresnel(n_dot_h) moved to F_specular.

 

@Hodgman @MJP is the ??? factor the 0.3/0.4 you are talking about?

 

I am also still confused about their Fresnel calculation with Schlick's approximation: where has the reflectance gone? Schlick's approximation is defined as F(theta) := F0 + (1-F0)(1-cos(theta))^5. Disney writes (1-F(theta_l))*(1-F(theta_d)), although they rather seem to use (1-F(theta_l))*(1-F(theta_v)). If we expand the latter (while skipping the first F0 in Schlick's approximation???), we get: (1+(F0-1)(1-cos(theta_l))^5) * (1+(F0-1)(1-cos(theta_v))^5). After replacing F0 with FD90, we get F_diffuse above. My understanding is that they set 1 - lerp(a,b,m) to lerp(b,a,m), which is not correct? Or am I missing some approximations?

 

🧙

So instead of using the BRDF as available here, I would rather use the following modified format:


float3 CookTorranceBRDFxCos(float3 n, float3 l, float3 v, 
    float3 base_color, float roughness, float metalness) {
    
    const float  alpha   = sqr(roughness);
    const float  n_dot_l = sat_dot(n, l);
    const float  n_dot_v = sat_dot(n, v);
    const float3 h       = HalfDirection(l, v);
    const float  n_dot_h = sat_dot(n, h);
    const float  v_dot_h = sat_dot(v, h);

    const float  Fd90    = F_D90(v_dot_h, roughness);
    const float  FL      = BRDF_F_COMPONENT(n_dot_l, Fd90);
    const float  FV      = BRDF_F_COMPONENT(n_dot_v, Fd90);
    const float  F_diff  = (1.0f - metalness) * (1.0f - FL) * (1.0f - FV);

    const float3 c_spec  = lerp(g_dielectric_F0, base_color, metalness);
    const float3 F_spec  = BRDF_F_COMPONENT(v_dot_h, c_spec);
    const float  D       = BRDF_D_COMPONENT(n_dot_h, alpha);
    const float  V       = BRDF_V_COMPONENT(n_dot_v, n_dot_l, n_dot_h, v_dot_h, alpha);

    const float3 Fd      = F_diff * base_color * g_inv_pi;
    const float3 Fs      = F_spec * 0.25f * D * V;

    return (Fd + Fs) * n_dot_l;
}

Note that instead of calculating the BRDF, I calculate the BRDF multiplied by the cosine factor. Furthermore, I do not use an explicit Geometry (G) component in the microfacet model, but a Visibility (V) component instead. The V component is equal to the G component divided by the foreshortening terms (n_dot_l * n_dot_v), so the microfacet model reduces to F * D * V / 4.

F_D90 is equal to (as mentioned in the course):


float F_D90(float v_dot_h, float roughness) {
    return 0.5f + 2.0f * roughness * sqr(v_dot_h);
}

Any thoughts?

🧙

On 10/2/2017 at 8:42 AM, Hodgman said:

Also, specular colour and specular reflection coefficient are usually the same thing.

Just wanted to say, they're not.

While they produce similar results, coloured Fresnel / IOR tends to lack colour at the borders, unlike specular colour. It's a subtle difference.

5 hours ago, Matias Goldberg said:

Just wanted to say, they're not.

While they produce similar results, coloured Fresnel / IOR tends to lack colour at the borders, unlike specular colour. It's a subtle difference.

Oh, I plug "specular color" into fresnel as F0 to get white borders, but I've seen other PBR shaders that multiply the result with a "reflection coefficient" to basically dull the entire specular results. I guess this is like a specular occlusion mask?

31 minutes ago, Hodgman said:

Oh, I plug "specular color" into fresnel as F0 to get white borders, but I've seen other PBR shaders that multiply the result with a "reflection coefficient" to basically dull the entire specular results. I guess this is like a specular occlusion mask?

Some BRDFs have both a diffuse and a specular color. Some BRDFs seem to remove the specular color and just use a Fresnel component. Apparently, the specular color can be used to calculate this Fresnel component.

Actually, I have seen Cook-Torrance both with and without an explicit specular color.

🧙

Yeah, now that you mention it, I remember reading a paper on trying to recreate real-world measured data with Cook-Torrance, and their error numbers were smallest when they allowed a coloured F0 and a coloured multiplier over the entire specular result (instead of just one or the other).

I've never actually seen this model used in games though :o

4 hours ago, Hodgman said:

Oh, I plug "specular color" into fresnel as F0 to get white borders, but I've seen other PBR shaders that multiply the result with a "reflection coefficient" to basically dull the entire specular results. I guess this is like a specular occlusion mask?

Yes.

This was discussed on Filmic Worlds' website (but for some reason the blog post was removed, likely in a server migration). Fortunately the Web Archive remembers.

Also twitter discussion: https://twitter.com/olanom/status/444116562430029825

 

19 hours ago, matt77hias said:

So instead of using the BRDF as available here, I would rather use the following modified format:



float3 CookTorranceBRDFxCos(float3 n, float3 l, float3 v, 
    float3 base_color, float roughness, float metalness) {
    
    const float  alpha   = sqr(roughness);
    const float  n_dot_l = sat_dot(n, l);
    const float  n_dot_v = sat_dot(n, v);
    const float3 h       = HalfDirection(l, v);
    const float  n_dot_h = sat_dot(n, h);
    const float  v_dot_h = sat_dot(v, h);

    const float  Fd90    = F_D90(v_dot_h, roughness);
    const float  FL      = BRDF_F_COMPONENT(n_dot_l, Fd90);
    const float  FV      = BRDF_F_COMPONENT(n_dot_v, Fd90);
    const float  F_diff  = (1.0f - metalness) * (1.0f - FL) * (1.0f - FV);

    const float3 c_spec  = lerp(g_dielectric_F0, base_color, metalness);
    const float3 F_spec  = BRDF_F_COMPONENT(v_dot_h, c_spec);
    const float  D       = BRDF_D_COMPONENT(n_dot_h, alpha);
    const float  V       = BRDF_V_COMPONENT(n_dot_v, n_dot_l, n_dot_h, v_dot_h, alpha);

    const float3 Fd      = F_diff * base_color * g_inv_pi;
    const float3 Fs      = F_spec * 0.25f * D * V;

    return (Fd + Fs) * n_dot_l;
}

Note that instead of calculating the BRDF, I calculate the BRDF multiplied by the cosine factor. Furthermore, I do not use an explicit Geometry (G) component in the microfacet model, but a Visibility (V) component instead. The V component is equal to the G component divided by the foreshortening terms (n_dot_l * n_dot_v), so the microfacet model reduces to F * D * V / 4.

F_D90 is equal to (as mentioned in the course):



float F_D90(float v_dot_h, float roughness) {
    return 0.5f + 2.0f * roughness * sqr(v_dot_h);
}

Any thoughts?

I never check any of these dot products for equality with zero. I checked for NaNs or Infs by marking such pixels, but didn't find any in a couple of scenes.

The Fresnel effect at glancing angles sometimes feels a bit strange in, for instance, the Sponza scene: it looks as if the walls are wet from rain. (I use Disney's default roughness of 0.5f as my own default roughness if not specified.)

🧙

