Behavior of energy conserving BRDF


I was implementing the normalization factor in my Blinn-Phong BRDF, but I'm sceptical of its results.
Using the plain BRDF without the normalization factor, the result on a flat cube surface looks like this:
http://cl.ly/0j1z1w1W451i2Z0B0x2K

Now, using the NormalizationFactor, it becomes this:
http://cl.ly/0J2A1Q2I1G1y2J3B2d3a

Using this code:

// Calculate the normalization factor for the energy-conserving BRDF
float NormalizationFactor = (material.SpecularPower + 8) / (8 * PI);

// Normalized specular term (despite the name, NH holds the whole
// normalized pow(N.H, SpecularPower) term, not just the dot product)
float NH = NormalizationFactor * pow(saturate(dot(N, H)), material.SpecularPower);


So my specular reflection has gotten quite a lot bigger, plus it seems to have lost its attenuation somehow.
Is that the correct result of using an energy-conserving BRDF?

Also, by using this factor, the "Specular Intensity" material property becomes unnecessary, I presume?

Hi,

Your normalization looks good, but right now you have 100% specular light.
If you mix in a little more diffuse, it should be fine. :)

You see, normalized Blinn-Phong is:
fr = Kd * saturate(dot(N,L)) / PI + Ks * (n+8)/(8*PI) * pow(saturate(dot(N,H)), n)

Usually you want Kd + Ks <= 1, since both integrals over the hemisphere yield (approximately) 1:
Diffuse: [Formula] \int_\Omega \frac{(N \cdot L)}{\pi} \, \mathrm{d}\omega = 1 [/Formula]
Specular: [Formula] \int_\Omega \frac{n+8}{8\pi} \, (N \cdot H)^n \, \mathrm{d}\omega \approx 1 [/Formula]
If Kd + Ks > 1, then your BRDF returns more radiance than it received irradiance.
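
Put together in HLSL, the whole thing might look roughly like this (just a sketch; how Kd, Ks, n and the vectors get filled in is up to your shader):

static const float PI = 3.14159265f;

// Sketch of a normalized Blinn-Phong BRDF, multiplied by the
// cosine term as you'd typically do in a light shader.
float3 NormalizedBlinnPhong(float3 N, float3 L, float3 H,
                            float3 Kd, float3 Ks, float n)
{
    // Diffuse term, normalized by 1/pi
    float3 diffuse = Kd / PI;
    // Specular term, normalized by (n+8)/(8*pi)
    float specular = (n + 8) / (8 * PI) * pow(saturate(dot(N, H)), n);
    // dot(N,L) belongs to the rendering equation rather than the BRDF,
    // but it's usually folded in right here
    return (diffuse + Ks * specular) * saturate(dot(N, L));
}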

PS: Perhaps you'd like to check out Ashikhmin-Shirley and Cook-Torrance. :D

OK, I'm already dividing my diffuse light by PI before adding it to the specular.
The screens above were just the specular term. The complete result looks something like this (with energy conservation):
http://cl.ly/2q0A1s3w1I2n1S3X3133

How would I go about making sure that the sum does not go over 1? Do I need to saturate the result, or is that already done by the normalization?


[quote]OK, I'm already dividing my diffuse light by PI before adding it to the specular.[/quote]

Yeah, I figured that. :)
(That was just for completeness, for the curious reader. :D )


[quote]How would I go about making sure that the sum does not go over 1? Do I need to saturate the result, or is that already done by the normalization?[/quote]

Kd and Ks are both material parameters and control how much diffuse and specular to add:
fr = Kd * diffuse + Ks * specular
The diffuse and specular terms are normalized independently to 1. If you make sure that Kd + Ks <= 1, then everything is fine, since fr <= 1 holds as well.
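
If you'd rather enforce that in code than rely on careful authoring, one simple (assumed) option is to rescale both coefficients whenever their sum exceeds 1:

// Sketch: keep the sum of the (scalar) coefficients at or below 1.
float sum = Kd + Ks;
if (sum > 1.0f)
{
    Kd /= sum;
    Ks /= sum;
}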

So Kd and Ks would actually be the diffuse and specular color?
Or is that just a float you'd insert there to control the ratio from the application?

Actually, it's both. For one thing, it's a ratio that scales the terms so that the sum is smaller than one (often the diffuse and specular maps summed up are bigger than one).
They can also be colors, if you think of it as a weighted diffuse map / weighted specular map, but then it gets a little tricky.

Assume you have white (incoming) light (1,1,1) and your wall is perfectly diffuse red. To maintain the energy, your wall must actually reflect (3,0,0), not (1,0,0).
So, what you basically do is:
[Formula] \bar{\rho} = \frac{\rho_r + \rho_g + \rho_b}{3} \qquad \Delta\phi'_{rgb} = \frac{\rho_{rgb}}{\bar{\rho}} \, \Delta\phi_{rgb} [/Formula]
Sample:
[Formula] \rho_{rgb}=(1,0,0) ~~~ \Delta\phi_{rgb}=(1,1,1) ~~~ \bar{\rho}=1/3 ~~~ \Delta\phi'_{rgb}=(3,0,0) [/Formula]
In practice this gives you much more colorful light (after tonemapping). Sometimes it's too colorful. :) So, at times people just lerp the corrected color with the non-corrected one to lessen the effect.
It's up to you whether you do this correction.
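
That lerp is a one-liner in HLSL (correctionAmount is a made-up tweak parameter in [0,1], not anything standard):

// Blend between uncorrected and energy-corrected diffuse (sketch).
float3 corrected = diffuse * 3.0f / (Kd.r + Kd.g + Kd.b);
diffuse = lerp(diffuse, corrected, correctionAmount);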

Hmm, that math is a little confusing to me. :P
How would what you are doing there translate to HLSL code?

So I understand from your example that it has to reflect (3,0,0), but what is it you're actually computing there?
Update: But wait, wouldn't the result of combining (1,1,1) and (1,0,0) be (2,1,1) instead of (3,0,0)?
That first line is basically getting the average of rho (the color of the pixel/material?), right?
So dividing that color by its average and multiplying it by the light color is the solution? But the result would still be (3,0,0), or not? Wasn't the idea to keep it between 0 and 1?

Okay, let's stay with that wall sample.

Let's say we have white light coming in: (1,1,1). We can think of it as three photons (1x red, 1x green, 1x blue), so the flux (energy) coming in is actually 3. This means our output had better be 3 as well. :)

If there is a red diffuse wall and it says that all the incoming energy is turned red, then the wall's output ratio is (1,0,0). If you had a yellow wall, it would be (0.5, 0.5, 0), see? The components of the ratio sum up to 1. Multiplied with dot(N,L)/PI we still stay < 1.

With the little formula above, I took the incoming light ([Formula]\Delta\phi_{rgb}[/Formula]) and distributed it according to the output ratio. Since, so to say, three photons came in (1x red, 1x green, 1x blue), we threw three photons out (3x red, to be more precise).
Let me see if I can dig out some old code (well, I found CUDA code, so I can't guarantee HLSL syntax correctness :) ).

// Weighted diffuse map (diffuseRatio is a material parameter).
float3 Kd = tex2D(ColorSampler, input.TexCoord).rgb * diffuseRatio;

// Compute the diffuse color.
float3 diffuse = Kd * saturate(dot(N, L)) / PI;

// Preserve energy (optional): divide by the mean reflectance.
diffuse *= 3.0f / (Kd.r + Kd.g + Kd.b);

// Same for specular...
result = diffuse + specular;


I hope that clears this up a little. :)

Small sample:
Kd * Light / ((Kd.r + Kd.g + Kd.b) / 3)
= (1,0,0) * (1,1,1) / ((1+0+0) / 3)
= (1,0,0) * (1,1,1) * 3 = (3,0,0). Works.

In your example you say ((Kd.r + Kd.g + Kd.b) / 3), but in your code it's the other way around: 3 / (Kd.r + Kd.g + Kd.b).
Which one is right? :P

Anyway, thanks a lot for explaining it all :)


[quote]In your example you say ((Kd.r + Kd.g + Kd.b) / 3), but in your code it's the other way around: 3 / (Kd.r + Kd.g + Kd.b).
Which one is right? :P[/quote]

If I'm not mistaken, it's the same.
In the code I multiply by 3 / (Kd.r + Kd.g + Kd.b).
In the equation I divide by (Kd.r + Kd.g + Kd.b) / 3, which is the same as multiplying by the reciprocal (as I've done in the code).

Sorry for writing it so confusingly in the first place. :)

Well, this strange Kd-renormalization business is something to reconsider; someone should tell you.

Maybe I'm missing something, but for me it doesn't work out.
Let's say you have (1,1,1), white. Then the renormalization factor is 1, end result (1,1,1). OK.
Let's say you have (.1,.1,.1), a dark gray. Then the factor is 10, end result (1,1,1) again. So the dark gray turned into white. Probably not what you wanted.

It is also implausible from a physical point of view.
"Three photons come in, three have to go out"? Why? It's perfectly valid for a surface to absorb photons at certain energies; that's why most colored things are colored.
(Let's stick with the photon picture, even though it is maybe not ideal in this case.)
With your logic you are converting two "photons" of a certain energy into photons of another (quite different) energy just so that three come out in the end.
If this effect is strong enough to significantly change the color (energy) of photons, it is called fluorescence (or, with a time delay, phosphorescence).
This is not something that happens for normal materials to an extent that would be relevant for image generation.


To answer the OP's original question:
The problem is most likely that you are not tone-mapping your image correctly, and everything above 1 is simply clamped.
This makes the highlight appear sharper, because part of the soft fall-off is not visible.
Highly glossy normalized BRDFs without a proper HDR pipeline are problematic in this regard.
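
For illustration, even the simplest Reinhard-style curve keeps that soft fall-off visible instead of clipping it; a minimal sketch (exposure is assumed to come from your pipeline):

// Sketch: map HDR values smoothly into [0,1) instead of hard-clamping.
float3 TonemapReinhard(float3 hdrColor, float exposure)
{
    float3 c = hdrColor * exposure; // scale into a sensible range first
    return c / (1.0f + c);          // [0,inf) -> [0,1), fall-off preserved
}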


[quote]To answer the OP's original question:
The problem is most likely that you are not tone-mapping your image correctly, and everything above 1 is simply clamped.
This makes the highlight appear sharper, because part of the soft fall-off is not visible.
Highly glossy normalized BRDFs without a proper HDR pipeline are problematic in this regard.[/quote]


I see, that would make sense, I guess.
Thanks for clearing that up.
So, put that way, should I even be using normalized BRDFs without HDR lighting?

You can, but it's janky, especially if you're (ugh) splitting specular off into a different shader. Really, there's no reason to use LDR lighting anyway in this day and age; there are scads of benefits, and most of the performance concerns are simply no longer relevant.

EDIT: And the hardware-capability stuff, too. Blending was a problem on old hardware, but it's pretty ubiquitous now.

Well, the reason is that I don't quite get (yet) how to implement it :P (the part where you calculate luminance).
It's definitely on my list though ;)

Render to an R11G11B10F backbuffer; slam, bang, done. Fancy bits like eye adaptation, bloom and tonemapping are really just icing on the cake. If you want to go really deep down the HDR rabbit hole, you can get into the mechanics of radiance, irradiance and flux and start using real units for lighting data, though that's complex enough to give even seasoned professionals the willies. On the upside, it's guaranteed to look realistic. :)

EDIT: Tri-Ace has actually 'shipped' a tech demo doing exactly this. In my not-quite-professional opinion, it's spiffy as hell.

Well, I'm already rendering my light accumulation (light pre-pass) into an RGBA64 buffer. I tried an FP16 format, but I'm getting some horrible black artifacts when doing that. (Using XNA 4.0, btw.)

[quote]I tried an FP16 format, but I'm getting some horrible black artifacts when doing that.[/quote]
That usually means that you're generating NaNs in your lighting/shading code.
Regular integer buffers will convert these to 0, but FP buffers will keep them as NaN. If you do any post-processing, these NaNs will spread.
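
The usual culprits look something like this (a sketch of the common cases, not your actual code):

// pow() with a negative base yields NaN on most hardware,
// so clamp the dot product first.
float spec = pow(saturate(dot(N, H)), specularPower);

// normalize() of a zero-length vector yields NaN; guard the length
// (v here stands for whatever direction vector you're normalizing).
float len = length(v);
float3 dir = (len > 1e-6f) ? v / len : float3(0.0f, 0.0f, 1.0f);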


[quote]Well, this strange Kd-renormalization business is something to reconsider.[/quote]

You're absolutely right; sorry, I confused things a little. The stuff I wrote before only applies to Monte Carlo based approaches. If you always reflect, you just multiply by the diffuse reflection coefficient Kd (that's all you have to do in your case).

If you reflect based on Russian roulette (as done in photon mapping, path tracing, etc.), you only reflect with a certain probability, e.g. the mean reflection coefficient (Kd.r + Kd.g + Kd.b) / 3. In this case you have to divide the outgoing flux (in photon mapping) / radiance (in path tracing) by that probability (as usual with Monte Carlo integration).

Sorry I mixed that up. :wacko:

One final small example:
Consider a photon coming in with flux (1,1,1). The diffuse reflection coefficients are (1,0,0) (a red wall). The probability of diffuse reflection is (1+0+0)/3 = 1/3. If you always reflect, you'd emit three photons with flux (1,0,0) each. If only every third photon is chosen for diffuse reflection, its flux is divided by the probability, i.e. it becomes three times brighter: (3,0,0). The code I copied in my previous post came from a photon mapper.
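
As a sketch of the Russian-roulette version (HLSL-style pseudocode; rand01() stands in for whatever uniform RNG your photon mapper uses):

// Russian-roulette diffuse reflection (sketch).
float pDiffuse = (Kd.r + Kd.g + Kd.b) / 3.0f;
if (rand01() < pDiffuse) // rand01() is a hypothetical RNG in [0,1)
{
    // The photon survives: divide its flux by the survival
    // probability so the estimator stays unbiased.
    photonFlux *= Kd / pDiffuse;
    // ...then scatter it in a new cosine-weighted direction...
}
else
{
    // The photon is absorbed; terminate the path.
}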


[quote]
[quote name='lipsryme' timestamp='1332464351' post='4924480']I tried an FP16 format, but I'm getting some horrible black artifacts when doing that.[/quote]
That usually means that you're generating NaNs in your lighting/shading code.
Regular integer buffers will convert these to 0, but FP buffers will keep them as NaN. If you do any post-processing, these NaNs will spread.
[/quote]

This. If you're willing, we can look over your shader code and see if we can figure out where this is happening.

Update: Ah, stupid me, I forgot to saturate the N dot L. :)
Doing that fixed those artifacts for me.

Not sure if it's a problem with XNA, but using the HalfVector4 format it says "Doesn't support alpha blending or color write channel".
Using something called "HdrBlendable" it works, but like I said, those artifacts appear.

The light shader looks like this:

//Vertex Shader
PSI_Directional Directional_VS(VSI_Directional input)
{
    // Initialize output
    PSI_Directional output = (PSI_Directional)0;
    // Pass position straight through
    output.Position = float4(input.Position.xyz, 1);
    // Output view-space position for the view ray
    float4 viewPosition = mul(float4(input.Position.xy, 1, 1), inverseProjection);
    output.viewRay = viewPosition.xyz;
    // Pass UV too
    output.UV = input.UV + GBufferTextureSize;
    return output;
}

PSO_Lighting BlinnPhong_DirectionalLight(float3 Position, float3 L, float3 N, float2 UV)
{
    PSO_Lighting output = (PSO_Lighting)0;

    // Transform light direction to view space
    L = normalize(mul(normalize(L), View));
    // Calculate N.L (saturated, per the update above, to avoid the
    // negative values that caused the NaN artifacts)
    float NL = saturate(dot(N, -L));
    // Calculate diffuse
    float3 Diffuse = LightColor.xyz * LightIntensity;
    Diffuse = ToLinear(Diffuse);
    // Retrieve specular power (glossiness)
    float glossiness = exp(tex2D(SpecularBuffer, UV).a * 20) / 10.5f;
    // Normalized view direction
    float3 V = normalize(normalize(mul(CameraPosition, View)) - normalize(Position));
    // Calculate half-vector
    float3 H = normalize(V - L);

    // Calculate normalization factor for the energy-conserving BRDF
    float NormalizationFactor = (glossiness + 8) / (8 * PI);
    // Normalized specular term
    float NH = NormalizationFactor * pow(saturate(dot(N, H)), glossiness);

    output.Lighting = float4(NL * Diffuse.r,
                             NL * Diffuse.g,
                             NL * Diffuse.b,
                             NL * NH);

    return output;
}

//Pixel Shader
PSO_Lighting Directional_PS(PSI_Directional input)
{
    PSO_Lighting output = (PSO_Lighting)0;
    if (isLighting)
    {
        // Fetch normal data
        float4 NormalData = tex2D(NormalBuffer, input.UV);
        float3 normal = normalize(decode(NormalData));
        // Get depth and calculate the view-space position by
        // scaling the view ray with it
        float3 viewRay = normalize(input.viewRay);
        float Depth = tex2D(DepthBuffer, input.UV).r;
        float3 PositionVS = Depth * viewRay;

        // Calculate Blinn-Phong lighting
        output = BlinnPhong_DirectionalLight(PositionVS, LightDir, normal, input.UV);
    }
    else
    {
        output.Lighting = 0.0f;
    }

    return output;
}


Btw, I changed from rendering specular to a separate buffer to just rendering it in the alpha channel and then, later in the second geometry pass, multiplying the color from my specular map with the N.L * N.H term. Is there any difference, or anything wrong with this?

Also, does it make sense to use tone mapping without having any kind of luminance or exposure control?

Yes, definitely. You can set the exposure value manually if you want; many games do just this! (CoD: BlOps and possibly MW3 in particular.)
EDIT: I'm pretty sure Source/Half-Life 2 lets you goof with exposure as well, though they do have a wonky fake-exposure trick due to their clever-as-hell in-shader tonemapping operation.

Also, HdrBlendable is (I think) the FP10 format. It should render slightly faster than traditional FP16 since you write half as much (assuming you're fill-bound, which isn't all that uncommon in high-overdraw scenarios with cheap shaders, much like a light pre-pass renderer). You can also experiment with fixed-point encoding schemes if you're willing to give up multipass rendering, though I don't think that's actually an option for you.


[quote]Btw, I changed from rendering specular to a separate buffer to just rendering it in the alpha channel and then, later in the second geometry pass, multiplying the color from my specular map with the N.L * N.H term. Is there any difference, or anything wrong with this?[/quote]


If you do this, you'll only have monochrome specular highlights instead of highlights matching the color of the light. You can try to cheat by multiplying the specular intensity with the color of the diffuse lighting at that pixel, but that still won't be correct.


[quote]Also, does it make sense to use tone mapping without having any kind of luminance or exposure control?[/quote]


It can still make sense to do it, since tone mapping will give you a nice mapping of really bright values to brighter colors. If you don't tone map, you'll just clamp at 1, which doesn't produce great results. However, you'll effectively be stuck at one exposure value, which won't allow you to adapt to drastically different lighting conditions. To do that you would either need to set exposure manually (like InvalidPointer suggests), or implement some sort of auto-exposure routine.
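
For reference, the exposure step itself really is just one multiply before the tone map; a sketch in the Reinhard spirit (avgLuminance would come from an auto-exposure pass or be a hand-set constant, and keyValue is the artist-chosen "middle grey" target, e.g. 0.18f):

// Sketch: exposure control before tonemapping.
float3 ApplyExposure(float3 hdrColor, float avgLuminance, float keyValue)
{
    float exposure = keyValue / max(avgLuminance, 1e-4f);
    return hdrColor * exposure;
}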


[quote]Also, HdrBlendable is (I think) the FP10 format. It should render slightly faster than traditional FP16 since you write half as much (assuming you're fill-bound, which isn't all that uncommon in high-overdraw scenarios with cheap shaders, much like a light pre-pass renderer).[/quote]


It's FP10 on the 360, but FP16 on the PC. However, XNA won't let you use filtering with that format even if your GPU supports it, since it enforces compatibility with the 360. This is also why it won't let you blend HalfVector4.

But in a light pre-pass renderer I have to render all geometry a second time anyway, so I'd have access to my material IDs and everything again there.
So what am I missing when I just render out the (N.L * N.H) term and, before putting it together with the lighting, multiply it by the specular color defined by the specular map?
