# Behavior of energy conserving BRDF


### #1 lipsryme  Members   -  Reputation: 628

Posted 21 March 2012 - 05:09 PM

I was implementing the normalization factor in my Blinn-Phong BRDF, but I'm skeptical of its results.
Using the plain BRDF without the normalization factor, the result on a flat cube surface looks like this:
http://cl.ly/0j1z1w1W451i2Z0B0x2K

Now using the NormalizationFactor it becomes this:
http://cl.ly/0J2A1Q2I1G1y2J3B2d3a

Using this Code:
```
// Calculate the normalization factor for the energy-conserving BRDF
float NormalizationFactor = (material.SpecularPower + 8) / (8 * PI);

// Normalized specular term (N dot H raised to the specular power, scaled)
float NH = NormalizationFactor * pow(saturate(dot(N, H)), material.SpecularPower);
```

So my specular highlight has gotten quite a lot bigger, plus it seems to have lost its attenuation somehow.
Is that the correct result of using an energy-conserving BRDF?

Also, by using this factor, the term "Specular Intensity" as a material property becomes unnecessary, I presume?

Student at the Games-Academy Frankfurt, Germany.

### #2 Tsus  Members   -  Reputation: 790

Posted 21 March 2012 - 07:02 PM

Hi,

Your normalization looks good, but you have 100% specular light right now.
If you mix in a little more diffuse, it should be fine.

You see, normalized Blinn-Phong is:
`fr = Kd * saturate(dot(N,L))/pi + Ks * (n+8)/(8Pi) * pow(saturate(dot(N,H)), n)`
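In HLSL, a minimal sketch of evaluating this formula might look like the following (the function name and the PI constant are illustrative, not from the posts above):

```
// Normalized Blinn-Phong, following the formula above.
// Kd/Ks: diffuse/specular reflectances, n: specular power.
static const float PI = 3.14159265f;

float3 NormalizedBlinnPhong(float3 Kd, float3 Ks, float n,
                            float3 N, float3 L, float3 V)
{
    float3 H = normalize(L + V);
    float3 diffuse  = Kd * saturate(dot(N, L)) / PI;
    float3 specular = Ks * (n + 8.0f) / (8.0f * PI)
                         * pow(saturate(dot(N, H)), n);
    return diffuse + specular;
}
```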

Usually you want Kd + Ks <= 1, since both integrals over the hemisphere yield 1: the 1/Pi factor makes the cosine-weighted diffuse integral come out to exactly 1, and the (n+8)/(8*Pi) factor (approximately) normalizes the specular lobe in the same way.

PS: Perhaps you'd like to check out Ashikhmin-Shirley and Cook-Torrance.

Acagamics e.V. – IGDA Student Game Development Club (University of Magdeburg, Germany)

### #3 lipsryme  Members   -  Reputation: 628

Posted 21 March 2012 - 07:18 PM

Ok, I'm already dividing my diffuse light by PI before adding it to the specular.
The screens above were just the specular term. The complete product looks something like this (with energy conservation):
http://cl.ly/2q0A1s3w1I2n1S3X3133

How would I go about making sure that the sum does not go over 1? Do I need to saturate the result, or is that already handled by the normalization?


### #4 Tsus  Members   -  Reputation: 790

Posted 21 March 2012 - 07:38 PM

> Ok I'm already dividing my Diffuse Light by PI before adding it to the specular.

Yeah, I figured that.
(It was just there for completeness, for the curious reader.)

> How would I go about making sure that the sum does not go over 1? Do I need to saturate the result or is that already done by the normalization stuff?

Kd and Ks are both material parameters and control how much diffuse and specular to add.
fr = Kd * diffuse + Ks * specular.
Diffuse and specular are normalized independently to 1. If you make sure that Kd + Ks <= 1 then everything is fine, since fr<=1 holds as well.
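To make the Kd + Ks <= 1 condition concrete, here is a hypothetical helper that rescales both coefficients when their sum exceeds one in any channel (the function name is illustrative):

```
// Rescale Kd and Ks so that no color channel of their sum exceeds 1.
void EnforceEnergyBudget(inout float3 Kd, inout float3 Ks)
{
    float3 sum = Kd + Ks;
    float maxSum = max(sum.r, max(sum.g, sum.b));
    if (maxSum > 1.0f)
    {
        Kd /= maxSum;
        Ks /= maxSum;
    }
}
```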


### #5 lipsryme  Members   -  Reputation: 628

Posted 21 March 2012 - 07:49 PM

So would Kd and Ks actually be the diffuse and specular colors?
Or are they just floats you'd set from the application to control the ratio?


### #6 Tsus  Members   -  Reputation: 790

Posted 21 March 2012 - 08:22 PM

Actually it’s both. For one thing it’s a ratio that scales the terms so that the sum is smaller than one (often the diffuse and specular maps summed up are bigger than one).
They can also be colors, if you think of it as a weighted diffuse map / weighted specular map, but then it gets a little tricky.

Assume you have a white (incoming) light (1,1,1) and your wall is perfectly diffuse red. To maintain the energy, your wall must actually reflect (3,0,0), not (1,0,0).
So, what you basically do is divide the reflected color by the average of its components, i.e. multiply by 3 / (Kd.r + Kd.g + Kd.b).

In practice this gives you much more colorful light (after tonemapping). Sometimes it’s too colorful. So, at times people just lerp the corrected color with the non-corrected to lessen the effect.
It's up to you whether you do this correction.
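As a sketch of that lerp (blendFactor and the variable names are illustrative):

```
// Blend the energy-preserving correction with the plain color.
// blendFactor in [0,1]: 0 = no correction, 1 = full correction.
float3 plain     = Kd * lightColor;
float3 corrected = plain * 3.0f / (Kd.r + Kd.g + Kd.b);
float3 result    = lerp(plain, corrected, blendFactor);
```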


### #7 lipsryme  Members   -  Reputation: 628

Posted 21 March 2012 - 08:44 PM

Hmm, that math is a little confusing to me.
How would what you're doing there translate to HLSL code?

So I understand from your example that it has to reflect (3, 0, 0), but what is it you're actually computing there?
Update: But wait, wouldn't the result for (1, 1, 1) and (1, 0, 0) be (2, 1, 1) instead of (3, 0, 0)?
That first line is basically taking the average of p (the color of the pixel/material?), right?
So dividing that color by its average and multiplying it by the light color is the solution? But the result would still be (3, 0, 0), or not? Wasn't the idea to keep it between 0 and 1?


### #8 Tsus  Members   -  Reputation: 790

Posted 21 March 2012 - 09:21 PM

Okay, let’s stay with that wall sample.

Let’s say we have white light coming in: (1,1,1). So actually we can think of it as three photons (1x red, 1x green, 1x blue). The flux (energy) coming in here is 3. This means our output had better be three as well.

If there is a red diffuse wall, and it says that all the incoming energy is turned red, then the wall’s output ratio is (1,0,0). If you had a yellow wall, it would be (0.5, 0.5, 0), see? The components of the ratio sum up to 1. Multiplied with dot(N,L)/Pi we still stay < 1.

With the little formula above, I took the incoming light and distributed it according to the output ratio. Since, so to speak, three photons came in (1x red, 1x green, 1x blue), we throw three photons out (3x red, to be more precise).
Let me see if I can dig out some old code (well, I found CUDA code, so I don’t guarantee HLSL syntax correctness).
```
// Weighted diffuse map. (diffuseRatio is a material parameter.)
float3 Kd = texture2D(ColorSampler, input.TexCoord).rgb * diffuseRatio;

// Compute diffuse color.
float3 diffuse = Kd * saturate(dot(N, L)) / PI;

// Preserve energy (optional).
diffuse *= 3.0f / (Kd.r + Kd.g + Kd.b);

// Same for specular...
result = diffuse + specular;
```

I hope that clears this up a little.

Small sample:
Kd * Light / ((Kd.r+Kd.g+Kd.b) / 3)
= (1,0,0) * (1,1,1) / ( (1+0+0) / 3 )
= (1,0,0) * (1,1,1) * 3 = (3,0,0). Works.


### #9 lipsryme  Members   -  Reputation: 628

Posted 22 March 2012 - 11:40 AM

In your example you say ((Kd.r + Kd.g + Kd.b) / 3), but in your code it's the other way around (3 / (Kd.r + Kd.g + Kd.b)).
Which one is right?

Anyway, thanks a lot for explaining it all.


### #10 Tsus  Members   -  Reputation: 790

Posted 22 March 2012 - 12:17 PM

> In your example you say ((Kd.r + Kd.g + Kd.b) / 3) but in your code it's the other way around (3 / (Kd.r + Kd.g + Kd.b)).
> Which one is right?

If I’m not mistaken, it’s the same.
In the code I multiply with 3 / (Kd.r + Kd.g + Kd.b).
In the equation I divide by (Kd.r + Kd.g + Kd.b) / 3, which is the same as multiplying by the reciprocal (as I've done in the code).

Sorry for writing it so confusingly in the first place.


### #11 macnihilist  Members   -  Reputation: 359

Posted 22 March 2012 - 12:30 PM

Well, someone should tell you: this strange Kd-renormalization business is something to reconsider.

Maybe I'm missing something, but for me it doesn't work out.
Let's say you have (1,1,1), and let's say it's white. Then the renormalization factor is 1, end result (1,1,1). Ok.
Let's say you have (.1,.1,.1), a dark gray. Then the factor is 10, end result (1,1,1) again. So the dark gray turned into white. Probably not what you wanted.

It is also implausible from a physical point of view.
"Three photons come in, three have to go out"? Why? It's perfectly valid for a surface to absorb photons at certain energies; that's why most colored things are colored.
(Let's stick to the photon picture, although it is maybe not ideal in this case.)
With that logic you are converting 2 "photons" of a certain energy into photons of another (quite different) energy just so that three come out in the end.
If this effect is strong enough to significantly change the color (energy) of photons, it is called fluorescence (or, with time delay, phosphorescence).
This is not something that happens in normal materials to an extent that would be relevant for image generation.

To answer the OP's original question:
The problem is most likely that you are not tone-mapping your image correctly and everything above 1 is simply clamped.
This lets the highlight appear sharper, because part of the soft fall-off is not visible.
Highly glossy normalized BRDFs without a proper HDR pipeline are problematic in this regard.
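A minimal sketch of compressing instead of clamping, using a simple Reinhard-style operator (the function name is illustrative):

```
// Map HDR values into [0,1) smoothly instead of hard-clamping at 1,
// so the soft fall-off of bright highlights stays visible.
float3 ReinhardToneMap(float3 hdrColor)
{
    return hdrColor / (1.0f + hdrColor);
}
```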

### #12 lipsryme  Members   -  Reputation: 628

Posted 22 March 2012 - 12:36 PM

> To answer the OP's original question:
> The problem is most likely that you are not tone-mapping your image correctly and everything above 1 is simply clamped.
> This lets the highlight appear sharper, because part of the soft fall-off is not visible.
> Highly glossy normalized BRDFs without a proper HDR-pipeline are problematic in this regard.

I see, that would make sense, I guess.
Thanks for clearing that up.
So, in that case, should I even be using normalized BRDFs without HDR lighting?


### #13 InvalidPointer  Members   -  Reputation: 923

Posted 22 March 2012 - 01:13 PM

You can, but it's janky, especially if you're (ugh) splitting specular off into a different shader. Really, there's no reason why you should use LDR lighting anyways in this day and age-- there are scads of benefits and most of the performance concerns are simply no longer relevant.

EDIT: And the hardware capability stuff, too. Blending was a problem on old hardware, but it's pretty ubiquitous now.
clb: At the end of 2012, the positions of jupiter, saturn, mercury, and deimos are aligned so as to cause a denormalized flush-to-zero bug when computing earth's gravitational force, slinging it to the sun.

### #14 lipsryme  Members   -  Reputation: 628

Posted 22 March 2012 - 01:19 PM

Well, the reason is that I don't quite get (yet) how to implement it (the part where you calculate luminance).
It's definitely on my list though ;)


### #15 InvalidPointer  Members   -  Reputation: 923

Posted 22 March 2012 - 06:50 PM

Render to an R11G11B10F backbuffer: slam bang, done. Fancy bits like eye adaptation, bloom, and tonemapping are really just icing on the cake. If you want to go really deep down the HDR rabbit hole, you can get into the mechanics of radiance, irradiance, and flux and start using real units for lighting data, though that's complex enough to give even seasoned professionals the willies. On the upside, it's guaranteed to look realistic.

EDIT: Tri-Ace has actually 'shipped' a tech demo doing exactly this. In my not-quite professional opinion, it's spiffy as hell.

### #16 lipsryme  Members   -  Reputation: 628

Posted 22 March 2012 - 06:59 PM

Well, I'm already rendering my light accumulation (light pre-pass) into an RGBA64 buffer. I tried an FP16 format, but I'm getting some horrible black artifacts when doing that. (Using XNA 4.0, btw.)


### #17 Hodgman  Moderators   -  Reputation: 13599

Posted 22 March 2012 - 08:38 PM

> I tried an FP16 format but I'm getting some horrible black artifacts when doing that.

That sounds like NaNs in your shader output: regular integer buffers will convert these to 0, but FP buffers will keep them as NaN. If you do any post-processing, these NaNs will spread.
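As a defensive sketch (assuming Shader Model 4+, where the isnan intrinsic exists; note that fast-math compilation settings can optimize such checks away):

```
// Replace NaN components with 0 and clamp negatives before writing
// to a floating-point render target, so post-processing can't spread them.
float3 SanitizeColor(float3 c)
{
    c.x = isnan(c.x) ? 0.0f : c.x;
    c.y = isnan(c.y) ? 0.0f : c.y;
    c.z = isnan(c.z) ? 0.0f : c.z;
    return max(c, 0.0f);
}
```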

### #18 Tsus  Members   -  Reputation: 790

Posted 23 March 2012 - 06:31 AM

> Well, this strange Kd-renormalization business is something to reconsider.

You’re absolutely right. Sorry, I confused things a little. The stuff I wrote before only applies to Monte Carlo based approaches. If you always reflect, you just multiply with the diffuse reflection coefficient Kd (that’s all you have to do in your case).

If you reflect based on Russian roulette (as done in photon mapping, path tracing, etc.), you only reflect with a certain probability, e.g. the mean reflection coefficient (Kd.r+Kd.g+Kd.b)/3. In this case you have to divide the outgoing flux (in photon mapping) / radiance (in path tracing) by that probability (as usual with Monte Carlo integration).

Sorry I mixed that up.

One final small example:
Consider a photon coming in with flux (1,1,1). The diffuse reflection coefficients are (1,0,0) (a red wall). The probability for diffuse reflection is (1+0+0)/3 = 1/3. If you always reflect, you’d emit three photons with (1,0,0). If only every third photon is chosen for diffuse reflection, its flux is divided by the probability, i.e. it becomes three times brighter: (3,0,0). The code I pasted in my previous post came from a photon mapper.
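The example above could be sketched like this (rnd() is an assumed uniform [0,1) random source, not a real HLSL intrinsic):

```
// Russian-roulette diffuse reflection, photon-mapping style.
float3 Kd   = float3(1, 0, 0);               // red wall
float  pd   = (Kd.r + Kd.g + Kd.b) / 3.0f;   // survival probability = 1/3
float3 flux = float3(1, 1, 1);               // incoming photon flux

if (rnd() < pd)
{
    // Photon survives: weight by Kd and divide by the probability.
    // The surviving photon carries flux (3, 0, 0).
    flux = flux * Kd / pd;
}
else
{
    flux = float3(0, 0, 0);                  // photon absorbed
}
```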


### #19 InvalidPointer  Members   -  Reputation: 923

Posted 23 March 2012 - 08:37 AM

> I tried an FP16 format but I'm getting some horrible black artifacts when doing that.
>
> Regular integer buffers will convert these to 0, but FP buffers will keep them as NaN. If you do any post-processing, these NaN's will spread.

This. If you're willing, we can look over your shader code and see if we can figure out where this is happening.

### #20 lipsryme  Members   -  Reputation: 628

Posted 23 March 2012 - 10:06 AM

Update: Ah, stupid me, I forgot to saturate the N dot L.
Doing that fixed those artifacts for me.

Not sure if it's a problem with XNA, but using the HalfVector4 format it says "Doesn't support alpha blending or color write channel".
Using something called "HdrBlendable" it works, but like I said, those artifacts appeared.

```
//Vertex Shader
PSI_Directional Directional_VS(VSI_Directional input)
{
    //Initialize Output
    PSI_Directional output = (PSI_Directional)0;

    //Just Straight Pass Position
    output.Position = float4(input.Position.xyz, 1);

    // output viewPosition for viewRay
    float4 viewPosition = mul(float4(input.Position.xy, 1, 1), inverseProjection);
    output.viewRay = viewPosition.xyz;

    //Pass UV too
    output.UV = input.UV + GBufferTextureSize;

    return output;
}

PSO_Lighting BlinnPhong_DirectionalLight(float3 Position, float3 L, float3 N, float2 UV)
{
    PSO_Lighting output = (PSO_Lighting)0;

    // Transform LightDirection to View Space
    L = normalize(mul(normalize(L), View));

    // Calculate N.L (saturated, to avoid negative values leaking into the buffer)
    float NL = saturate(dot(N, -L));

    // Calculate Diffuse
    float3 Diffuse = LightColor.xyz * LightIntensity;
    Diffuse = ToLinear(Diffuse);

    // Retrieve Specular Power (glossiness)
    float glossiness = exp(tex2D(SpecularBuffer, UV).a * 20) / 10.5f;

    // Normalized View Direction
    float3 V = normalize(normalize(mul(CameraPosition, View)) - normalize(Position));

    // Calculate Half-Vector
    float3 H = normalize(V - L);

    // Calculate Normalization Factor for Energy Conserving BRDF
    float NormalizationFactor = (glossiness + 8) / (8 * PI);

    // Calculate the normalized specular term
    float NH = NormalizationFactor * pow(saturate(dot(N, H)), glossiness);

    output.Lighting = float4(NL * Diffuse.r,
                             NL * Diffuse.g,
                             NL * Diffuse.b,
                             NL * NH);

    return output;
}

PSO_Lighting Directional_PS(PSI_Directional input)
{
    PSO_Lighting output = (PSO_Lighting)0;

    if (isLighting)
    {
        float4 NormalData = tex2D(NormalBuffer, input.UV);
        float3 normal = normalize(decode(NormalData));

        // Get Depth and calculate View-Space Position by multiplying
        // the result with the viewRay Vector
        float3 viewRay = normalize(input.viewRay);
        float Depth = tex2D(DepthBuffer, input.UV).r;
        float3 PositionVS = Depth * viewRay;

        output = BlinnPhong_DirectionalLight(PositionVS, LightDir, normal, input.UV);
    }
    else
    {
        output.Lighting = 0.0f;
    }

    return output;
}
```

Btw, I changed from rendering specular to a separate buffer to just rendering it into the alpha channel, and then multiplying the color from my specular map with it later in the second geometry pass, together with the N.L * N.H term. Is there any difference, or anything wrong with this?

Also, does it make sense to use tone mapping without having any kind of luminance or exposure control?

