Confused about BRDF implementation

Started by
8 comments, last by InvalidPointer 11 years, 10 months ago
Noticed there is another recent BRDF thread but felt my post did not exactly belong there. So here goes...

I've been thinking of implementing some BRDFs in GLSL these days, to simulate the look of certain materials using a variety of well-known BRDFs. I took my time reading about the subject to get the basics, and while I definitely have a much better understanding of it now, I still feel like I'm standing on shaky ground. So some help/clarification would be nice.

This is how I think of it: the BRDF describes the ratio of reflected light to incident light (I'll skip describing this in radiometric terms for simplicity), and one might generally use a combination of two BRDFs: a diffuse BRDF for the diffuse reflection and a specular BRDF for the specular reflection. So overall one might have this kind of reflection model written in the fragment/pixel shader (omitting the ambient term):

I[sub]r[/sub] = I[sub]i[/sub]*(brdf[sub]diff[/sub]*dot(n, l)+c[sub]spec[/sub]*brdf[sub]specular[/sub])

where I[sub]r[/sub] is the reflected light, I[sub]i[/sub] the light intensity, c[sub]spec[/sub] a material property describing the amount of specular reflection from the surface, and dot(n, l) the cosine of the angle between the surface normal and the light direction. I'll focus on the diffuse part here (but my confusion extends to the specular part as well).

So, as is commonly known, for a BRDF to be at least physically plausible it has to satisfy a few constraints, among them conservation of energy. From what I've understood, a normalization factor is often added to the BRDF to satisfy this constraint.
For example, a diffuse BRDF that does not conserve energy might just be brdf[sub]diff[/sub] = c[sub]d[/sub], where c[sub]d[/sub] is the material property describing the amount of diffuse reflection (albedo/surface color). A diffuse BRDF that does fulfill the constraint would be brdf[sub]diff[/sub] = c[sub]d[/sub]/pi.
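A quick numeric sanity check of that normalization (an illustrative Python sketch, not shader code; the function name is made up): Monte Carlo integration of brdf * cos(theta) over the hemisphere shows that a constant BRDF of c[sub]d[/sub] reflects roughly pi * c[sub]d[/sub] of the incoming energy, while c[sub]d[/sub]/pi reflects exactly c[sub]d[/sub].

```python
import math
import random

def hemisphere_reflectance(brdf_value, samples=100_000):
    """Monte Carlo estimate of the directional-hemispherical reflectance:
    the integral of brdf * cos(theta) over the upper hemisphere,
    using uniform solid-angle sampling (pdf = 1 / (2*pi))."""
    total = 0.0
    for _ in range(samples):
        cos_theta = random.random()  # uniform hemisphere sampling => cos(theta) ~ U[0, 1]
        total += brdf_value * cos_theta * (2.0 * math.pi)
    return total / samples

c_d = 0.8
random.seed(1)
print(hemisphere_reflectance(c_d))           # without 1/pi: ~ pi * c_d ≈ 2.51 (more light out than in!)
print(hemisphere_reflectance(c_d / math.pi)) # with 1/pi:    ~ c_d = 0.8 (energy conserved)
```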

Wanting to make my reflection model at least slightly more physically based, I simply divided my diffuse reflection by pi in the shader. Of course this gave me nothing but terrible results, because the resulting diffuse reflection (for all three color channels) was small enough that my object was quite dark (and sometimes invisible/black). The same problem obviously occurred when I tried the same for specular BRDFs (normalized Phong and Blinn-Phong): the specular highlights disappeared.
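For reference, the normalization factors usually quoted for these lobes can be tabulated quickly (illustrative Python; `phong_norm` and `blinn_phong_norm` are made-up names). Note that for typical exponents these factors are well above 1, so normalization actually makes a tight highlight brighter at its peak, not darker -- which again points at the light intensity being too low rather than the normalization being wrong:

```python
import math

def phong_norm(n):
    """Energy-conserving factor for the Phong specular lobe: (n + 2) / (2*pi)."""
    return (n + 2.0) / (2.0 * math.pi)

def blinn_phong_norm(n):
    """Common approximate factor for the Blinn-Phong lobe: (n + 8) / (8*pi)."""
    return (n + 8.0) / (8.0 * math.pi)

for n in (8, 32, 128):
    print(n, round(phong_norm(n), 3), round(blinn_phong_norm(n), 3))
```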

At first I thought the problem might just lie somewhere in my code, but after reading some more about diffuse BRDFs I saw sources showing that when you calculate the reflected light, accounting for light incoming from all directions (integrating over the whole hemisphere), with a diffuse BRDF that is a constant (the albedo), you end up with I[sub]r[/sub] = I[sub]i[/sub]*c[sub]diff[/sub]*pi. This could be problematic in some situations because we could end up reflecting more light than we received, thus breaking the constraint mentioned earlier.

So in the end, if my whole description above is not wrong (which it could be!), what I'm really confused about is whether I SHOULD divide my diffuse BRDF by pi (and thus my problem lies elsewhere), or whether I should assume that my diffuse reflection is somehow implicitly multiplied and divided by pi (the two cancelling out), leaving me with what I had at first; that would not only be easier to use but would also fulfill the constraint. Which is the correct approach here? And if it's the latter, does the same apply to specular BRDFs (assuming things are implicitly multiplied/divided) as well?

Sorry for the long post, but I felt I needed to write it down myself so I'm sure I understand what might(!) be going on. Hopefully I can get some help on this.
If your object looks too dark, perhaps your light source simply isn't bright enough?
That could of course be the problem. I thought the light intensity ranged between [0, 1] and so I simply set it to white light (i.e. I_i = (1.0, 1.0, 1.0)). A brighter light would mean I have to use higher values (which of course results in higher values in the final result), but does one usually exceed that range?

[quote name='Suen' timestamp='1338344125' post='4944521']
That could of course be the problem. I thought the light intensity ranged between [0, 1] and so I simply set it to white light (i.e. I_i = (1.0, 1.0, 1.0)). A brighter light would mean I have to use higher values (which of course results in higher values in the final result), but does one usually exceed that range?
[/quote]
Absolutely. In my Cornell Box tests I routinely use light values in the 50-70 range. Note this causes some aliasing issues at the edges of the lights, which you will want to fix later on.

“If I understand the standard right it is legal and safe to do this but the resulting value could be anything.”


[quote name='Suen' timestamp='1338344125' post='4944521']
That could of course be the problem. I thought the light intensity ranged between [0, 1] and so I simply set it to white light (i.e. I_i = (1.0, 1.0, 1.0)). A brighter light would mean I have to use higher values (which of course results in higher values in the final result), but does one usually exceed that range?

Absolutely. In my Cornell Box tests I routinely use light values in the 50-70 range. Note this causes some aliasing issues at the edges of the lights, which you will want to fix later on.
[/quote]
I didn't even know that was a normal solution to this. I thought values commonly had to be strictly clamped to [0, 1] and that the only time you went above that range was when you started playing around with HDR rendering. This led me to believe that I had to perform HDR rendering and implement some tone mapping function to avoid my mesh being completely dark (which I'm not sure would solve it?)

So just so I've got this right: I just have to multiply my diffuse and specular reflection by a light intensity with values high enough for my purpose? Wouldn't that defeat the purpose of dividing by pi to begin with (with regard to the diffuse part)?

[quote name='Bacterius' timestamp='1338349517' post='4944539']
[quote name='Suen' timestamp='1338344125' post='4944521']
That could of course be the problem. I thought the light intensity ranged between [0, 1] and so I simply set it to white light (i.e. I_i = (1.0, 1.0, 1.0)). A brighter light would mean I have to use higher values (which of course results in higher values in the final result), but does one usually exceed that range?

Absolutely. In my Cornell Box tests I routinely use light values in the 50-70 range. Note this causes some aliasing issues at the edges of the lights, which you will want to fix later on.
[/quote]
I didn't even know that was a normal solution to this. I thought values commonly had to be strictly clamped to [0, 1] and that the only time you went above that range was when you started playing around with HDR rendering. This led me to believe that I had to perform HDR rendering and implement some tone mapping function to avoid my mesh being completely dark (which I'm not sure would solve it?)

So just so I've got this right: I just have to multiply my diffuse and specular reflection by a light intensity with values high enough for my purpose? Wouldn't that defeat the purpose of dividing by pi to begin with (with regard to the diffuse part)?
[/quote]
Consider that rays that bounce off your diffuse surface are not guaranteed to hit a light - this should make it obvious that the light intensity is not solely dependent on the value you give it, but also on the light source's size. Clearly if your light source is small you will need to go beyond the [0, 1] range in order to get sufficient light power.
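That dependence on the light's size can be made concrete with a back-of-the-envelope calculation (illustrative Python with hypothetical numbers; the formula is the standard small-source approximation E ≈ L * cos(theta) * solid_angle, valid when the light is far away relative to its size):

```python
import math

def irradiance_from_disk_light(radiance, light_radius, distance, cos_theta=1.0):
    """Small-source approximation: E ≈ L * cos(theta) * solid_angle,
    where a distant disk subtends roughly pi * r^2 / d^2 steradians."""
    solid_angle = math.pi * light_radius ** 2 / distance ** 2
    return radiance * cos_theta * solid_angle

# A 0.1 m radius light 2 m away subtends only ~0.008 sr, so unit
# radiance delivers almost no energy to the surface...
print(irradiance_from_disk_light(1.0, 0.1, 2.0))
# ...while a radiance around 60 (as in the Cornell Box tests above)
# delivers a usable amount.
print(irradiance_from_disk_light(60.0, 0.1, 2.0))
```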

Dividing by pi is (as far as I can see) required if you wish to go down the path of physically based ray tracing, i.e. by using physical parameters for all your materials (IOR, spectral absorption, etc.), so that your results are coherent with the units you use. If you don't use any units and just set the lighting values by trial and error, the division by pi doesn't matter (but I would keep it anyway).


So there are two ways of looking at the "divide by pi" thing. The obvious way is to do the divide in the shader. The other way is to say "all of my light intensity values are already divided by pi", and then you don't need to do anything in the shader. You can actually partially justify some of the older ad-hoc lighting models this way, if you want. However, you then have to be careful with the rest of your BRDFs, since you'll need to multiply them by pi in order to get the same results. You could omit this "multiply by pi" for your specular BRDF if you'd like, but then your specular will be too dark relative to your diffuse by a factor of pi, and it won't look right.
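The equivalence of the two conventions is easy to verify numerically (illustrative Python sketch; function names and values are made up):

```python
import math

def shade_divide_in_shader(light, albedo, spec, n_dot_l):
    """Convention 1: the diffuse BRDF carries the 1/pi explicitly."""
    return light * (albedo / math.pi + spec) * n_dot_l

def shade_prescaled_light(light, albedo, spec, n_dot_l):
    """Convention 2: light values are treated as already divided by pi,
    so the specular term must be multiplied by pi to compensate."""
    prescaled = light / math.pi
    return prescaled * (albedo + spec * math.pi) * n_dot_l

a = shade_divide_in_shader(60.0, 0.8, 0.3, 0.7)
b = shade_prescaled_light(60.0, 0.8, 0.3, 0.7)
print(a, b, abs(a - b) < 1e-9)   # the two conventions agree
```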

Personally I like to just leave the divide in the shader; that way it's easier to directly use BRDFs from academic papers. It also lets you assign a more precise radiometric meaning to your light intensity values, if you care to do so. For instance, with a directional light your intensity value becomes "the irradiance incident on a surface perpendicular to the light source".

Also one last note...you may be doing this already in your shader code, but that equation you posted should actually be like this:

I[sub]r[/sub] = I[sub]i[/sub]*(brdf[sub]diff[/sub] + c[sub]spec[/sub]*brdf[sub]specular[/sub])*dot(n, l)

The (n dot l) portion is an implicit part of applying the BRDF, and should be applied to both the diffuse and specular terms. This is because the BRDF is always in terms of incident irradiance, which means you need to include the cosine term.
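Putting that together, a minimal sketch of the corrected model (illustrative Python mirroring what the fragment shader would compute per light; the normalized Blinn-Phong lobe is used as an example specular BRDF, and all parameter values are hypothetical):

```python
import math

def reflect(light_intensity, c_diff, c_spec, n_dot_l, n_dot_h, shininess):
    """I_r = I_i * (brdf_diff + c_spec * brdf_spec) * dot(n, l),
    with the cosine term applied to BOTH lobes."""
    n_dot_l = max(n_dot_l, 0.0)   # no contribution when the light is below the surface
    brdf_diff = c_diff / math.pi  # normalized Lambertian diffuse
    # Normalized Blinn-Phong lobe, using the common (n + 8) / (8*pi) factor.
    brdf_spec = (shininess + 8.0) / (8.0 * math.pi) * max(n_dot_h, 0.0) ** shininess
    return light_intensity * (brdf_diff + c_spec * brdf_spec) * n_dot_l

# Hypothetical values: a bright light, mid-grey albedo, dielectric-ish specular.
print(reflect(60.0, 0.5, 0.04, n_dot_l=0.8, n_dot_h=0.95, shininess=64.0))
```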
Thank you guys. My conclusion then is that instead of assuming my light intensity values are "silently" divided by pi, which interestingly could work for some older lighting models as mentioned (thanks for the info MJP!), I should just divide by pi (and by the corresponding normalization factors for the specular BRDFs) in the shader, to keep it consistent with a wide range of BRDF models. Otherwise I would have to keep track of which models give an acceptable result with and without the diffuse normalization factors, which feels like unnecessary work. So judging by the answers given here, division by pi and higher values for my intensity seem to be the way to go.

Right, thanks for the reminder MJP. I forgot that I had to use dot(n, l) for the specular term too. It should definitely be there as part of calculating the incident irradiance for the surface. What's funny is that I had apparently already written it in my code for a non-normalized Blinn-Phong, but that was before I decided to take a shot at the whole BRDF subject. The reason had nothing to do with my knowledge of BRDFs (which was largely forgotten around the time I implemented Blinn-Phong): I wanted my specular term to be set to zero whenever dot(n, l) was zero, and it was suggested to do that in the equation instead of having a conditional statement in the shader. So... I guess I can leave it there as it is :)

Well, one more question if that's ok. I was looking through this paper a few days ago and thought it would be a good reference for certain materials and for confirming the differences between BRDF models. As can be seen in the paper, they have a table for each material listing values for parameters such as the diffuse and specular constants and the 'n' exponent, plus some extra parameters depending on the model. These values are quite small here as well, so I assume I need to increase the intensity of my light here too. Is there any "best" suggested brightness, or is it more trial and error until I get a similar result?
If you're going down the physically based shading route, you're going to have to implement an HDR tonemapper at some point, which will help you deal with a wider range of light values.
Real-world lights have a very large range of brightness levels -- human night vision kicks in when the scene is lit with less than 'one candle's worth' (1 candela, a scientific unit) of light, and can still see down to about one millionth of a candela. Human daytime vision can comfortably deal with levels from 1 candela up to a million candela. It's common for a scene to have lights which differ in luminosity by a factor of 10000x.
HDR: It's Not Just For Shitty Bloom, We Swear™

EDIT: Also important if you ever decide to go the image-based lighting route. If you clamp your reflected light to 1, you're only ever going to darken the surface and clip.
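As a concrete example of the tonemapping step mentioned above, the simplest Reinhard operator maps any non-negative HDR value into [0, 1) (an illustrative Python sketch; real renderers apply this per pixel in a post pass, usually with exposure control on top):

```python
def reinhard(hdr, exposure=1.0):
    """Basic Reinhard tone mapping: x / (1 + x), after an exposure scale."""
    x = hdr * exposure
    return x / (1.0 + x)

# Values spanning several orders of magnitude all land in [0, 1).
for value in (0.01, 1.0, 60.0, 10000.0):
    print(value, round(reinhard(value), 4))
```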
