Samith

cosine term in rendering equation

13 posts in this topic

I've been trying to figure this out for a while now and I can't quite seem to do it. As far as I know, the cosine term should always be present in the rendering equation (RadianceOut = Sum(BRDF(In, Out) * RadianceIn * cos(theta))) because it's necessary to scale the incoming radiance by the projected area along the incoming ray.
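Spelled out in integral form (standard notation for the same shorthand above):

```latex
L_o(\omega_o) = \int_{\Omega} f_r(\omega_i, \omega_o)\, L_i(\omega_i)\, \cos\theta_i \, d\omega_i
```

where theta_i is the angle between the surface normal and the incoming direction omega_i.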

 

However, I never see people use a cosine term when they use cube maps for lighting. And in fact when I write a shader that uses a cube map for a mirror-like reflection, using the cosine term makes things look worse (to me!). The edges around my objects seem a bit too dark, and if I add a fresnel term things look way too dark. 

#version 410

uniform samplerCube sys_Envmap;

layout(location = 0) in vec3 in_WorldReflection;
layout(location = 1) in vec3 in_WorldNormal;

out vec4 out_FragColor;

void main()
{
    vec3 worldRefl_n = normalize(in_WorldReflection);
    float n_dot_e = max(0.0, dot(worldRefl_n, normalize(in_WorldNormal)));

    vec3 cEnv = texture(sys_Envmap, worldRefl_n).xyz;
   
    float R0 = 0.05f;
    float Fenv = R0 + (1.0 - R0) * pow(1.0 - n_dot_e, 5.0);

 // out_FragColor.xyz = Fenv * cEnv;             // good?
    out_FragColor.xyz = n_dot_e * Fenv * cEnv;   // gah! so dark!
    out_FragColor.a = 1.0;
}

There's some GLSL code, to give this post some context. The commented-out line looks "good" to me, but is physically incorrect to my understanding. The uncommented line is what I think is more "physically correct," but it looks worse to me, since the cosine term makes things so much darker.

 

Am I missing something? Is there a reason why I never see other people multiply their lighting maps by the necessary cosine term?
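To put numbers on the darkening, here's a quick standalone Python sketch of those two shader lines (`schlick` is just the Fresnel approximation from the shader above, with the same R0 = 0.05):

```python
# Numeric check of the two output lines in the shader above: Schlick's
# Fresnel approximation brightens toward 1.0 at grazing angles, but the
# extra cosine factor drives the final product toward 0 there instead.
def schlick(n_dot_e, r0=0.05):
    return r0 + (1.0 - r0) * (1.0 - n_dot_e) ** 5

for n_dot_e in (1.0, 0.5, 0.1, 0.01):
    fenv = schlick(n_dot_e)
    print(f"n.e={n_dot_e:<5} Fenv={fenv:.3f}  n.e*Fenv={n_dot_e * fenv:.4f}")
```

At n·e = 0.01 the Fresnel term alone is roughly 0.95, but multiplying by the cosine collapses the result to roughly 0.01, which is exactly the edge darkening I'm describing.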

Thanks for the time, guys! I think we're narrowing in on what I don't understand here.

 

So, I understand the idea behind filtering cubemaps ahead of time (and I do this already!) but in the ideal mirror case I'm not looking at anything but my top mip, and the fact that I'm treating my cubemap texel as an infinitely distant directional light is intended. Both of you mentioned that I need a proper BRDF, so I think this is where I'm really going wrong. In my sample above I'm using the implicit BRDF f(l, v) = { 1 : l = reflect(v, n); 0 : otherwise }. It is very easy to integrate your lighting when you use that BRDF! This BRDF is also perfectly energy conserving, I think! And this leads to my confusion: if the above BRDF already conserves all energy, then how could my grazing reflections possibly be any brighter? So I end up feeling like my problem is that they just aren't getting enough irradiance from the grazing light direction (because of the cosine term). 

 

Is that not a valid BRDF for an ideal mirror? I can see how it might not be. I guess technically an ideal mirror BRDF would be a delta function when the light dir equals the view dir, which would integrate to 1, but I'm not sure that's sufficiently different than what I have.

 

Also, as MJP pointed out, I have noticed that the Cook-Torrance model has a cos(theta) in the denominator, but didn't think the microfacet stuff would be applicable in an ideal mirror case. And the Beckmann distribution approaches infinity as the roughness approaches zero, so I felt like an ideal mirror sort of breaks Cook-Torrance.

 

Anyway, tl;dr: I'm really heavily concerning myself with the ideal mirror case, and then comparing my results to things that look really shiny, which might not be a fair test. Things I see in real life that look like ideal mirrors are probably not ideal, and are best modeled by Cook-Torrance or something with a low (but non-zero) roughness. I just want to make sure my math is right for the true ideal mirror case, though. So I guess I'm mostly interested in knowing 1) whether the above BRDF (f(l,v) = { 1 : l = reflect(v, n); 0 : otherwise }) is a valid BRDF for an ideal mirror, and 2) if it is, whether the cosine term still belongs in the equation and one should expect the surface to get darker at glancing angles.

 

I suspect the answers are yes and yes?

Edited by Samith

Microfacet BRDFs are indeed unnecessary in the case of an optically smooth surface (i.e. one with no microscopic bumps), which is what you are trying to replicate. Your BRDF is valid, although physically nonsensical, because there are no "100% reflectors" in real life: all materials absorb some energy and heat up a little. (You can produce optically smooth surfaces, though, or at least ones so smooth that there is no visible difference.)

 

I think your question is interesting. It seemed trivial to me at first, but after thinking about it I think I need to get out some radiometry books again. The question, in a physical sense, is: "if not 100% of the incoming radiance from the cube map is redirected to the viewer, where does the rest go?" Obviously the remaining percentage that is masked away by the cos(...) isn't reflected in some other direction. In the case of perfect reflection, all radiance incoming at a specific angle is projected towards a specific viewing direction, so all of it should arrive there. On the other hand, the cos(...) is still sensible in a physical sense, since incident rays can still hit the surface at angles that are not parallel to the surface normal, so the radiance should decrease in proportion to the cosine. Or should it? I have no answer for the question right now.

Edited by agleed

That is an interesting question actually. The extreme case of an ideal mirror should be easy to answer but I realised I wasn't sure either...

 

I haven't pulled out my textbooks yet, but just doing a quick MSPaint doodle first... the cosine term is there because (see images below) in the top image, the orange 'ray' of light is spread over a wider area when it hits the surface at a shallower angle. The shallower the angle, the wider the area; we're fine with this.

dot(N, L), where N is tangent to the black line and L is outwards along any orange line, is the term we use here.

[image yDH7V6q.png: doodle of orange light rays hitting a surface at steep vs. shallow angles, with thin grey view rays]

 

But shouldn't we also have some kind of cosine term when evaluating the viewer? Say that the viewer is looking at this surface flat on (i.e. dot(N,V)==1.0). That would make the viewer rays be the thin grey lines in the image.

In the top image, the light rays are spread out over a large area, but the view rays are much more concentrated.

 

Or alternatively, let's pretend that orange rays are view rays, and thin-grey are light rays. In the top scenario, the pixel that the viewer evaluates covers a very large area, whereas the pixel that the viewer evaluates for the bottom scenario covers a smaller area.

Should we be incorporating the projected-area of the pixel with respect to the viewer, as well as the projected-area with respect to each light? What would this term be? Would you divide the results by dot(N,V)?


But shouldn't we also have some kind of cosine term when evaluating the viewer? Say that the viewer is looking at this surface flat on (i.e. dot(N,V)==1.0). That would make the viewer rays be the thin grey lines in the image.

In the top image, the light rays are spread out over a large area, but the view rays are much more concentrated.

 

 

Yes, I think that's right. At larger angles, the part of your eye where the energy hits (for us, the pixel) sees a larger area of the surface, and we have to account for this: as the angle between the normal and the view direction grows, the received radiance should grow. I agree that this can be achieved with 1/cos(n,v). In the case of a perfect reflection, cos(n,v) == cos(n,l), so it exactly cancels the spread of energy resulting from larger incoming light angles. So if Samith picks a BRDF of f(..) = 1/cos(n,v), it solves his problem.

 

Now that I think about it, this is part of the geometry term you see in the area form of the rendering equation.

 

[image uvo36Ms.png: the area form of the rendering equation, showing the geometry term with its two cosine factors]

 

There you have two cos(...) terms used for scaling. There's no division by cos in there, but that's just because the angle is defined as the incoming angle onto the surface that receives the lighting indirectly, which in our case would be the angle between the viewing direction and the normal of the screen pixel where the light arrives.

Edited by agleed

Should we be incorporating the projected-area of the pixel with respect to the viewer, as well as the projected-area with respect to each light? What would this term be? Would you divide the results by dot(N,V)?

 

I went down this path at first, too, but couldn't get the math to work. But, I just tried again, and I think this time my math works (and makes sense):

let N = surface normal, V = view direction, I = incident light direction

L_r = reflected radiance
L_i = incident radiance

(N . V) * dA is the projected area underneath the pixel

L_r = Power_r / ((N . V) * dA * dW)
L_i = Power_i / ((N . I) * dA * dW)

// About L_i: L_i is the radiance stored in the cube map, and
// (N . I) is the cosine term in the rendering equation

Power_r = L_r * (N . V) * dA * dW
Power_i = L_i * (N . I) * dA * dW
Power_r = Power_i   // all incident light is reflected in the reflection dir
// I THINK this above is the important step. In a mirror case, all energy is
// reflected in the reflection direction. In a diffuse case, I think it would
// be more like Power_r = Power_i * (N . V) / pi, since less energy would exit
// through the smaller projected areas at oblique angles

therefore:
L_r = Power_i / ((N . V) * dA * dW)
    = L_i * (N . I) * dA * dW / ((N . V) * dA * dW)
    = L_i * (N . I) / (N . V) = L_i, since (N . I) == (N . V)

I've spent way too much time thinking about this! Hopefully the math/comments above are correct. They seem correct to me, and they justify the 1 / (N . V) factor in the mirror case.
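As a sanity check, here's the same bookkeeping in a few lines of Python (my own sketch; dA and dW are arbitrary small patch and solid-angle sizes, and they cancel out):

```python
import math

# Mirror case from the derivation above: radiance -> power via the
# projected area (cosine), all power reflected, then power -> radiance
# via the same projected area. The cosines cancel and L_r == L_i.
def mirror_reflected_radiance(L_i, cos_theta, dA=1e-4, dW=1e-3):
    power_i = L_i * cos_theta * dA * dW     # incident power on the patch
    power_r = power_i                       # ideal mirror absorbs nothing
    return power_r / (cos_theta * dA * dW)  # radiance toward the viewer

for cos_theta in (1.0, 0.5, 0.1, 0.01):
    assert math.isclose(mirror_reflected_radiance(2.0, cos_theta), 2.0)
```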

Edited by Samith

I'm trying to reconcile that with the ideal diffuser (Lambert) case now... The lambert BRDF is just "k" (diffuse colour), so for a white surface we typically just use dot(N,L) in our per pixel calculations.

If we incorporate the view angle, though, we get dot(N,L)/dot(N,V)... which results in a very flat and unrealistic-looking surface.


Samith, in the ideal mirror case, I am certain that you had the right idea when you said it should be a delta function. That is the only way that your BRDF will integrate to 1. I suppose in code, this can only be approximated by having your BRDF equal 1/Δω, where Δω is whatever step size you're using in your integration.


Samith, in the ideal mirror case, I am certain that you had the right idea when you said it should be a delta function. That is the only way that your BRDF will integrate to 1. I suppose in code, this can only be approximated by having your BRDF equal 1/Δω, where Δω is whatever step size you're using in your integration.

 

Yep, the BRDF is a delta function (in two dimensions). Usually, though, it is handled specially in code: the (unique) reflected ray is calculated analytically rather than by integrating the BRDF, since sampling techniques tend to break down when confronted with a delta distribution. Alternatively, you can define "ideal mirror" == "extremely shiny surface," so that the BRDF is not quite a delta function but is close enough, without requiring special handling. It depends on your needs. But in general you don't sample a BRDF at uniform intervals anyway; it's too expensive.

 

As for the cosine term confusion, remember that the BRDF essentially says "hey, such and such amount of energy is falling on my differential surface from direction L, how much of that is reflected into direction V?" and so (thinking of it as a function) converts irradiance from L to radiance into V. Now you're not given the irradiance from L, you're only given radiance from L. But you know the angle L makes with the surface normal (theta) and irradiance is equal to radiance multiplied by the cosine of the angle (e.g. grazing angle = zero irradiance, no matter how much energy is being beamed parallel to the surface, and normal incidence = maximum irradiance). That's what the cosine term in the rendering equation is doing! The Li * cos(theta) term is actually your irradiance from L, you multiply this by the BRDF to obtain radiance into V, and integrating this over the unit sphere or hemisphere gives you the total radiance into V taking into account every light source or reflector in the world. By reciprocity you can also do it backwards, etc...
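That "irradiance equals radiance times the cosine" relationship is small enough to sketch directly (standalone Python, illustrative only):

```python
import math

# A fixed-radiance beam delivers maximum power per unit surface area at
# normal incidence and essentially none at grazing incidence, no matter
# how much energy is being beamed parallel to the surface.
def irradiance(radiance, theta_radians):
    return radiance * max(0.0, math.cos(theta_radians))

print(irradiance(1.0, 0.0))          # normal incidence -> full radiance
print(irradiance(1.0, math.pi / 2))  # grazing -> ~0 (up to floating point)
```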

 

So you only need a single cosine term, because that's how the BRDF is defined. Now, since the BRDF converts irradiance into radiance, it probably has to take dot(N, V) into account, and many do in some way, e.g. Cook-Torrance, by dividing by dot(N, V): since radiance is defined as watts/square meter/steradian, the smaller the projected area from which the light is emitted (the smaller dot(N, V)), the larger the radiance becomes (the energy is "concentrated" over a smaller projected area). But the ideal diffuser doesn't have to, because that's what an ideal diffuser is: it reflects with constant radiance, so its BRDF should be constant for radiance to be constant. The viewer's position is irrelevant.

 

When the light finally hits the sensor, you don't want to use irradiance as a final "pixel color" or anything, because irradiance is the amount of energy falling onto a differential cell of the sensor, and that amount is proportional to the cosine of the incident angle of the light falling onto that cell. So you actually want to use radiance, and there is no need to consider the angle the light makes with the surface normal of the sensor or screen. Yeah, it's quite confusing with all the different terms and units being thrown around (sometimes in conflicting ways by different people), and there are various interpretations that are equally valid but seem quite different, but I think it makes sense.



So you only need a single cosine term, because that's how the BRDF is defined. Now, since the BRDF converts irradiance into radiance, it probably has to take dot(N, V) into account, and many do in some way, e.g. Cook-Torrance, by dividing by dot(N, V): since radiance is defined as watts/square meter/steradian, the smaller the projected area from which the light is emitted (the smaller dot(N, V)), the larger the radiance becomes (the energy is "concentrated" over a smaller projected area). But the ideal diffuser doesn't have to, because that's what an ideal diffuser is: it reflects with constant radiance, so its BRDF should be constant for radiance to be constant. The viewer's position is irrelevant.

 

Yeah. I think what's confusing about the BRDF is that a BRDF of 1 steradian^-1 means that the RADIANCE is distributed equally in all directions. The radiant intensity (watts / steradian) is not the same in all directions, but the smaller projected area at oblique angles cancels out the change in radiant intensity, and the radiance stays the same. In the ideal mirror case, the radiant intensity doesn't go down at oblique angles, it stays the same, and you need the BRDF to reflect that with the 1 / dot(N, V) term.
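That cancellation for the ideal diffuser can be shown in the same style (another illustrative Python sketch):

```python
import math

# Lambertian patch: radiant intensity falls off as cos(theta), but the
# projected area the viewer sees shrinks by the same factor, so the
# radiance (intensity / projected area) is identical from every angle.
def lambertian_radiance(I0, theta, dA=1.0):
    intensity = I0 * math.cos(theta)        # W/sr, cosine falloff
    projected_area = dA * math.cos(theta)   # observed area shrinks too
    return intensity / projected_area       # constant: I0 / dA

for theta in (0.0, 0.3, 1.0, 1.5):
    assert math.isclose(lambertian_radiance(5.0, theta), 5.0)
```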


Hi,

 

This paper addresses the problem of dark reflections at grazing angles for metals and suggests an improved BRDF to fix it:

 

http://sirkan.iit.bme.hu/~szirmay/brdf6.pdf

 

The correction term mentioned in that paper allows the simulation of ideal mirrors and in general boosts the specular reflections of all metallic surfaces. It is trivial to add it to your pipeline (either to shaders or pre-convolved cubemaps) so give it a try!

I'm trying to reconcile that with the ideal diffuser (Lambert) case now... The lambert BRDF is just "k" (diffuse colour), so for a white surface we typically just use dot(N,L) in our per pixel calculations.
If we incorporate the view angle, though, we get dot(N,L)/dot(N,V)... which results in a very flat and unrealistic-looking surface.

There are actually three cosine terms: dot(incident light dir, surface normal), dot(emitted light dir, normal), dot(viewer dir, normal) -> the last two cancel out (the differential area over which the emitted light is distributed grows proportionally to the observed differential area).

