BRDF & Environment mapping


spek    1240
Hi, I made a Cg shader that does BRDF mapping, based on an nVidia technique that uses two cubemaps filled with pre-computed data. With the help of a light vector and an eye vector, the shader calculates the right coordinates to pick texels from the cubemaps. The result is a value that describes how much light is reflected at that spot. Basically, it's a lighting model like Phong or Blinn, but with more physically correct data.

Anyway, now I'd like to combine this technique with an environment map so that I can add some reflections. Instead of using something like Fresnel, I'd like to do it with the BRDF. There are some important differences though. The default nVidia BRDF approach is made for a single light (you give one light position), but with a cubemap the light could of course come from multiple directions (an env map with a few windows acting as light sources, for example). How to handle that?

Normally, the vertex shader calculates the BRDF cubemap coordinates from the light and eye vectors:
float3 halfV = normalize( lightV + eyeV );  // note: 'half' itself is a reserved type in Cg
float3 hTex  = float3(
    dot( halfV, iTangent ),
    dot( halfV, iNrm     ),
    dot( halfV, iBiNrm   )
);

So I was thinking, what about replacing 'lightV' with the vector used to fetch a texel from the cubemap? This code would move to the fragment shader, and lightV would be something like "reflect( -eyeV, normal )" (Cg's reflect takes the incident vector first, then the normal), instead of "normalize( vertexPos - lightPos )". However, this doesn't exactly work, otherwise I wouldn't ask of course :) Does anyone have an idea how to combine envmapping with the nVidia BRDF approach? Greetings
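A CPU-side sketch of that substitution (NumPy, with made-up fragment inputs) shows why it misbehaves: when lightV is replaced by the mirror reflection of the eye vector, the half vector collapses to the surface normal, so the BRDF lookup coordinate no longer varies with roughness or view angle the way it does for a point light.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def reflect(i, n):
    # same convention as Cg's reflect(): i is the incident direction
    return i - 2.0 * np.dot(n, i) * n

# hypothetical fragment inputs, all unit length, world space
normal   = np.array([0.0, 1.0, 0.0])
tangent  = np.array([1.0, 0.0, 0.0])
binormal = np.cross(normal, tangent)
eye_v    = normalize(np.array([0.3, 0.8, 0.5]))   # surface -> camera

# the proposed substitution: the env-map fetch direction plays the role of lightV
light_v = reflect(-eye_v, normal)

# half vector and tangent-space lookup coordinate, as in the vertex shader
half_v = normalize(light_v + eye_v)
h_tex  = np.array([np.dot(half_v, tangent),
                   np.dot(half_v, normal),
                   np.dot(half_v, binormal)])
```

Since reflect(-e, n) + e = 2(n·e)n, half_v here is always exactly the normal (h_tex becomes (0, 1, 0) in tangent space), which degenerates the BRDF lookup to a single texel regardless of the view.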

cwhite    586
I think the biggest difficulty you will face is that if your material is not perfectly specular, you will need to take many samples to get an accurate result. This might not be feasible in real time, but I've honestly never tried it in a shader, so I don't know what its performance implications are. The only way to find out is to test it. Without importance sampling you can just take a bunch of samples of the cubemap and average them together, but it will require MANY more samples than if you figured out an importance sampling scheme. Brush up on Monte Carlo integration to figure this out.
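A minimal NumPy sketch of the brute-force version: uniform hemisphere samples, averaged and scaled by the hemisphere area. The `env_radiance` function is just a stand-in for a cubemap fetch (a bright patch standing in for a window), and a Phong lobe stands in for the BRDF; both are made-up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_hemisphere(n_samples):
    # uniform directions on the upper hemisphere (z up)
    z = rng.uniform(0.0, 1.0, n_samples)
    phi = rng.uniform(0.0, 2.0 * np.pi, n_samples)
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def env_radiance(dirs):
    # stand-in for a cubemap lookup: a bright patch around +z, dim elsewhere
    return np.where(dirs[:, 2] > 0.9, 10.0, 0.1)

def phong_brdf(dirs, reflect_dir, exponent):
    cos_a = np.clip(dirs @ reflect_dir, 0.0, None)
    return cos_a ** exponent

# Monte Carlo estimate of the reflection integral:
# average of L(w) * brdf(w) * cos(theta) over uniform samples,
# times 2*pi because the uniform hemisphere pdf is 1/(2*pi)
dirs = sample_hemisphere(100_000)
reflect_dir = np.array([0.0, 0.0, 1.0])
integrand = env_radiance(dirs) * phong_brdf(dirs, reflect_dir, 50.0) * dirs[:, 2]
estimate = 2.0 * np.pi * integrand.mean()
```

With an exponent of 50 most samples contribute almost nothing, which is exactly the variance problem: an importance sampler that draws directions proportional to the Phong lobe would put nearly every sample where the integrand is large and converge with far fewer fetches.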

Cypher19    768
If the cubemaps are predefined, one possibility is to blur them at load/build time (don't shrink them; keep the resolution constant for most of the chain, although you can probably get away with lower resolutions as you blur more and more). Then, based on your specular exponent, pick the correct cubemap to look up, or better yet, lerp between two of them, almost like mip-mapping!
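The exponent-to-blur-level mapping above could be sketched like this (Python, with scalar stand-ins for the actual cubemap fetches; the log mapping and `max_exponent=256` cutoff are assumptions, not part of the original technique):

```python
import numpy as np

# hypothetical stack of pre-blurred cubemaps, sharpest first;
# each entry is just a scalar standing in for a full cubemap fetch
blur_levels = np.array([1.00, 0.70, 0.45, 0.25, 0.10])

def blur_lod(exponent, max_exponent=256.0, num_levels=5):
    # map the Phong exponent to a fractional blur level:
    # high exponents (tight highlights) -> level 0 (sharpest),
    # low exponents (rough surfaces)    -> the blurriest level
    t = np.log2(max(exponent, 1.0)) / np.log2(max_exponent)
    return (1.0 - np.clip(t, 0.0, 1.0)) * (num_levels - 1)

def sample_blurred(exponent):
    # lerp between the two nearest blur levels, like trilinear mip filtering
    lod = blur_lod(exponent)
    lo = int(np.floor(lod))
    hi = min(lo + 1, len(blur_levels) - 1)
    frac = lod - lo
    return (1.0 - frac) * blur_levels[lo] + frac * blur_levels[hi]
```

In a shader the same idea often maps onto the hardware mip chain directly, via an explicit-LOD cubemap fetch, so the lerp comes for free from trilinear filtering.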

spek    1240
Thanks for the help guys! But for now, I think I'll stick with the good old Fresnel methods instead of BRDF for reflections :)

Greetings