Deferred Lighting + Fresnel

6 comments, last by FreneticPonE 11 years, 12 months ago
Hello All.

I have been attempting to create a more physically correct rendering pipeline. It uses deferred lighting and a normalized Blinn-Phong. I have come to a stumbling block when attempting to implement the Fresnel effect using Schlick's approximation. I do not want to fake it by using N dot V in the second forward pass. So far my render targets look like the following:
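For reference, Schlick's approximation replaces the full Fresnel equations with a cheap interpolation from the specular reflectance at normal incidence, F0, toward white at grazing angles. A minimal HLSL sketch (function and parameter names are just illustrative):

```hlsl
// Schlick's approximation: F(theta) = F0 + (1 - F0) * (1 - cos(theta))^5
// F0       = specular reflectance at normal incidence (the material's specular color)
// cosTheta = typically L.H for a microfacet BRDF, or N.V for the cheap per-pixel fake
float3 FresnelSchlick( float3 F0, float cosTheta )
{
    return F0 + ( 1.0f - F0 ) * pow( 1.0f - saturate( cosTheta ), 5.0f );
}
```

The deferred-lighting problem in this thread boils down to the fact that F0 (a per-material color) isn't available during the light accumulation pass unless you store it, or an index to it, in the G-buffer.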

Depth + A: Nothing stored in alpha yet.
RGBA8 - Best Fit Normals in RGB and monochrome gloss map value stored in A, which is used for specular power.

I am using a 16-bit-per-channel floating point render target for my light accumulation buffer, since I could not find a decent storage method to use with an 8-bit-per-channel one. All the info I could find required swapping render targets back and forth when you have overlapping lights, which doesn't seem like a good idea performance-wise (not tested, though).

My first thought was to create a lookup map for specular color. I could store 256 common specular colors within this lookup and store the index into the Depth render targets alpha channel. I could then lookup the color during the light accumulation phase and output the accumulated specular color to yet another render target. This would require another large render target to support HDR values, much like my light accumulation target.
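A sketch of that lookup idea as it might appear in the light accumulation pass (the palette texture, sampler name, and index packing here are all hypothetical, not from any shipping engine):

```hlsl
// Hypothetical 256-entry 1D palette of common specular (F0) colors.
sampler1D specularPaletteSampler;

// 'index' is the 8-bit value read from the depth target's alpha channel, as 0..1.
float3 GetSpecularColor( float index )
{
    // Remap so an 8-bit value i/255 lands on the center of texel i
    // in a 256-texel lookup: u = (i + 0.5) / 256.
    float u = index * ( 255.0f / 256.0f ) + 0.5f / 256.0f;
    return tex1D( specularPaletteSampler, u ).rgb;
}
```

With F0 recovered this way, the light pass can evaluate the Schlick term directly and accumulate a proper specular color, at the cost of the extra HDR specular accumulation target mentioned above.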

I noticed in Tri-Ace's 2012 GDC slides that they use deferred lighting with Fresnel-Schlick, but they didn't go into much detail on how they accomplished it in the lighting phase. Does anyone have any brilliant ideas on how I might accomplish this, or is my current idea sufficient for a decent range of materials?

Thanks in advance,

-Mike
Tri-Ace doesn't really elaborate on how their renderer is set up; it's totally possible they do a more traditional deferred rendering approach with a fat G-Buffer. A lot of people tend to be pretty loose with terminology regarding deferred lighting, deferred shading, deferred rendering, etc. When you say "deferred lighting" I assume you're talking about rendering a very minimal G-Buffer, rendering lighting to a buffer, and then rendering geometry again which samples the lighting buffer? If so, you might want to consider ditching that approach and going with a fat G-Buffer. The kind of deferred lighting you're doing has a lot of disadvantages (such as the one you mentioned), and not so many advantages. It can be different on certain hardware, where you might have a penalty for using MRT or some extra hardware on which you can parallelize the lighting computations, but if you're not working with that it's really not a compelling choice.
Thanks for the reply. You are correct: by deferred lighting I am referring to what is also called "Light Pre-Pass", the same thing Wolf has posted on his blog, as well as the technique Uncharted and CryEngine 3 use. Uncharted applied the fake Fresnel effect using N dot V, and it sounds like CryEngine 3 did something on the CPU to supply an L.H value to the second forward pass. Not exactly sure what they did there.

The main reason I chose it is that I would like to get it running on the Xbox 360 with decent performance. Traditional deferred rendering with a fat G-buffer becomes problematic at higher resolutions on the Xbox 360 due to the 10 MB of EDRAM. Of course, the way people have worked around that is to split the screen in half and render in two passes. I was hoping to avoid that.

If you use XNA, predicated tiling is done by the API; if you use the Microsoft SDK you have to implement predicated tiling yourself. The XNA team said that a good predicated tiling technique is really fast, and it seems that way. But like you, I prefer that the EDRAM holds the whole G-Buffer.

I still haven't researched much about Fresnel effects, and that includes the Schlick technique. But looking in the Crysis 2 resources I see a fresnel_sampler.dds texture, and they read it using the following code:


// Cheap Schlick Fresnel term using a lookup table (each channel has a different power: x = 1, y = 2, z = 4, w = 5)
half4 GetFresnelTex( float NdotI, float bias )
{
    return tex2D( fresnelShlickMapSampler, float2( NdotI, bias ) );
}


Where NdotI is (I think) NdotV, following the rest of the shaders.
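My guess at what each texel of that lookup encodes, based only on the comment in the quoted code (powers 1, 2, 4, 5 per channel), is a Schlick-style curve with the bias acting as F0. An analytic equivalent might look like this; it is speculation, not Crytek's actual code:

```hlsl
// Guessed encoding: each channel is F = bias + (1 - bias) * (1 - NdotI)^power,
// with powers 1, 2, 4 and 5 in x, y, z, w respectively.
half4 GetFresnelAnalytic( float NdotI, float bias )
{
    float f = 1.0f - saturate( NdotI );
    half4 powers = half4( f, f * f, f * f * f * f, pow( f, 5.0f ) );
    return bias + ( 1.0f - bias ) * powers;
}
```

If that reading is right, the texture only bakes out a scalar curve per channel, which would explain why it can be cheap but also why it can't directly encode colored (metal) F0 values.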

Does this help you?

Moderators: if I posted something that I shouldn't have, then let me know.

Project page: < XNA FINAL Engine >
I have yet to test the built-in predicated tiling in XNA, so I cannot say how well it performs. I currently do not hold a Creators Club membership and have mostly been designing based on knowledge I've read around the web. I will probably create another renderer that uses the fat G-buffer and see if I can measure the performance penalties of predicated tiling. Worst case, I use the NdotV approach with deferred lighting on the Xbox and a fat G-buffer on the PC. Or, if predicated tiling does not seem to be an issue, then I will just stick with a fat G-buffer.

Thanks for posting the shader code. It is kind of interesting that they read it from a texture, as that would seem to only support dielectric materials. Is the texture monochromatic?
Having to tile isn't necessarily a deal-breaker. On XNA it's transparent to you, and the overhead depends on the amount of geometry as well as the amount of draw calls you need to make. Of course you should keep in mind that with your current set up you will need to tile at 1280x720 since you have two render targets + a depth/stencil buffer which is 10.5MB.

Thanks for posting the shader code. It is kind of interesting that they read it from a texture, as that would seem to only support dielectric materials. Is the texture monochromatic?


It is very similar to the attached texture.


Having to tile isn't necessarily a deal-breaker. On XNA it's transparent to you, and the overhead depends on the amount of geometry as well as the amount of draw calls you need to make. Of course you should keep in mind that with your current set up you will need to tile at 1280x720 since you have two render targets + a depth/stencil buffer which is 10.5MB.


1024 x 600 (without MSAA, of course) is my happy number for the Xbox (same as the COD games). The G-Buffer can hold three 32-bit render targets plus a depth/stencil buffer.
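For anyone following along, the EDRAM arithmetic behind both numbers works out as follows, assuming 32 bits per pixel per surface:

```
1280 x 720 x 4 bytes = ~3.5 MB per surface
  2 render targets + depth/stencil = 3 surfaces = ~10.5 MB -> exceeds 10 MB, must tile

1024 x 600 x 4 bytes = ~2.3 MB per surface
  3 render targets + depth/stencil = 4 surfaces = ~9.4 MB  -> fits in 10 MB of EDRAM
```

So dropping the resolution buys an entire extra 32-bit render target while staying under the tiling threshold.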


The main reason I chose it is that I would like to get it running on the Xbox 360 with decent performance. Traditional deferred rendering with a fat G-buffer becomes problematic at higher resolutions on the Xbox 360 due to the 10 MB of EDRAM. Of course, the way people have worked around that is to split the screen in half and render in two passes. I was hoping to avoid that.


You do know the next Xbox and PlayStation are due soon, right? Or were you hoping to be done much sooner, and counting on the much larger installed base?
Anyway, from what I remember both Crysis 2 and Halo: Reach shrank their horizontal resolutions down to something like 1024x720 and then rescaled the finished frame to 1280x720 in order to fit everything into EDRAM without tiling. And of course Battlefield 3 went for a fat G-buffer on the 360 anyway; they got it working.

Best of luck either way, and if you want to look at the Crysis 2 stuff Crytek is nice enough to put all their powerpoints and stuff up on their site: http://www.crytek.co...e/presentations
