xXShadowAsasNXx

Deferred vs. Light Pre Pass renderer


Alright, so I'm creating a game engine that was originally planned to support a forward renderer, but I've done a bit of research and I think one of the methods in the title is the way to go... The only problem is, I don't know which to pick. Anyone want to convince me to use one or the other? Tell me the pros and cons of each, and anything else you want to say.

I'm also still deciding which one to use, but I tend toward the light pre-pass renderer.

One reason is that MSAA support isn't trivial with a deferred solution, and the G-buffer has high memory requirements. Additionally, materials are easier to handle with a forward renderer (which is what you'd use for the final pass of a light pre-pass setup).
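
To put rough numbers on the memory point (the formats and target counts below are illustrative assumptions, not from any particular engine):

```python
# Rough buffer-memory estimate at 1080p, assuming RGBA16F render targets.
width, height = 1920, 1080
bytes_per_pixel = 8  # 4 channels * 16-bit float

# Deferred: a "fat" G-buffer, e.g. albedo, normal, depth, material params.
gbuffer_targets = 4
deferred_mb = width * height * bytes_per_pixel * gbuffer_targets / 2**20

# Light pre-pass: a depth/normal buffer plus a light accumulation buffer.
prepass_targets = 2
prepass_mb = width * height * bytes_per_pixel * prepass_targets / 2**20

print(f"deferred G-buffer: {deferred_mb:.0f} MiB, light pre-pass: {prepass_mb:.0f} MiB")
```

Halving the number of full-resolution targets is where the light pre-pass saving comes from; the exact figures depend entirely on the formats you choose.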

Still, it depends on your design and the features you need. If you already have a z-prepass, that's another point in favour of the light pre-pass renderer.

Note that there's a third solution: light indexed rendering.

Quote:
Original post by Lord_Evil
One reason being that MSAA support isn't that trivial using a deferred solution
This is something Wolfgang Engel brought up, as in "yeah, MSAA certainly works fine", but which I honestly never understood (or, as Woody Allen would put it: "Everything you always wanted to know, but were afraid to ask...").

You're rendering depth and normals into a buffer, then calculate all the accumulated lighting (independent of albedo) with a fullscreen quad, and finally render the geometry a second time as with forward rendering, multiplying albedo with the now already present lighting information.
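
As a sketch of that three-pass split (pure CPU math; the diffuse light model, names, and values are just illustrative assumptions):

```python
# Minimal per-pixel sketch of the light pre-pass structure described above.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = dot(v, v) ** 0.5
    return tuple(x / length for x in v)

# Pass 1: geometry pass writes depth + normal (here we just keep the normal).
normal = normalize((0.0, 1.0, 0.2))

# Pass 2: fullscreen pass accumulates lighting, independent of albedo.
lights = [((0.0, 1.0, 0.0), (1.0, 0.9, 0.8)),   # (direction, colour)
          ((1.0, 0.0, 0.0), (0.2, 0.2, 0.3))]
light_accum = [0.0, 0.0, 0.0]
for direction, colour in lights:
    n_dot_l = max(dot(normal, normalize(direction)), 0.0)
    light_accum = [a + c * n_dot_l for a, c in zip(light_accum, colour)]

# Pass 3: the geometry is rendered a second time; the material multiplies
# its albedo with the already-present lighting information.
albedo = (0.5, 0.4, 0.3)
final = [a * l for a, l in zip(albedo, light_accum)]
print(final)
```

The point of the split is visible in the code: pass 2 never touches the albedo, and pass 3 never loops over lights.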

Now, this will certainly make use of MSAA to render the geometry (if enabled), but your subsample shaders can still only read from the lighting texture what's actually there. The eye is much more sensitive to light than to colour (many television standards and image compressing schemes exploit that fact), so getting the light right is more important than the albedo.
Thus, if the final pass is multisampled, the lighting texture would consequently have to be multisampled too, and so would the depth/normal texture used to generate it.

I may not understand the issue right... but in my opinion that leaves you with more or less the same main problem that deferred shading with supersampling has: fat buffers eating up a lot of memory.
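
To make the "fat buffers" point concrete (assumed formats and sample count; the idea is just that every buffer the lighting pass depends on scales with the sample count):

```python
# Illustrative cost of multisampling the intermediate buffers at 1080p.
width, height, samples = 1920, 1080, 4
bytes_per_pixel = {"depth_normal": 8, "light_accum": 8}  # RGBA16F-ish formats

single_mb = sum(bytes_per_pixel.values()) * width * height / 2**20
multi_mb = single_mb * samples  # every sample needs its own lighting data
print(f"1x: {single_mb:.0f} MiB, {samples}x MSAA: {multi_mb:.0f} MiB")
```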

The only real advantage I see is that the hardware helps a little with the multisampling, saving a few shader cycles when blending in the lighting, as it will only read subpixels near polygon edges, not everywhere. But then again, it re-renders the entire geometry for that, which is somewhat of a tradeoff.

