help understanding deferred shading

svnstrk    133

i just read a paper about deferred shading, well, skimmed more than read :D
as i understand it, the point of this technique is to store the position information of each pixel on the screen in a texture, then use that texture to calculate the lighting intensity for each pixel.

now i'm kinda new to glsl too, but isn't that the same as doing per-pixel lighting in glsl? what i have in mind is: the vertex shader sends the position (from gl_Vertex) to the fragment shader, and by sending the light positions to glsl (via uniform variables or whatever) we calculate the value of each pixel.
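For reference, the forward per-pixel approach described above might look roughly like this (a minimal sketch in old-style GLSL with a single point light; the variable names are just illustrative):

```glsl
// Vertex shader: pass eye-space position and normal to the fragment stage.
varying vec3 vPosition;   // eye-space position
varying vec3 vNormal;     // eye-space normal

void main()
{
    vPosition   = vec3(gl_ModelViewMatrix * gl_Vertex);
    vNormal     = gl_NormalMatrix * gl_Normal;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```

```glsl
// Fragment shader: simple diffuse lighting for one point light.
varying vec3 vPosition;
varying vec3 vNormal;
uniform vec3 uLightPos;    // light position in eye space
uniform vec3 uLightColor;

void main()
{
    vec3 N = normalize(vNormal);
    vec3 L = normalize(uLightPos - vPosition);
    float diffuse = max(dot(N, L), 0.0);
    gl_FragColor = vec4(uLightColor * diffuse, 1.0);
}
```

This runs once per rasterized fragment of every mesh, for every light you apply to that mesh, which is the key difference from deferred shading discussed in the replies below.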

isn't that the same concept, or did i miss something? if it is the same, how can deferred shading boost performance vs., let's say, per-pixel forward lighting?

thanks in advance

DarkChris    110
With standard forward rendering you would have to render every light with every object in the scene, and you would also be lighting objects that can't even be seen. That's a total waste of performance. Deferred rendering, or light pre-pass (which seems to be the better solution), solves this problem by rendering all the information needed for lighting into a so-called g-buffer. And since the pixels in the g-buffer are the only pixels that need to be lit, you never illuminate pixels that aren't visible in the final frame.
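A g-buffer fill pass might be sketched like this (assuming GLSL with multiple render targets via gl_FragData; the exact layout and formats vary a lot between engines, and many store depth instead of full position):

```glsl
// G-buffer fill pass: write the per-pixel data needed for lighting into MRTs.
varying vec3 vNormal;     // eye-space normal
varying vec3 vPosition;   // eye-space position
varying vec2 vTexCoord;
uniform sampler2D uDiffuseMap;

void main()
{
    // RT0: surface albedo
    gl_FragData[0] = vec4(texture2D(uDiffuseMap, vTexCoord).rgb, 1.0);
    // RT1: normal, packed from [-1,1] into [0,1]
    gl_FragData[1] = vec4(normalize(vNormal) * 0.5 + 0.5, 0.0);
    // RT2: position (often reconstructed from the depth buffer instead,
    // to save storage and bandwidth)
    gl_FragData[2] = vec4(vPosition, 1.0);
}
```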

Light pre-pass means that you render your g-buffer but only keep the information needed to calculate the lighting, and then combine it all afterwards in a full 3D pass again, so every object needs to be projected twice. Light pre-pass is practically necessary if you want Precomputed Radiance Transfer, for example, since you don't want to keep all your spherical-harmonics coefficients in the g-buffer.

MJP    19754
Deferred rendering fundamentally changes the way you handle lights, and thus fundamentally changes the performance characteristics. Depending on your renderer and your scene, the change can be very positive or very negative. First consider the following regarding light performance with forward rendering:

1. Light calculations are done in the fragment shader executed for your rasterized scene geometry. Even assuming no overdraw at all for opaque geometry (through early Z pass + perfect Z-cull, occlusion culling, etc.), your performance can still be negatively affected by rasterizing many small triangles. Fragment shaders are always executed in 2x2 quads, and small triangles that don't occupy the full quads will cause wasted fragment shader executions and thus poor efficiency.

2. Generally your light visibility culling (determining which fragments need to be lit by which lights) can only be done at the granularity of a single draw call, which typically means at most a single mesh. So basically you try to figure out which lights to use when shading a mesh, and you can only say "I'm going to apply this light to this mesh" or "I'm not going to apply this light to this mesh", which means every fragment generated for that mesh will need to perform the lighting (and possibly shadowing) calculations. This gets even worse if you want to batch meshes together into a single draw call to avoid CPU overhead. It's possible to render a light volume so that you can stencil out the pixels that need light shading, but that means drawing meshes multiple times and possibly having multiple stencil buffer clears.

3. It's possible to perform all lighting and material calculations in the fragment shader when rendering a mesh. This means MSAA works and you don't need any intermediate buffers, which is good. On the flip side, it also means your fragment shader can be very heavy, which means that overdraw or wasted fragment executions will be painful. An early-z pass can eliminate most of it, but early z-cull is not perfect, and efficiency will depend not just on the hardware but on your scene and the contents of the depth buffer (for instance, using alpha-testing/discard can really screw it up).

Now let's consider deferred rendering:

1. Lighting is performed in its own pass, completely decoupled from the geometry. This means that fragment shader efficiency for lighting is no longer tied to your scene geometry. Simple quads or bounding volumes can be used when rendering the lighting pass, which result in high fragment efficiency.
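A deferred lighting pass over a full-screen quad might be sketched like this (assumed GLSL, with the g-buffer bound as textures; names and the normal packing convention are illustrative):

```glsl
// Deferred lighting pass: each fragment of a full-screen quad (or light
// bounding volume) samples the g-buffer and lights that one screen pixel.
uniform sampler2D uAlbedoTex;
uniform sampler2D uNormalTex;
uniform sampler2D uPositionTex;
uniform vec3  uLightPos;     // light position in eye space
uniform vec3  uLightColor;
uniform float uLightRadius;
varying vec2  vTexCoord;

void main()
{
    vec3 albedo   = texture2D(uAlbedoTex,   vTexCoord).rgb;
    // Unpack the normal, assuming it was stored remapped into [0,1].
    vec3 normal   = texture2D(uNormalTex,   vTexCoord).rgb * 2.0 - 1.0;
    vec3 position = texture2D(uPositionTex, vTexCoord).rgb;

    vec3  toLight = uLightPos - position;
    float dist    = length(toLight);
    float atten   = max(1.0 - dist / uLightRadius, 0.0);
    float diffuse = max(dot(normalize(normal), toLight / dist), 0.0);

    gl_FragColor = vec4(albedo * uLightColor * diffuse * atten, 1.0);
}
```

Note that the scene geometry never appears here at all: however many tiny triangles were rasterized into the g-buffer, this pass costs the same.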

2. Since lighting is done in screen-space with no regard to the scene geometry, light visibility culling can be done per-fragment using bounding geometry and z/stencil test. This makes lots of small light sources (think christmas tree lights, or particle lights) actually viable in a deferred renderer, since you only really have to worry about the size of a light source in screen space. It also means that you can batch together your meshes without any regard to which light sources affect them.

3. For "classic" deferred rendering, you need to store all material and surface parameters in intermediate buffers (which then get sampled for each fragment shaded in the lighting pass). This can make the technique very memory and bandwidth heavy. It also makes it initially incompatible with MSAA, since you're splitting the pipeline into two parts. Getting around that requires more advanced hardware features that weren't really available until D3D10/D3D10.1-class hardware. But even if you do it right and make it work, you still have to keep in mind that you're storing and sampling 3 or 4 render targets + a depth buffer that are stored at sample resolution. That's a lot of memory and bandwidth. Light prepass trades off G-Buffer storage for a second geometry pass, which may or may not improve your performance depending on your situation.

So that's pretty much the major gist of it. There are also organizational/implementation concerns (deferred rendering can initially make it much easier to organize your shaders and rendering pipeline, since lighting + shadowing is decoupled from geometry), but I won't go into detail on those. If you're still not sure whether or not DR would be a win in your case, I would suggest that you only consider it if you really need lots of dynamic light sources in your scenes, particularly small light sources. Otherwise I would not recommend it, since it will probably just give you more of a headache than it's worth and possibly also make your performance worse.

