Forward vs. Deferred Rendering

Schrompf, in my engine, I write out the ambient term as a pass, before doing the lighting passes.

I don't think deferred shading is only good for non-shadow-casting lights, since I use it with lights that both cast and don't cast shadows. The only extra work I do is computing the depth values for the shadow map and comparing them with the depth values in the shader.

I considered using depth peeling, but it wasn't very efficient, so I do a depth-fill pass and render transparent surfaces using forward rendering.
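For what it's worth, the pass ordering described above looks roughly like the sketch below. This is only an outline under my own assumptions; every function name here is a hypothetical engine call, not a real API.

```cpp
// Rough frame layout: deferred for opaque geometry, forward for transparents
// reusing the opaque depth buffer. All helper names are hypothetical.
void RenderFrame(Scene& scene, const Camera& cam)
{
    // 1. Geometry pass: fill the G-buffer (and depth) with opaque surfaces.
    BindGBuffer();
    DrawOpaque(scene, cam);

    // 2. Ambient term as its own pass, then one additive pass per light.
    BindLightAccumulationTarget();
    DrawAmbientPass();
    for (Light& light : scene.lights)
    {
        if (light.castsShadows)
            RenderShadowMap(light, scene);   // extra depth-only pass for shadow casters
        DrawDeferredLight(light, cam);       // samples the G-buffer (+ shadow map compare)
    }

    // 3. Transparent surfaces take the forward path, depth-tested against the
    //    opaque depth buffer but not written into it.
    DrawTransparentForward(scene, cam);
}
```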
What kind of performance differences do you guys see between deferred rendering and forward rendering with a depth-only prepass? Obviously it'll depend on the scene, but I'm still curious about different cases.

I guess the forward renderer still needs to transform verts and perform depth testing. But in both systems you're only shading visible pixels, and pixel shading is the bottleneck of the app I'm currently working on.

I'd test this myself, but I haven't written my own deferred renderer yet :)

BennyW

One big advantage of using a deferred renderer is the fact that it is much more efficient at handling overdraw. A forward renderer has to transform, apply a material to, and light an object in one pass. If this object is in the view frustum but obscured by another object, it will spend a lot of time lighting pixels that are never seen by the user. A deferred renderer does not suffer from this because it only lights pixels that are visible to the camera, since the lighting is applied after geometry and materials have populated the G-buffer and depth buffer.

There are plenty of reasons why a deferred renderer is preferable to a forward renderer (and vice versa), but as Wolfgang mentioned earlier, which one to use depends on project requirements. As I see it, these are the main reasons to use a deferred renderer:

- Overdraw is much less costly

- Separates geo + material from lighting (reduces coding complexity)

- Reduced number of shader permutations, which often means fewer shader changes / uploads.

- Much more efficient at processing large numbers of lights (only pixels that are visible undergo lighting).

So, if you need heaps of lights in your scene, best use a deferred renderer. If you don't, a forward renderer is likely to be more efficient, as it saves on memory and bandwidth. This is generally why deferred renderers aren't used on older / weaker target platforms: they don't have the memory, the bandwidth or the processing power to realize a large number of lights, so there is little point in going down the deferred route.
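To make the many-lights point concrete, here is a minimal sketch of what a deferred light accumulation loop can look like. The OpenGL state calls are real, but the G-buffer/light helpers and the framebuffer handle are hypothetical, not something from this thread.

```cpp
// Minimal light accumulation sketch, assuming the G-buffer is already filled.
// lightAccumFbo, bindGBufferTextures(), setLightUniforms() and drawLightVolume()
// are hypothetical helpers.
glBindFramebuffer(GL_FRAMEBUFFER, lightAccumFbo);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);   // additive: each light adds its contribution
glDepthMask(GL_FALSE);         // light passes must not write depth

bindGBufferTextures();         // depth/normal/albedo inputs for the light shader
for (const Light& light : visibleLights)
{
    setLightUniforms(light);   // position, colour, radius, shadow map, ...
    drawLightVolume(light);    // sphere/cone or full-screen quad covering the light
}

glDepthMask(GL_TRUE);
glDisable(GL_BLEND);
```

The per-light cost then scales with the pixels the light actually covers on screen, not with the amount of scene geometry it touches.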

Someone please explain to me why you don't shade unnecessary pixels with lights. You avoid the common pitfall of calculating a light for the whole object, even if only a small subset of the object is actually affected by the light. Instead, you render a screen quad, or maybe a simple mesh resembling the light volume, to catch all pixels affected by the light in screen space. But how do you avoid shading pixels far away from the viewer?

For example, have a look at the picture I linked to in the other thread. The three lights in front cover a lot of screen space. In a deferred renderer the light volume would cover almost the entire screen. The light shader would also be executed for all the pixels belonging to the dark hill behind the house. None of the lights reach this hill, yet the screen quad would try to light it as well, because the pixels are inside the lit area in screen space. This also counts as "wasted pixels" in my opinion, and quite a lot of them at once.

The simple way out of this is to use dynamic branching to bail out if the attenuation is zero, but that trick can also be used in a forward renderer. Maybe you can use the Z-buffer and a proper compare function to discard all pixels behind the light volume: if you configure the depth test to GREATER_THAN, all pixels behind the light volume are discarded and the hillside wouldn't be lit. But then you run into problems if your light volume is behind a wall. Either way, with deferred rendering you're going to light up as many useless pixels as in a forward renderer.

In addition, you have to calculate a lot of data in the pixel shader that is low-frequency enough to be calculated in the vertex shader of a forward renderer: the attenuation, shadow map texture coordinates, and more. This is probably not a problem, because you have plenty of time to calculate that data while the texture units fetch all the necessary values from the various high-precision buffers, but I still think it is a waste.

Bye, Thomas

This is easy: you just need to get the maximum depth bounds of your light volume and use them to set a new far plane as you render your sphere volume (or take one of the other approaches to a depth-bounds test). This will cull any pixels beyond the light's radius.

Other ways include a form of Z prepass, stencil culling + scissoring, and using a "greater-than" depth test + inverse face culling based on whether the camera is inside or outside the volume (so that only the pixels the sphere intersects get lit).
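As a rough illustration of the "greater-than depth test + inverse face culling" and depth-bounds ideas, here is one possible OpenGL setup. It is a sketch under my own assumptions; drawLightSphere() and the per-light depth bounds are placeholders, not something taken from the thread.

```cpp
// Reject pixels whose scene depth lies behind the light volume (the "dark
// hill" case). Rendering the *back* faces of the volume with a GEQUAL test
// keeps only pixels where the scene is in front of the volume's far side,
// and it also keeps working when the camera sits inside the volume.
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_GEQUAL);      // pass only where the scene is in front of the back face
glDepthMask(GL_FALSE);       // never write depth from light volumes
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);        // cull front faces, i.e. draw the back faces
drawLightSphere(light);      // hypothetical: unit sphere scaled to the light radius

// Where EXT_depth_bounds_test is available, the same rejection can be expressed
// as a depth-bounds test against the light's min/max depth (computed per light).
glEnable(GL_DEPTH_BOUNDS_TEST_EXT);
glDepthBoundsEXT(light.minDepth, light.maxDepth);
```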

Anyway, I think it's a shame that nobody ever considers the other applications of deferred shading outside of lights when they discuss its benefits or shortcomings. Think about it: with a deferred renderer you have scene information available for just about anything. Screen-space ray marching, SSAO, deferred decals, deferred environment probes, volumetric fog, ambient occlusion volumes, layered effects - to name but a few. It gets especially interesting when you want to do some form of global scene effect, like rain or snow, because you can simply apply it in screen space over the entire scene in a single pass, with no per-object worries or multi-pass issues.

A good example of this is ocean caustics in Crysis 1 - they were done via a secondary geometry pass with alpha blending, which required selecting meshes under or near the surface that would receive the effect. If the object's bounding box so much as touched the surface but the mesh didn't, you were still paying for the caustic effect and the extra geometry pass, wasting performance. In the later games it's done via a single full-screen pass (+ stencil culling of the ocean surface so you avoid rendering it over the rest of the world), which avoids all of those issues.

Even if you do forward rendering, you'll likely need some form of thin G-buffer for any of the effects I mentioned above. Even screen-space shadows require world position (which can be reconstructed from the depth buffer thanks to DX10+, but you get the point). So chances are you'll end up using a hybrid regardless of which direction you want to go (forward? what about those effects? deferred? what about transparencies?).
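As a small illustration of the thin-G-buffer point: position can be recovered from nothing but the depth buffer and the camera matrices. The sketch below does it on the CPU with GLM for readability; in practice the same unprojection runs per pixel inside the full-screen shader.

```cpp
// Recover world-space position from a sampled depth value, assuming GLM.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::vec3 WorldPositionFromDepth(glm::vec2 pixel, float depth01,
                                 const glm::mat4& view, const glm::mat4& proj,
                                 const glm::vec4& viewport)
{
    // depth01 is the [0,1] value read from the depth buffer at 'pixel'.
    const glm::vec3 window(pixel.x, pixel.y, depth01);
    // Passing the view matrix as the "model" matrix yields world-space coordinates.
    return glm::unProject(window, view, proj, viewport);
}
```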

Please don't necro posts from 2006...

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

