Questions about Deferred Shading

Started by
6 comments, last by Talria 18 years, 5 months ago
Hello! I'm thinking about implementing deferred shading in my 3D renderer to speed things up when rendering a scene with multiple light sources. With deferred shading, the scene only needs to be rendered once per frame from the camera's PoV, storing geometric attributes like normal and position for each pixel in a texture render target. Then we just render one fullscreen quad per light source, sampling the textures rendered in the first pass. Currently, I render the scene as many times per frame as there are light sources, and it gets really slow as scene complexity increases. I'm using OpenGL for rendering, BTW.

Before I go about implementing this feature, however, I'd like to know whether it incurs any serious restrictions. I've heard that transparency and alpha blending are really difficult. How about reflective/refractive surfaces? Does it work well with the PCF-filtered soft shadow mapping method?
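To make the idea concrete, here's a minimal sketch of the lighting math I have in mind (plain Python standing in for the fragment shader, with made-up Lambert-only lighting - not my actual GL code). Pass 1 stores position and normal per pixel; pass 2 adds each light's contribution, which is exactly what (GL_ONE, GL_ONE) additive blending does:

```python
# Sketch of deferred lighting math for a single G-buffer pixel
# (hypothetical Lambert-only model, not real GL/GLSL code).

def normalize(v):
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lambert(position, normal, light_pos, light_color):
    """Diffuse contribution of one light for one G-buffer pixel."""
    to_light = normalize(tuple(l - p for l, p in zip(light_pos, position)))
    n_dot_l = max(0.0, dot(normal, to_light))
    return tuple(c * n_dot_l for c in light_color)

def shade_pixel(position, normal, lights):
    """One fullscreen-quad pass per light, accumulated additively,
    i.e. the software equivalent of (GL_ONE, GL_ONE) blending."""
    result = (0.0, 0.0, 0.0)
    for light_pos, light_color in lights:
        contrib = lambert(position, normal, light_pos, light_color)
        result = tuple(r + c for r, c in zip(result, contrib))
    return result
```

The point being that shade_pixel touches the stored attributes once per light, instead of re-rasterizing the whole scene once per light.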
Hi,

I can't answer your questions, but I've got one to ask you: you're rendering once for each light? Meaning that if you have 2 lights, you render your scene twice?
If I misunderstood what you meant, then sorry. But if I understood correctly, you should know that you can render a scene with up to 8 simultaneous lights in only 1 pass on almost any card.
Yes. If I have 3 lights, for example, the scene is rendered 3 times from the camera's PoV and the contributions of each light source are accumulated in the framebuffer using additive (GL_ONE, GL_ONE) blending.

Unfortunately, rendering multiple lights per pass will probably cause the shader to exceed the maximum instruction count, as certain instructions need to be duplicated for each light source. The 10-sample PCF filtering feature already generates quite a lot of fragment shader instructions.
Quote:Original post by paic
Hi,

I can't answer your questions, but I've got one to ask you: you're rendering once for each light? Meaning that if you have 2 lights, you render your scene twice?
If I misunderstood what you meant, then sorry. But if I understood correctly, you should know that you can render a scene with up to 8 simultaneous lights in only 1 pass on almost any card.


Paic, he's including shadows, which basically forces an instant 1:1 light:scene-pass ratio. Even Doom 3 has 1:1 (in the best of times! GF4 is 2:1 and GF3 is 3:1). BTW, even for shadow maps, while it is in theory possible to do multiple lights in a pass, it's generally a bad idea, because storing multiple omni shadow maps will chew through your memory like nothing - something that already becomes an issue once you have deferred rendering.

Anyway, to do alpha blending I believe you have to render it on top of the accumulated lights in order to get proper results. I'm not sure, though, because I haven't done alpha in my deferred rendering demo yet.

Shadow maps work EXCELLENTLY with a deferred renderer, since (I'm guessing) you're doing the SMs in a pixel shader already.

Reflective surfaces I haven't thought much on, but I imagine you could either apply those in the G-Buffer pass while doing the ambient light at the same time (if you can spare the MRT) or just composite the reflective objects onto the final image like you would alpha blending.

Refractive surfaces should be a fairly trivial extension beyond adding alpha geometry, since it behaves in a similar way, just with an extra shader to do a screenspace texture lookup.
Well, you need a traditional forward renderer for translucent objects. Deferred rendering just doesn't lend itself to multi-layering, and the alternatives (like depth peeling and shading layers independently) just aren't fast enough to be useful.

If you also intend to occasionally alpha-fade otherwise opaque objects, then be prepared for the pain and suffering of tuning your traditional rendering to match the deferred output perfectly; otherwise you'll get popping when the object switches from 100% opaque to 99% alpha.

Antialiasing is also a problem with the deferred model.
Thanks everyone for your input! Alpha blending and deferred shading obviously don't go hand in hand, but I think I can manage without it. I think effects such as glass can be implemented using environment maps and refraction, so rendering polygons in the correct order shouldn't be a concern.

Am I right saying that alpha testing, on the other hand, does work? I mean textures that have either 100% opaque or fully transparent pixels - no actual blending taking place.
Quote:Original post by ZiM
Thanks everyone for your input! Alpha blending and deferred shading obviously don't work hand in hand, but I think I can manage without it. Am I right saying that alpha testing, on the other hand, should work? I mean textures that have either 100% opaque or fully transparent pixels - no blending between fragments taking place.


Alpha testing works with deferred shading without a problem.
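It works because alpha testing is a binary keep-or-discard decision made during the G-buffer pass (a discard in the fragment shader), so every pixel that survives still holds exactly one surface. A toy illustration (the threshold is a hypothetical cutoff, in the spirit of glAlphaFunc):

```python
# Toy illustration: alpha *testing* is a per-fragment keep/discard
# decision, so each G-buffer pixel still stores exactly one surface.
# Alpha *blending* would need several surfaces per pixel, which a
# single-layer G-buffer cannot represent.

ALPHA_THRESHOLD = 0.5  # hypothetical cutoff, like glAlphaFunc(GL_GEQUAL, 0.5)

def alpha_test(fragments):
    """Keep only sufficiently opaque fragments; the rest are discarded
    before they are ever written to the G-buffer."""
    return [f for f in fragments if f["alpha"] >= ALPHA_THRESHOLD]
```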
Deferred Shading Pros:
- geometry is processed only 1-2 times regardless of lighting complexity (once to fill the G-buffer with material properties, plus an optional z-only pre-pass)
- only pixels inside light volumes are processed (use light geometry instead of fullscreen quad)
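To illustrate the light-volume point: with the classic constant/linear/quadratic attenuation model you can solve for the distance at which a point light's contribution falls below a visible threshold, and render a sphere of that radius as the light geometry instead of a fullscreen quad. A sketch (the coefficients and cutoff here are illustrative assumptions, not numbers from this thread):

```python
import math

# Radius of a point light's volume, assuming the classic
# constant/linear/quadratic attenuation model:
#   attenuated = intensity / (kc + kl*d + kq*d^2)
# Beyond the returned radius the contribution is below `cutoff`,
# so a sphere of this radius bounds the light's influence.

def light_volume_radius(intensity, kc, kl, kq, cutoff=1.0 / 256.0):
    """Solve intensity / (kc + kl*d + kq*d^2) = cutoff for d,
    i.e. the positive root of kq*d^2 + kl*d + (kc - intensity/cutoff) = 0."""
    c = kc - intensity / cutoff
    disc = kl * kl - 4.0 * kq * c
    return (-kl + math.sqrt(disc)) / (2.0 * kq)
```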

Deferred Shading Cons:
- huge bandwidth requirements (uncompressed G-buffer data is fetched for each light that illuminates the pixel)
- you have to use one homogeneous material model everywhere, e.g. if you plan to add specular masking you must render it for every material
- as others already mentioned, alpha blending and antialiasing don't work

You can figure out workarounds for some of the cons, e.g. use dynamic branching to execute a different shading model per pixel, pack the G-buffer material properties into fewer bits, use indexed material textures in the G-buffer, or use stippling instead of alpha blending, but these remain fundamental problems of deferred shading.
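As an example of the bit-packing workaround: a unit normal's components lie in [-1, 1], so each one can be remapped to an 8-bit byte and stored in an RGB8 G-buffer channel instead of a full float, trading a little precision for a lot of bandwidth. A minimal sketch (this particular encoding is my assumption, not something prescribed in the thread):

```python
# Pack a unit normal into three 8-bit channels (RGB8 G-buffer style)
# and unpack it again. Quantization error per component is at most
# about half a step, i.e. roughly 1/255.

def pack_normal(n):
    """Remap each component of a unit normal from [-1, 1] to [0, 255]."""
    return tuple(int(round((c * 0.5 + 0.5) * 255.0)) for c in n)

def unpack_normal(packed):
    """Inverse mapping from [0, 255] bytes back to [-1, 1]."""
    return tuple(b / 255.0 * 2.0 - 1.0 for b in packed)
```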

This topic is closed to new replies.
