

Hi, I am currently trying to implement deferred shading in DX9.0, and I have run into some problems and questions with lighting/shadow mapping. I understand that deferred shading generally works as:
Render to g-buffer
For each light:
Use g-buffer to calculate result and merge with frame buffer.

However, the article 6800_Leagues_Deferred_Shading.pdf says to keep diffuse and specular separate and then merge them into the frame buffer as a final pass. Does that mean I actually have to do:
Render to g-buffer
For each light:
Calculate and write:
Color0 = Diffuse
Color1 = Specular
Do a final pass after all light calculations to merge the values
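If it helps, that final merge step boils down to something like the sketch below (plain C++ paraphrase of the per-pixel shader logic; `Vec3` and `combinePass` are made-up names, not from the paper):

```cpp
#include <cassert>

// Hypothetical per-pixel combine pass: the two accumulation targets
// (Color0 = summed diffuse light, Color1 = summed specular light)
// are merged with the surface albedo in one final full-screen pass.
struct Vec3 { float r, g, b; };

Vec3 combinePass(Vec3 albedo, Vec3 diffuseAccum, Vec3 specularAccum) {
    // final = albedo * accumulated diffuse + accumulated specular
    return { albedo.r * diffuseAccum.r + specularAccum.r,
             albedo.g * diffuseAccum.g + specularAccum.g,
             albedo.b * diffuseAccum.b + specularAccum.b };
}
```

The point of the separation is that albedo is only read once, in this combine pass, rather than once per light.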

Another problem I have is with shadow maps. How do I integrate them? I was thinking that I need to generate them before creating the g-buffer. But how do I use them? Say I have 3 shadow maps, how do I write the information to the g-buffer telling it whether the pixel is in shadow? Thanks

Quote:
Original post by littlekid:
Another problem I have is with shadow maps. How do I integrate them? I was thinking that I need to generate them before creating the g-buffer. But how do I use them? Say I have 3 shadow maps, how do I write the information to the g-buffer telling it whether the pixel is in shadow?
I use Horde3D, which supports deferred lighting.
Off the top of my head, their algorithm is:
Render scene to g-buffer (from camera's perspective)
For each light:
    Render scene to shadow-buffer (from light's perspective)
    Use g-buffer and shadow-buffer to calculate result and add to frame buffer.

Quote:
 ...Use g-buffer and shadow-buffer to calculate result and add to frame buffer

But how do I know which shadow map buffer to use? If I have 3 lights, that means 3 shadow map buffers. How do you identify which pixel uses which shadow map?

Sorry, I should have explained that better.

There is only one shadow-buffer, not one-per-light.

After the G-Buffer has been filled in, the frame-buffer is cleared to black and each light is processed in turn.

When processing a light, first its shadow map is generated; then this temporary shadow map and the G-Buffer are used to produce the lighting values for that light source, and these values are additive-blended into the frame buffer.

Pixels that are in shadow will add (0,0,0) to the frame-buffer (i.e. have no effect).

So with two lights it looks like:
Render scene to g-buffer (from camera's perspective)
Clear frame-buffer to black
    Render scene to temp shadow-buffer (from light 1's perspective)
    Use g-buffer, temp shadow-buffer and light 1's properties to calculate result and add to frame buffer.
    Render scene to temp shadow-buffer (from light 2's perspective)
    Use g-buffer, temp shadow-buffer and light 2's properties to calculate result and add to frame buffer.
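The accumulation step can be sketched like this for a single pixel (plain C++, not D3D9 API; `Light`, `shadePixel` and the 0/1 shadow factor are illustrative names):

```cpp
#include <cassert>

// Sketch of the per-light additive accumulation: one reusable temp
// shadow-buffer yields a shadow factor per light; each light's
// contribution is additive-blended into the frame buffer.
struct Light { float intensity; };

float shadePixel(const Light* lights, const float* shadowFactor, int n) {
    float frameBuffer = 0.0f;  // frame buffer cleared to black
    for (int i = 0; i < n; ++i) {
        // shadowFactor[i] is 0 if this pixel is in shadow w.r.t. light i,
        // 1 if fully lit. A shadowed pixel therefore adds 0 (no effect).
        frameBuffer += shadowFactor[i] * lights[i].intensity;
    }
    return frameBuffer;
}
```

So the shadow test never needs to be stored in the G-Buffer at all; it is resolved per light, while that light's shadow map is still current.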

Hodgman very nicely described the standard way of doing it, which I think is the best one, too.

The "6800 Leagues" recipe that you mentioned should not be confused with that. It is a different approach, which involves a small trade-off (monochromatic specular highlights) but supposedly offers better memory bandwidth.
However, I honestly don't see how it is any better; in my opinion it is only worse. You certainly save some texture reads (albedo colour) from the G-Buffer while calculating lights, but you pay for that with an even larger number of additional pixel writes (which is doubly bad).
I asked a few weeks back whether someone could explain how this approach could be superior (since I really don't see it), but without success.

Thanks I think I am getting it.

However, won't it be expensive considering I have to redraw the geometry once per light? Even after my light-tree culling, I might be left with around 1-4 lights.

Quote:
Original post by littlekid:
However, won't it be expensive considering I have to redraw the geometry once per light? Even after my light-tree culling, I might be left with around 1-4 lights.

This is no different from forward rendering: you need a shadow map for each light that you want to cast shadows, and there's no getting around that. In fact, with deferred rendering you save memory, since you only need one shadow map at a time.

Quote:
Original post by littlekid:
However, won't it be expensive considering I have to redraw the geometry once per light? Even after my light-tree culling, I might be left with around 1-4 lights.

When drawing the geometry for each light, all that needs to be calculated is the depth values (for the shadow map). This is super fast compared to regular rendering with complicated shaders etc...

The up-side is that once you've got your G-buffer (and shadow buffer), (almost) no geometry needs to be drawn to do the lighting calculations.

To take an unfair example, let's say you have 100 point lights that don't cast shadows.

With forward shading, you can only handle a certain amount of lights with each pass. So you might have to draw the entire scene anywhere from 10 to 100 times.

With deferred shading, you only draw the scene once (to the G-buffer), and then for each light you just need to render a quad that encompasses the area of the screen affected by that light.

Obviously, once you take shadows into account it's a bit more complex:
Forward = 100 Shadow-buffer passes + ~10 to ~100 geometry passes.
Deferred = One geometry pass + 100 Shadow-buffer passes + 100 light passes.

The best lighting approach depends on many things, such as how many lights you can process in a forward-shaded pass, how many lights need shadows, how big the area of effect of each light is, how many lights you want visible at once, the required resolution of the shadow buffers, whether you have lots of transparent geometry, if support for old hardware is required, etc...

[Edited by - Hodgman on February 13, 2008 10:16:26 PM]

Quote:
Original post by Hodgman:
With deferred shading, you only draw the scene once (to G-buffer), and then for each light you just need to render a tiny sprite that encompasses the area of the screen affected by that light.

I don't really get what you mean by this sentence. Don't we draw the whole full screen quad for every light we process?

Quote:
Original post by littlekid:
Don't we draw the whole full screen quad for every light we process?

Sorry, I'm getting ahead of myself again!

Yes, in the basic implementation you draw a full-screen quad to calculate the lighting.

This can then be optimised, though: if you find the bounding sphere / bounding box of the light source, and then find the top-left/bottom-right extents of this bounding area in screen space, then instead of drawing a full-screen quad you can draw a smaller quad that covers only the area the light will affect.

This optimisation is very important if you have lots of small light sources.
If you've only got a few large light sources, which are going to cover the whole screen anyway, then there isn't much point in doing this.
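A rough sketch of that screen-space bound (illustrative only, not the exact projection maths; `Rect` and `lightScreenBounds` are made-up names, and it assumes the sphere is fully in front of the camera):

```cpp
#include <algorithm>
#include <cassert>

// Conservative NDC rectangle for a light's view-space bounding sphere:
// offset the centre by the radius before the perspective divide, using
// the sphere's nearest depth, then clamp to the [-1, 1] screen range.
struct Rect { float x0, y0, x1, y1; };

Rect lightScreenBounds(float cx, float cy, float cz, float radius) {
    float zNear = cz - radius;  // assumes cz - radius > 0 (fully in front)
    Rect r = { (cx - radius) / zNear, (cy - radius) / zNear,
               (cx + radius) / zNear, (cy + radius) / zNear };
    r.x0 = std::max(r.x0, -1.0f); r.y0 = std::max(r.y0, -1.0f);
    r.x1 = std::min(r.x1,  1.0f); r.y1 = std::min(r.y1,  1.0f);
    return r;
}
```

A distant light then only rasterises the few pixels inside its rectangle instead of the whole screen.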

Another form of this same optimisation is to use 3D geometry instead of a screen-space quad.
E.g. for a spot-light, you would draw an actual 3D cone that encompasses the area affected by the light source. This way (assuming the frame-buffer's z-buffer is correct) you can cut down the number of pixel operations even further, as some of the pixels will be rejected by the z-test before the pixel shader is executed.
