Optimized deferred lighting....algorithm question


Still SO close to finishing optimized deferred lighting.

Using the "standard" deferred algorithm of multiple render targets, etc. With shadows. Pretty cool. So in brief I calculate shadows depth, then render the lightmap to a rendertarget, copy the RT to Texture2d, send the Texture2d to the pixel shader for the next lightmap, combine the existing light with the new light, repeat for each light.

I'm trying to clip my draw quads to the lights for optimization. This means that I can't buffer my light maps to texture2d and send to the light map pixel shader because the light map will only draw within the bounds of the clipped draw quad, thereby clipping out the existing light.

Seems like I need to draw the first clipped light to the rendertarget, copy it to a full-screen Texture2D buffer, DON'T send the buffer to the pixel shader, draw clipped light 2 to the rendertarget without the existing light, then send both the new rendertarget and the existing Texture2D buffer to a pixel shader which combines the two in a full-screen draw, copy the whole shootin' match to a full-screen buffer, and repeat for each light.

By my very rough calculations it seems like both algorithms will take about the same amount of time: maybe a small optimization per light, but nothing that will add up to any significant performance gain.

Any opinions on this? Suggestions for a better algorithm for clipping light draw quads in a deferred renderer for optimization?

THANKS


Why don't you just use additive blending to combine the results of subsequent lighting passes?
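
For example, a dest-plus-source blend state accumulates each light on top of what's already in the target (a sketch; note XNA's built-in BlendState.Additive multiplies the source by its alpha first):

// Custom additive blend state: dest += source, for accumulating lights.
var lightAccumulate = new BlendState
{
    ColorSourceBlend = Blend.One,
    ColorDestinationBlend = Blend.One,
    AlphaSourceBlend = Blend.One,
    AlphaDestinationBlend = Blend.One
};
GraphicsDevice.BlendState = lightAccumulate;
// Every light drawn now is summed into whatever is already in the render target.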

When you say 'a rendertarget', line 2, what size is that rendertarget?

And why do you then copy that rendertarget to a texture? It's a texture already! You could use BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource.

I don't understand what you mean by 'I'm trying to clip my draw quads to the lights'. What draw quads do you mean? And why use clipping?

Each light will be added to your 'final light map' using additive blend; any light already added shouldn't be clipped out, since it's in the light map already. I suspect you're doing too much work, particularly with the clipping. Look at why your system needs to clip to function. Your light geometry should match the size of the light, so no clipping is necessary.

It's the summed render surface size that will cost you (assuming draw calls aren't an issue), so any optimization should focus on reducing that total draw size. So never render a fullscreen texture if you can do it with a light-sized texture. Those oversized textures add up. I'm sure that's why you're using clipping. Maybe you just need to disable clipping when adding your lights to your final light map.
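
For a point light, something like this gives you a draw rectangle no bigger than the light's screen footprint (a rough XNA-style sketch; the method name is illustrative, and it assumes the light is in front of the camera):

// Hypothetical helper: screen-space bounding rectangle for a point light.
Rectangle GetLightScreenBounds(Vector3 lightPos, float radius,
                               Matrix view, Matrix projection, Viewport viewport)
{
    // Project the corners of the light's bounding box into screen space.
    var box = new BoundingBox(lightPos - new Vector3(radius),
                              lightPos + new Vector3(radius));
    Vector2 min = new Vector2(float.MaxValue);
    Vector2 max = new Vector2(float.MinValue);
    foreach (Vector3 corner in box.GetCorners())
    {
        Vector3 s = viewport.Project(corner, projection, view, Matrix.Identity);
        min = Vector2.Min(min, new Vector2(s.X, s.Y));
        max = Vector2.Max(max, new Vector2(s.X, s.Y));
    }
    return new Rectangle((int)min.X, (int)min.Y,
                         (int)(max.X - min.X), (int)(max.Y - min.Y));
}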

This is a learning process for me, so forgive any misconceptions.

I believe the basic algorithm for deferred lighting is: render light pass to RT -> copy RT to existing light buffer (texture) -> send existing light buffer to pixel shader -> render next light, combining existing light, to RT; repeat.

Copying to a texture is necessary because in XNA, when you re-set the render target, you wipe out the contents of that RT (unless you're preserving contents, which I don't want to do). Re-setting the render target is necessary because the shadow algorithm requires setting the render target to a depth-buffer RT, so the RT constantly switches back and forth.

I'm not making any of this up....culled from the likes of Reimer and Catalin.

I"m not using light geometry for the same reason....subsequent light pass needs to include existing light and drawing only the current light geometry will clip the existing light. Likewise I don't believe that additive blending is possible because the RT contents are wiped out when setting the rendertarget. So you copy the RT to texture and send the texture to the pixel shader, again all of this pulled mostly from Reimer samples.

I actually have a pretty quick-running engine, with shadows, and am ready to wrap up deferred lighting and move on, but I want to optimize as much as possible, mostly so I can call this phase Done DONE and won't have to revisit it later. After culling and reducing the resolution of the RTs/existing light buffers, the only thing left seemed to be clipping the light draw quads to the size of the actual light on screen, therefore only drawing into the screen area affected by the light while still combining with the existing light. All without losing shadows, limiting the number of light sources, or using a ridiculous amount of memory.....so reusing the RT/buffer texture rather than using one RT per light and then combining.

I'm probably misusing the term "clipping". I mean resizing my draw textures to clip everything outside of the screen area affected by the light.

As Gavin points out, I'm trying to reduce the total draw size while avoiding full screen draws, yet retaining shadows and combining with existing light, where light geometry and additive blending don't seem to be an option.

Also, as Gavin points out, I suspect that I'm doing too much work.....so my basic question is: "Is there an efficient way to do this, or is it not worth the effort?"

Again, I am definitely not an expert. For example:

> you could use BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource

I have no idea what this means. Will find out.

Thanks for your replies.

I think you'll need to work out how to use additive blending. I can't really help you there; I haven't used XNA for a couple of years, I'm using SlimDX and SharpDX these days. But if there is some restriction on using a blend state, then you'll have to get around that restriction, because blending is definitely the method you need to use. Otherwise, how can you add one light to another?

Edit: BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource just sets a texture to be used as both a rendertarget and a shader resource (both input and output), so a texture can be written to and then used in the next pass as a source texture.
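
In SharpDX (Direct3D 11) terms, for example (the size and format here are just illustrative):

// using SharpDX.Direct3D11; using SharpDX.DXGI;
var desc = new Texture2DDescription
{
    Width = 1280,
    Height = 720,
    MipLevels = 1,
    ArraySize = 1,
    Format = Format.R16G16B16A16_Float,
    SampleDescription = new SampleDescription(1, 0),
    Usage = ResourceUsage.Default,
    BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource
};
var texture = new Texture2D(device, desc);
var rtv = new RenderTargetView(device, texture);    // write to it in one pass...
var srv = new ShaderResourceView(device, texture);  // ...and sample it in the next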

> I believe the basic algorithm for deferred lighting is: render light pass to RT -> copy RT to existing light buffer (texture) -> send existing light buffer to pixel shader -> render next light, combining existing light, to RT; repeat.
>
> Copying to a texture is necessary because in XNA, when you re-set the render target, you wipe out the contents of that RT (unless you're preserving contents, which I don't want to do).

That's not at all the standard algorithm -- there shouldn't be a need for any copying of data around; it sounds much more like an XNA-specific workaround.
Why don't you want to preserve the contents of your render targets when binding them? It sounds like all your copying is just manually implementing/emulating this "preserve contents" behaviour...

n.b. on PC, a "preserve contents" operation will be absolutely free, because XNA is built on D3D9, and D3D9 does preserve the contents of render targets.
This behaviour of XNA only exists to make it behave like the 360, which by default doesn't preserve the contents of render-targets when binding them, and does require a copy operation.
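
If you do end up wanting that behaviour explicitly, it's just a constructor argument in XNA 4.0 (the size and formats here are only examples):

var lightTarget = new RenderTarget2D(
    GraphicsDevice, 1280, 720,
    false,                               // no mip chain
    SurfaceFormat.Color,
    DepthFormat.None,
    0,                                   // no multisampling
    RenderTargetUsage.PreserveContents); // contents survive re-binding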

I'm not experienced in XNA, but the standard algorithm simply renders each light pass to the same RT using additive blending, so that every light gets added over each other. No copying or passing previous results into each new light involved.
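
Roughly like this (a sketch; DrawLightGeometry stands in for however you draw each light's shape):

GraphicsDevice.SetRenderTarget(lightTarget);
GraphicsDevice.Clear(Color.Black);
GraphicsDevice.BlendState = BlendState.Additive;
foreach (Light light in lights)
{
    lightEffect.Parameters["LightPosition"].SetValue(light.Position);
    // ...set the other per-light parameters here...
    DrawLightGeometry(light);  // summed into lightTarget by the blend state
}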

> it sounds much more like an XNA-specific workaround.

This absolutely IS an XNA-specific workaround. I am using XNA!

I may be misconceived here....but I am avoiding preserve contents because I have an eye towards porting to <as yet unnamed> platforms, particularly Windows Phone 8, which I am barely familiar with (or even Windows Phone 9... i.e., future mobile platforms will undoubtedly have the graphics power eventually). I'm DEFINITELY moving to SharpDX & MonoGame asap, and yes, I have read that preserve contents is not Xbox friendly....so I'm trying to keep my options open for unknown platforms. Too much for one person to know, and by the time I'm ready to port everything may have changed. BUT without preserve contents, RTs in XNA are volatile, and the copying mechanism has been used by people who are a lot better at this than I am. This is the first situation where I haven't been able to get around the preserve contents issue with a creative algorithm....as I said, still learning (yes, I know, learning a dead platform)....which is why I started this post. The default behavior of DX9 re: preserve contents is not a concern, because one of the few things that I do know for sure is that I'll be moving away from DX9 soon, and I am married to C#, since for one-person development it is SO much faster than C++.

SO....is preserve contents going to be an issue, say, with MonoGame for WP8? Hope that's not a dumb question; again, I've barely explored the platform.

Appreciate your comments, definitely food for thought.

The point is you shouldn't need to do any copying of data (and thus have no need for "preserve contents"). Lights can be additively blended. The part in your shader where you "combine the existing light with the new light" can be expressed as an alpha-blending function, correct? (Can you post the code?)

So just keep the same render target bound and keep drawing your lights into it.

XNA only clears your render targets when you bind them. If you just bind your target once and you don't re-bind it between lighting passes, it shouldn't be cleared. However you will have to make sure that you use a render target format that supports blending (I believe there's one called HdrBlendable).
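
For example (a sketch; the size is up to you):

var lightTarget = new RenderTarget2D(
    GraphicsDevice, 1280, 720,
    false,                            // no mip chain
    SurfaceFormat.HdrBlendable,       // floating-point format that still supports blending
    DepthFormat.None);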

Unfortunately, the shadowing algorithm requires rebinding the render target.

