## Recommended Posts

hick18    102
I've been reading this tutorial. Am I right in thinking that when using this method, you have to re-draw the scene for each shadow map? Maybe I'm getting this wrong, but that sounds expensive. If I'm rendering with deferred shading and deferred shadows, with 4 lights, I get:

1. Render the whole scene to the G-buffer
2. Render the shadow map for light 1 (involves re-rendering the scene in light space)
3. Render the shadow map for light 2 (involves re-rendering the scene in light space)
4. Render the shadow map for light 3 (involves re-rendering the scene in light space)
5. Render the shadow map for light 4 (involves re-rendering the scene in light space)
6. Combine all shadow maps into an occlusion map
7. Calculate lighting

So if I had skinning being calculated, that work would need to be carried out multiple times, once per light.

He mentions using the depth information from stage one to reconstruct world positions for calculating the shadow maps, so that you don't have to re-render the scene. But I don't see how that would work, as the depth map only contains objects relative to the camera; the lights may see all sorts of objects that would get depth-culled in eye space.

Also, am I right in thinking that each effect (forward-rendering style) is going to need two techniques, one for normal rendering and one for z-only rendering? So, for example, a skinning shader would have the same vertex shader but a different pixel shader that just outputs z depth.
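The cost of the steps above can be sketched as a frame loop, in plain Python with a hypothetical draw_scene callback, just counting how many times the scene geometry is submitted:

```python
# Sketch of the frame described above. The scene geometry is submitted once
# for the G-buffer, then once more per shadow-casting light; the combine and
# lighting steps are screen-space passes and don't touch the geometry again.
def render_frame(lights, draw_scene):
    submissions = 0

    draw_scene("gbuffer")                # 1. whole scene into the G-buffer
    submissions += 1

    for light in lights:                 # 2-5. one shadow map per light,
        draw_scene("shadow_map", light)  #      re-rendering in light space
        submissions += 1

    # 6. combine shadow maps into the occlusion map (screen-space)
    # 7. calculate lighting (screen-space)
    return submissions

print(render_frame(range(4), lambda *args: None))  # → 5
```

With 4 shadow-casting lights the geometry goes down the pipe 5 times, which is exactly the overhead being asked about.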

Cypher19    768
Quote:
 So if I had skinning being calculated, the code would need to be carried out multiple times for each light.

Yes.

Quote:
 He mentions using the depth information from stage one to reconstruct world positions for calculating the shadow maps, so that you don't have to re-render the scene

What he's referring to is this:

- You then project it onto the scene from the camera's point of view. At that point you can use the depth G-buffer to reconstruct the world position of the pixel in question: calculate its world position, project the shadow-map coordinates onto it, read the shadow map to get a stored distance value, calculate the distance of the pixel's world position from the light, then do the compare.
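That read-back can be sketched in plain Python (hypothetical helper names, not shader code; in practice this math lives in the lighting pixel shader):

```python
# Sketch of the depth-based shadow test described above: unproject a pixel
# from the depth G-buffer to a world position, move it into light space, and
# compare against the stored shadow-map depth.

def mat_vec(m, v):
    """4x4 matrix (row-major nested lists) times a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def reconstruct_world_pos(ndc_x, ndc_y, depth, inv_view_proj):
    """Unproject: NDC xy plus the stored G-buffer depth -> world position."""
    w = mat_vec(inv_view_proj, [ndc_x, ndc_y, depth, 1.0])
    return [w[0] / w[3], w[1] / w[3], w[2] / w[3]]  # perspective divide

def in_shadow(world_pos, light_view_proj, shadow_map_depth, bias=1e-3):
    """Project into light space, then do the depth compare."""
    p = mat_vec(light_view_proj, world_pos + [1.0])
    light_depth = p[2] / p[3]   # the pixel's depth as seen from the light
    return light_depth > shadow_map_depth + bias
```

Note that this only answers "is this camera-visible pixel in shadow?"; it does not remove the need to render the shadow maps themselves from each light's point of view.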

Quote:
 Also, am I right in thinking that each effect (forward-rendering style) is going to need two techniques, one for normal rendering and one for z-only rendering? So, for example, a skinning shader would have the same vertex shader but a different pixel shader that just outputs z depth.

Effectively, yes. You can do some ninjaing around this, though: for example, write some code that generates the position/z-only version of the shader for you (e.g. grab the shader ASM and analyze everything that contributed to oPos).
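A toy version of that shader-stripping idea, assuming a made-up instruction format of (destination, [sources]) — real D3D shader ASM analysis would be messier, but the backward slice from oPos is the core of it:

```python
# Toy "z-only shader generator": given a shader as a list of
# (dest, [sources]) instructions, keep only the instructions that
# transitively contribute to the position output oPos.

def strip_to_position(instructions, output="oPos"):
    needed, kept = {output}, []
    for dest, srcs in reversed(instructions):   # walk backward from the output
        if dest in needed:
            needed.update(srcs)                 # its inputs are now needed too
            kept.append((dest, srcs))
    return list(reversed(kept))

shader = [
    ("r0",   ["v_pos", "c_world"]),     # transform position
    ("r1",   ["v_normal", "c_world"]),  # transform normal (lighting only)
    ("oPos", ["r0", "c_viewproj"]),
    ("oCol", ["r1", "c_lightdir"]),
]
print(strip_to_position(shader))
# → [('r0', ['v_pos', 'c_world']), ('oPos', ['r0', 'c_viewproj'])]
```

The normal transform and color output fall away, leaving just the work needed to produce depth.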

hick18    102
I really don't like the idea of having to render the same geometry for each light, especially when it comes to the skinned geometry. And I'm wondering: why not have the G-buffer stage also create the shadow maps? When filling the G-buffer, we already have the position of the fragment in world or view space, so all I have to do is convert it to light space with a matrix multiplication and render it to a shadow map. So I'm thinking about the following method, for a renderer that supports 4 shadowing lights:

Perform the G-Buffer filling stage
- Carefully select 4 lights that actually cast shadows and will also add the most shadowing contribution
- Bind 4 shadow maps as render targets
- For each shader, pass in the 4 lights' data and fill the G-buffer with the required data by rendering all objects using that shader, but also render the position of the current fragment into each light's shadow map, in light space
- We also render transparent objects here, but only add their data to the shadow maps and the depth buffer, not the G-buffer, so that they still cast shadows
- Unbind the shadow maps as render targets and bind them as shader resources
- Bind the depth map as a shader resource
Render lighting
- Render all lights as geometry, using the shadow mask and G-buffer to calculate each light's contribution
Do forward rendering
- Render transparent objects

This way I'm only rendering geometry once. All the shadow maps are created using basic shadow mapping, and it is then up to the shadow-mask creation stage to build the shadow mask with PCF or some other sampling method. This method will require that all my shaders have the ability to render shadow maps, but I would also have to do that with the method I suggested in my first post. I will also have to specialize for my sun light, though, as that will be using Cascaded Shadow Maps.

What do you think of this method?

osmanb    2082
This isn't really possible. You're talking about performing multiple (almost entirely different) rasterizations during a single draw call. That might be possible with the latest DX stuff (using the same tricks as render-to-cubemap), but anything that's widely available can't do it. The rasterizer draws a single polygon, and does it from one perspective.

It might sound terrible, but every game that uses shadow maps renders the scene multiple times. It's not nearly as expensive as you think. Your normal scene render probably takes a while because of your pixel shaders, but shadow-map rendering is comparatively fast, because you're just writing depth. Many cards even have a double-fill-rate mode if you're only outputting depth.

hick18    102
I don't know what you mean by "single draw call".

The scene is rendered like in any other deferred renderer, with all sorts of shaders for different vertex processing (skinning, projection, etc.), but it also renders the shadow maps at the same time, using the light data and world-position data that is already available in the pixel shader.

Why render the scene again for each light to build that light's shadow map, when you can build all the shadow maps at the same time in the pixel shader of your G-buffer-filling shaders?

So, something like this:

Skinned-geometry shader:
{
    calculate skinned vertex position
    use position and light 1 data to write shadow map for light 1
    use position and light 2 data to write shadow map for light 2
    use position and light 3 data to write shadow map for light 3
    use position and light 4 data to write shadow map for light 4
}

Static-geometry shader:
{
    calculate vertex position
    use position and light 1 data to write shadow map for light 1
    use position and light 2 data to write shadow map for light 2
    use position and light 3 data to write shadow map for light 3
    use position and light 4 data to write shadow map for light 4
}

...and so on for each vertex-processing variant.

TyrianFin    122
Hmm...

The depth buffer only gives you the points visible to the camera. But with a little bit of magic you can use those camera-visible points as your shadow-map sample points, with good filtering:

http://www.tml.tkk.fi/~timo/publications/aila2004egsr_paper.pdf

/Tyrian

Atrix256    539
You can do a depth-only rendering of your scene (i.e. disable textures, disable color-buffer writes, turn off lighting, etc.), and that is pretty darn fast.

As I understand it, this is how people manage to have multiple shadow-casting lights in real time.

osmanb    2082
I'd suggest you try to implement what you're describing. The issue I mentioned will become incredibly clear, very quickly. Basically, you can't draw independent geometry to MRTs from a single draw call. Shaders just don't work like that.

Imagine the draw call that renders your main character's body. For the game camera, it's going to transform his verts into the camera's view space. The card will rasterize those triangles into little one-pixel-sized chunks, from that EXACT perspective. How are you hoping to fill the shadow maps (which are from a different position and orientation, and possibly not even the same type of projection) in that same draw call? They're probably viewing a different set of triangles on that character mesh (because the light is behind the character, or whatever). No matter what you do in the shaders, you can't change the fact that the rasterizer/interpolators only work from the *single* set of position values that comes out of the vertex shader.

MJP    19754
Unfortunately there's no way of getting around the "re-drawing all of your scene geometry" part of shadow maps...if there were we'd all be doing it. :P

I really suggest that you do some investigation and profiling before coming to any conclusions about performance. Like others mentioned, rendering to the shadow map can be really quick. The pixel shader you're using should be dead simple (and your vertex shader should be as simple as possible), and I think you'll find that modern GPUs are quite happy to render "dumb" pixels in this manner. In fact, on the PC you're most likely to be limited by the CPU overhead from extra Draw calls, but you can minimize that with instancing techniques (you should be able to batch quite nicely when rendering shadow maps, since you don't have to worry about materials). GPUs also have double-speed z-only writes, which you can take advantage of if you use vendor-specific shadow-mapping extensions (not available through XNA, unfortunately).

For skinning there are ways to prevent having to go through all of the vertex processing multiple times for each mesh, which usually involve somehow creating a vertex buffer containing pre-skinned vertices. In DX10 you have stream out which makes this pretty simple. In DX9 you have to either do it on the CPU, or use the ATI-specific R2VB extension.
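A minimal sketch of that pre-skinning idea, assuming a hypothetical vertex layout of (position, [(bone_index, weight), ...]): skin every vertex once, then reuse the resulting buffer for the G-buffer pass and every shadow pass. With DX10 stream out the same blend would run once on the GPU instead.

```python
# Hypothetical CPU pre-skinning pass (linear blend skinning): blend each
# vertex by its bone matrices once, producing a plain position buffer that
# the G-buffer pass and all shadow-map passes can share.

def transform(m, v):
    """Apply a 3x4 bone matrix (rotation + translation) to a position."""
    return [sum(m[r][c] * v[c] for c in range(3)) + m[r][3] for r in range(3)]

def pre_skin(vertices, bones):
    """vertices: list of (position, [(bone_index, weight), ...])."""
    skinned = []
    for pos, influences in vertices:
        out = [0.0, 0.0, 0.0]
        for bone_index, weight in influences:
            p = transform(bones[bone_index], pos)
            out = [o + weight * c for o, c in zip(out, p)]
        skinned.append(out)
    return skinned

# One bone that translates by +1 on x, weighted 100%:
bones = [[[1, 0, 0, 1], [0, 1, 0, 0], [0, 0, 1, 0]]]
print(pre_skin([([0.0, 0.0, 0.0], [(0, 1.0)])], bones))  # → [[1.0, 0.0, 0.0]]
```

The point is that the (relatively expensive) blend runs once per vertex per frame, no matter how many shadow-casting lights re-draw the mesh afterwards.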