Multipass rendering for shadow mapping

Hi, I was thinking about how to render more than one shadow map while still leaving at least 3 texture units free, and I came up with the following idea:

- Render up to 4 shadow maps as usual.
- Render the scene into a frame buffer object with a specific shader. The shader uses the 4 shadow maps to compute, for each pixel, how much that pixel is shadowed. This is the same thing you would usually do, but instead of using the computed values in the lighting equation, I encode them as RGBA (R = first map, G = second map and so on) and write the resulting vector into the frame buffer object.
- Render the scene again, binding only the just-rendered buffer as a texture (so the other 3 units are free for other uses). The shader reads the values from this texture and uses them to compute the actual lighting equation.

This method lets the second pass use coloured lights while still having 4 shadow maps + 3 textures available for other purposes. My question is: is this method already used? Can someone spot drawbacks and weak points, if there are any? I have already implemented this, and it works. Can someone point me to references that cover this method, so I can improve it (e.g. using the first pass for early z culling and so on)? Thank you!
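In GLSL, the first-pass fragment shader could look something like this. Just a sketch: the sampler and varying names are placeholders of mine, and shadow2DProj stands in for whatever lookup/filtering (PCF etc.) you actually use:

    // First pass (sketch): compute one shadow factor per light and pack
    // the four factors into the RGBA channels of the frame buffer object.
    uniform sampler2DShadow shadowMap0, shadowMap1, shadowMap2, shadowMap3;
    varying vec4 shadowCoord0, shadowCoord1, shadowCoord2, shadowCoord3;

    void main()
    {
        // 1.0 = fully lit, 0.0 = fully shadowed; PCF gives values in between
        float s0 = shadow2DProj(shadowMap0, shadowCoord0).r;
        float s1 = shadow2DProj(shadowMap1, shadowCoord1).r;
        float s2 = shadow2DProj(shadowMap2, shadowCoord2).r;
        float s3 = shadow2DProj(shadowMap3, shadowCoord3).r;

        // R = first map, G = second map, B = third, A = fourth
        gl_FragColor = vec4(s0, s1, s2, s3);
    }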
Small question: when you render 4 shadow maps into a single texture, do you simply mean using one channel per pass per light and masking out the other channels while rendering, or do you render 4 shadow maps in a single pass? (I don't see how you could do that...)

Regards,
Kenzo
Quote: Original post by Guoshima
Small question: when you render 4 shadow maps into a single texture, do you simply mean using one channel per pass per light and masking out the other channels while rendering, or do you render 4 shadow maps in a single pass? (I don't see how you could do that...)

Regards,
Kenzo


I don't understand what you mean... The shadow maps are rendered as usual. Then I render the scene from the eye's point of view. With the usual method, I would need one texture unit per shadow map, but then I wouldn't have units left for texture mapping, bump mapping and so on.
Using two passes works this way: in the first pass I use 4 texture units for the 4 shadow maps. The shader computes the shadow value (whether the pixel is shadowed, or how much, if in penumbra, by filtering, PCF and so on). Instead of using this value to calculate the pixel colour, it simply stores it in a frame buffer object (that is, you set a texture as the render target). You can use the RGBA channels to store up to 4 shadow map coefficients. In the second pass I need only one texture unit for shadowing: I pass the texture just rendered (in the first pass) to the shader, and the shader reads these coefficients to calculate the colour of the fragment: for light 1 it uses the R channel, for light 2 the G, and so on with B and A (you can of course use fewer shadows if you don't require four).
On a card with 8 texture units you can use, e.g., 2 units to store 8 shadow maps, thus having 6 units free for other purposes.
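A sketch of the second pass, too (the names are again placeholders, and a plain diffuse term stands in for the full lighting equation; the point is that each light's contribution is scaled by its own channel, so coloured, per-pixel lights still work):

    // Second pass (sketch): a single texture unit recovers all four
    // shadow factors packed in the first pass.
    uniform sampler2D packedShadows;   // RGBA texture rendered in pass 1
    uniform vec2 screenSize;           // viewport size in pixels
    uniform vec3 lightColor0, lightColor1, lightColor2, lightColor3;
    varying vec3 normal;
    varying vec3 lightDir0, lightDir1, lightDir2, lightDir3;

    void main()
    {
        // Fetch the packed factors at this fragment's screen position
        vec4 s = texture2D(packedShadows, gl_FragCoord.xy / screenSize);

        vec3 n = normalize(normal);
        vec3 color = vec3(0.0);
        color += s.r * lightColor0 * max(dot(n, normalize(lightDir0)), 0.0);
        color += s.g * lightColor1 * max(dot(n, normalize(lightDir1)), 0.0);
        color += s.b * lightColor2 * max(dot(n, normalize(lightDir2)), 0.0);
        color += s.a * lightColor3 * max(dot(n, normalize(lightDir3)), 0.0);

        gl_FragColor = vec4(color, 1.0);
    }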
Aha... so you simply render the shadow maps the normal way first, but then perform an extra pass to merge the shadow results of the different lights into a single texture, which you can then use in a final illumination pass.

Regards,
Kenzo
Hi there cignox1,

I've been working on the same thing, but want to accomplish it in texture space.

When compiling the 4 shadow maps into the RGBA texture, if the scene has an extra set of lightmap coordinates, the 4 shadows can be rendered to a PBO in texture space (replacing the vertex position with texcoord[1]*2.0-1.0), and that texture can then be used in much the same way as a static lightmap. The benefit of this is that in the shadow-combining pass you can modify the resolution of the map if you want, so blocks of the scene further away can use lower-resolution maps.

Does this make sense to you? Or am I missing something?
Quote: Original post by Lopez
Hi there cignox1,

I've been working on the same thing, but want to accomplish it in texture space.

When compiling the 4 shadow maps into the RGBA texture, if the scene has an extra set of lightmap coordinates, the 4 shadows can be rendered to a PBO in texture space (replacing the vertex position with texcoord[1]*2.0-1.0), and that texture can then be used in much the same way as a static lightmap. The benefit of this is that in the shadow-combining pass you can modify the resolution of the map if you want, so blocks of the scene further away can use lower-resolution maps.

Does this make sense to you? Or am I missing something?


Sorry, I understood almost nothing of what you said (don't worry, it's my fault: I'm a beginner with OpenGL):
1) "if the scene has an extra set of lightmap coordinates": how were they calculated?
2) "the 4 shadows can be rendered to a PBO in texture space": what's a PBO? I know VBOs and FBOs, but I've never heard of a PBO before... sorry :-( How would you render it in texture space? My math sucks, could you explain it to me? If texture space == tangent space, then which object would you choose?

Sorry for confusing you, mate, I'll explain...

I meant FBO, not PBO. And I've just googled texture space: it's not what I'm doing, forget that.

anyway...

Before doing anything in GL....

I have a mesh in Studio Max. I unwrap it using the automatic unwrap modifier, the way you would when creating a static lightmap, and store these texture coordinates in a UVW channel.

In GL....

For 4 lights....


1. Calculate and store the shadow matrix for each of the 4 lights.
2. Render to a depth texture for each of the 4 lights as usual.

here's the main bit...

Have a color buffer set up (say 1024x1024).

Set up the vertex shader so that the output position is the lightmap texcoord remapped from [0,1] to clip space:
   gl_Position = vec4(gl_MultiTexCoord1.xy * 2.0 - 1.0, 0.0, 1.0);


And do your lighting as usual in the fragment shader, for each light, passing the shadow matrices either as uniform variables or via the GLSL texture matrices (4 on a GeForce 6600).

3. bind the 4 depth textures.
4. render the scene to the frame buffer.

This will give you a 1024x1024 texture containing all your shadows and lighting.

Due to changing the vertex position to the texture coord position, it will create a lightmap for you.

5. Now, if you bind the texture you have just created and use the texture coords created by unwrapping the mesh, all the lighting and shadows will be applied in one pass.


Hope that clarifies it a bit...
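In GLSL, the vertex shader for that texture-space pass could look roughly like this. A sketch only: I'm assuming the lightmap UVs arrive in the second texcoord channel (gl_MultiTexCoord1) and that the fragment shader still wants an eye-space position and normal for the lighting:

    // Texture-space pass (sketch): the mesh is "unwrapped" over the
    // render target by using the lightmap UV as the output position.
    varying vec4 eyePos;   // real vertex position, still needed for lighting
    varying vec3 normal;

    void main()
    {
        eyePos = gl_ModelViewMatrix * gl_Vertex;
        normal = gl_NormalMatrix * gl_Normal;

        // Map the lightmap UVs from [0,1] into clip space [-1,1]
        gl_Position = vec4(gl_MultiTexCoord1.xy * 2.0 - 1.0, 0.0, 1.0);
    }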
Quote:
And do your lighting as usual in the fragment shader, for each light, passing the shadow matrices either as uniform variables or via the GLSL texture matrices (4 on a GeForce 6600).

3. bind the 4 depth textures.
4. render the scene to the frame buffer.

This will give you a 1024x1024 texture containing all your shadows and lighting.

Due to changing the vertex position to the texture coord position, it will create a lightmap for you.

5. Now, if you bind the texture you have just created and use the texture coords created by unwrapping the mesh, all the lighting and shadows will be applied in one pass.

This seems a bit strange to me... I mean, in (4) do you render the scene from the eye's POV? If so, I don't see why you use these texture coordinates: they won't match the scene as rendered.
In addition, when you render the second pass you have this lightmap, but you have lost the information about the shadow of each individual light: you cannot do per-pixel lighting, you cannot use coloured lights, and so on.

