
## Recommended Posts

Let's say the transformation pipeline for a regular scene works like this:
       CameraClipSpaceVertex = ProjectionCameraSpace * ViewCameraSpace * ModellingTransform * ObjectSpaceVertex;


The transformation from the light's point of view looks like this:
       LightClipSpaceVertex = ProjectionLightSpace * ViewLightSpace * ModellingTransform * ObjectSpaceVertex;
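Both chains can be sketched with plain 4×4 matrix math. Here is a minimal Python illustration of the two pipelines; the matrices are stand-in translations (identity "projections") chosen purely for demonstration, not real camera/light transforms:

```python
# Minimal 4x4 matrix helpers (row-major storage, column-vector convention:
# result = M * v, matching the equations above).
def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_vec(m, v):
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

# Stand-in matrices (translations only, purely for illustration).
ModellingTransform = translation(1, 0, 0)
ViewCameraSpace    = translation(0, 0, -5)
ViewLightSpace     = translation(0, -3, -5)
# Real perspective projections would go here; identity keeps the demo simple.
ProjectionCameraSpace = translation(0, 0, 0)
ProjectionLightSpace  = translation(0, 0, 0)

v = [0.0, 0.0, 0.0, 1.0]  # ObjectSpaceVertex at the origin

camera_clip = mat_vec(mat_mul(ProjectionCameraSpace,
                      mat_mul(ViewCameraSpace, ModellingTransform)), v)
light_clip  = mat_vec(mat_mul(ProjectionLightSpace,
                      mat_mul(ViewLightSpace, ModellingTransform)), v)
print(camera_clip)  # -> [1.0, 0.0, -5.0, 1.0]
print(light_clip)   # -> [1.0, -3.0, -5.0, 1.0]
```

Note that the two chains differ only in the view and projection matrices; the modelling transform is shared, which is what the discussion below relies on.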


I calculate a depth map using the second transformation sequence. Now, to do shadow mapping, I projectively map the shadow map onto the scene. This is the point where I get sloppy about how it all works. Here's how I think about it: for every pixel drawn, map it into the light's space, sample s/q, t/q from the shadow map, and check whether r/q > shadowMap(s/q, t/q). But how do you transform into the light's space? I found several approaches; the one I will go with is to go from camera view space to light clip space. Looking at the transformations above, the modelling transform is the same in both, so (ViewCameraSpace was already on the stack and needs to be cancelled):

       LightClipSpaceVertex = ProjectionLightSpace * ViewLightSpace * InverseViewCameraSpace * ViewCameraSpace * ModellingTransform * ObjectSpaceVertex


But isn't this transformation the same as the second transformation from above? And if so, then all the values would be equal to the ones in the shadow map. I'm certainly doing something wrong here, but what?
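The cancellation described here can be checked numerically. A minimal Python sketch, using a pure translation for the camera view so that its inverse is trivial (all values are made up for illustration; the projection is omitted since it applies equally to both routes):

```python
# Check numerically that inserting InverseViewCameraSpace * ViewCameraSpace
# collapses the chain back to the plain light-space transform.
def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_vec(m, v):
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

ViewCameraSpace        = translation(0, 0, -5)
InverseViewCameraSpace = translation(0, 0, 5)   # inverse of a pure translation
ViewLightSpace         = translation(0, -3, -5)
ModellingTransform     = translation(1, 0, 0)
v = [0, 0, 0, 1]  # ObjectSpaceVertex at the origin

# Vertex as seen from the camera (what ends up on the matrix stack):
camera_space_v = mat_vec(mat_mul(ViewCameraSpace, ModellingTransform), v)
# Undo the camera view, then apply the light view:
light_space_v = mat_vec(mat_mul(ViewLightSpace, InverseViewCameraSpace),
                        camera_space_v)
# Direct route, skipping camera space entirely:
direct_light_v = mat_vec(mat_mul(ViewLightSpace, ModellingTransform), v)

print(light_space_v == direct_light_v)  # -> True
```

The two routes agree exactly, which is the equivalence being asked about.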

##### Share on other sites
When you want to project the shadow map onto the geometry, render the object with your camera as usual. Then, to project the shadow map onto the object, transform the world-space position of the pixel by the light's ViewProjection matrix, use that projection to sample the shadow map, and compare the z values.

##### Share on other sites
Quote:
 Original post by glaeken
When you want to project the shadow map onto the geometry you want to render the object with your camera, then, to project the shadowmap onto the object, transform the world space position of the pixel using the light's ViewProjection matrix, and use this projection to sample the shadowmap and then compare the z values.

Hmm... that isn't quite clear to me. When you say "transform the world space position of the pixel", I presume you mean doing it in a shader of some sort, but that would be quite expensive to do in the pixel shader. I think that in the pixel shader you just sample the shadow map and perform the distance test.

##### Share on other sites
You can calculate the projected texture coordinates in the vertex shader or the pixel shader. Calculating them in the pixel shader will be slower than in the vertex shader but should be a little more accurate.

The key here, though, is that you transform the vertex with your camera matrices and calculate the projected texture coords with the light matrices.

       //assuming you're using column matrices; otherwise swap the order
       ProjTexCoords = LightViewProj * worldSpacePos;
       ProjTexCoords.xy /= ProjTexCoords.w;
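The divide-by-w step can be sketched in Python. One detail the snippet above leaves implicit is the remap from NDC [-1, 1] into texture space [0, 1]; folding it in here is an assumption (it is often done with a separate bias matrix instead), and the function name is purely illustrative:

```python
def project_to_texcoords(clip):
    """Perspective-divide a light-clip-space position and remap the
    result from NDC [-1, 1] into texture space [0, 1]."""
    x, y, z, w = clip
    ndc_x, ndc_y, ndc_z = x / w, y / w, z / w
    # NDC -> [0, 1] remap (the "bias"); whether y needs flipping is
    # API-dependent and ignored here.
    s = ndc_x * 0.5 + 0.5
    t = ndc_y * 0.5 + 0.5
    return s, t, ndc_z  # ndc_z (r/q) is compared against the stored depth

print(project_to_texcoords((1.0, -1.0, 1.5, 2.0)))  # -> (0.75, 0.25, 0.75)
```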

##### Share on other sites
Quote:
 Original post by glaeken
You can calculate the projected texture coordinates in the vertex shader or the pixel shader. Calculating them in the pixel shader will be slower than in the vertex shader but should be a little more accurate.

The key here is though, you want to transform the vertex with your camera matrices and calculate the projected texture coords with the light matrices.

//assuming you're using column matrices, otherwise swap
ProjTexCoords = LightViewProj * worldSpacePos
ProjTexCoords.xy /= ProjTexCoords.w;

You say to transform the vertex with the camera matrices and then multiply with the light's matrices, but here:

ProjTexCoords = LightViewProj * worldSpacePos

you take into account only the world-space transformation and not the camera's viewing transformation. Is this a typo?

I was confused and asked the initial question because of this article. Looking at the picture and at the formula (when rendering from the camera, these all happen to the texture coordinates):

       LightClipSpaceTexCoord = LightProj * LightView * CameraView^-1 * CameraViewTexCoord

I say CameraViewTexCoord because this vertex is calculated relative to the position of the camera when requesting automatic texture generation.

But isn't this equivalent to:
LightClipSpaceVertex = LightProj * LightView * WorldSpaceVertex

Please bear with me, but I think I'm missing something simple here.

##### Share on other sites
Not a typo. You want to output from your vertex shader two things: the homogeneous position of the vertex and the projected texture coordinates (or the world space position of the vertex if you want to calculate the projected texture coordinates in the pixel shader).

       OutputVS VertexShader(float4 position : POSITION0)
       {
           OutputVS OUT = (OutputVS)0;
           // get the world space pos
           float4 posW = mul(ModelMatrix, position);
           // transform to homogeneous clip space
           OUT.posH = mul(CameraViewProj, posW);
           // calculate projected texcoords
           OUT.projTexC = mul(LightViewProj, posW);
           return OUT;
       }

Then use the projected texture coordinates to sample the shadow map and compare z values.
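The final comparison can be sketched in Python with the shadow map as a plain 2D array; nearest-neighbour lookup, and the function name and depth-bias value are illustrative assumptions (a small bias is a common guard against self-shadowing, not something stated above):

```python
def in_shadow(shadow_map, s, t, r, depth_bias=0.005):
    """Return True if the fragment at projected texture coords (s, t),
    with light-space depth r, is farther from the light than the stored
    depth. shadow_map is a 2D list indexed [row][col]."""
    h = len(shadow_map)
    w = len(shadow_map[0])
    col = min(int(s * w), w - 1)  # nearest-neighbour sampling
    row = min(int(t * h), h - 1)
    stored_depth = shadow_map[row][col]
    # The bias guards against self-shadowing ("shadow acne").
    return r > stored_depth + depth_bias

shadow_map = [[0.3, 0.3],
              [0.3, 0.9]]
print(in_shadow(shadow_map, 0.25, 0.25, 0.8))  # -> True  (0.8 > 0.3)
print(in_shadow(shadow_map, 0.75, 0.75, 0.8))  # -> False (0.8 < 0.9)
```

On real hardware this comparison is typically done by the texture unit (e.g. comparison samplers), but the logic is the same.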

##### Share on other sites
Quote:
 Original post by glaeken
Not a typo. You want to output from your vertex shader two things: the homogeneous position of the vertex and the projected texture coordinates (or the world space position of the vertex if you want to calculate the projected texture coordinates in the pixel shader).
*** Source Snippet Removed ***
Then use the projected texture coordinates to sample the shadow map and compare z values.

I finally understand. Thank you very much!!

[Edited by - Deliverance on May 2, 2009 12:55:45 PM]
