
## Recommended Posts

The concept of shadow mapping seems fairly straightforward to me so far. You render depth information to a texture from the point of view of the light source, then do a depth test when rendering from the view of the camera. The only thing I can't seem to understand is how you compare the depth between the light's view and the camera's view so you know which pixels are in shadow. I feel like I'm missing something simple here. Could you guys help me out? Thanks.

##### Share on other sites
Why can't all of the other tutorials or guides I've seen just be straightforward like that? Yeah, there are better techniques available, but I want to get shadow mapping working BEFORE I get all fancy with it. Thank you!

##### Share on other sites
Ok, so I have been doing some refactoring to include my own light class and implement it in my render loop, etc. One question: what exactly do you mean by "divide the result by W"? After I transform the pixel position with the light's view and projection matrix, I'm not sure exactly what to do next.

Also, here are my shader functions to compute my shadow map, if you want to take a quick look and make sure I am going in the right direction. I have run these functions in NVIDIA's FX Composer, and it looks good so far.

[source]
VS_OUT_SHADOWMAP vsShadowMap(VS_IN_SHADOWMAP vIn)
{
    VS_OUT_SHADOWMAP vOut = (VS_OUT_SHADOWMAP)0;
    vOut.vPos = mul(vIn.vPos, lightWorldViewProj);
    vOut.vDepth.xy = vOut.vPos.zw;
    return vOut;
}

PS_OUT_SHADOWMAP psShadowMap(VS_OUT_SHADOWMAP vIn)
{
    PS_OUT_SHADOWMAP pOut = (PS_OUT_SHADOWMAP)0;
    // Depth is z / w
    pOut.color = (vIn.vDepth.x / vIn.vDepth.y);
    return pOut;
}
[/source]

##### Share on other sites
I mean something like this:
[source]
float4 shadowPosition = mul(float4(psIn.PositionWS, 1.0f), lightViewProj);
shadowPosition.xyz /= shadowPosition.w;
float2 shadowUV = shadowPosition.xy * float2(0.5f, -0.5f) + 0.5f;
float shadowMapValue = tex2D(ShadowMap, shadowUV).x;
// Compare the stored depth against this pixel's post-divide depth (z)
float shadowOcclusion = shadowMapValue >= shadowPosition.z;
[/source]
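For anyone following along, the same lookup can be sketched in plain C to make the math explicit. This is a hypothetical translation, not code from the thread; `mul4` and `shadow_test` are made-up names, and it assumes the row-vector `mul(v, M)` convention used in the HLSL above (note the depth comparison uses the post-divide z):

```c
#include <assert.h>

/* Row-vector * row-major 4x4 matrix, matching HLSL's mul(v, M). */
typedef struct { float x, y, z, w; } float4;

static float4 mul4(float4 v, float m[4][4]) {
    float4 r;
    r.x = v.x*m[0][0] + v.y*m[1][0] + v.z*m[2][0] + v.w*m[3][0];
    r.y = v.x*m[0][1] + v.y*m[1][1] + v.z*m[2][1] + v.w*m[3][1];
    r.z = v.x*m[0][2] + v.y*m[1][2] + v.z*m[2][2] + v.w*m[3][2];
    r.w = v.x*m[0][3] + v.y*m[1][3] + v.z*m[2][3] + v.w*m[3][3];
    return r;
}

/* Returns 1 = lit, 0 = shadowed. shadowMapValue stands in for the
 * depth that tex2D(ShadowMap, uv) would fetch. */
static int shadow_test(float4 posWS, float lightViewProj[4][4],
                       float shadowMapValue) {
    float4 p = mul4(posWS, lightViewProj);
    p.x /= p.w; p.y /= p.w; p.z /= p.w;   /* perspective divide -> NDC */
    float u = p.x *  0.5f + 0.5f;         /* NDC [-1,1] -> UV [0,1]    */
    float v = p.y * -0.5f + 0.5f;         /* V axis is flipped         */
    (void)u; (void)v;                     /* uv would address the map  */
    return shadowMapValue >= p.z;         /* stored depth vs our depth */
}
```

With a toy projection (aspect 1, tan(a/2) = 1, near = 1, far = 101, so A = 1.01 and B = -1.01), a point at view-space z = 2 lands at post-divide depth 0.505, and the test flips depending on whether the stored depth is in front of or behind it.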

##### Share on other sites
So first you are transforming the pixel position, and then you divide the vector position by the w component. What is the w component supposed to represent, and what does dividing by w accomplish? Also, when you get the shadowUV, why are you multiplying and adding 0.5? I'm sorry if I am being a bother, but I would like to understand 100% what it is I am looking at, not just copy-paste and call it a day =p.

##### Share on other sites

> So first you are transforming the pixel position, and then you divide the vector position by the w component. What is the w component supposed to represent, and what does dividing by w accomplish? Also, when you get the shadowUV, why are you multiplying and adding 0.5? I'm sorry if I am being a bother, but I would like to understand 100% what it is I am looking at, not just copy-paste and call it a day =p.

This is due to the perspective projection matrix (take a look, for example, at this nice explanation series).

Basically, perspective projection leads to the following formula (in what are called normalized device coordinates, or NDC). The explanation below is from "Introduction to 3D Game Programming with DirectX 10" by Frank D. Luna:
[source]
x' = x / (r*z*tan(a/2))
y' = y / (z*tan(a/2))
Formula (1)
[/source]
where r is the aspect ratio, "a" is the vertical field-of-view angle, and -1 <= x' <= 1, -1 <= y' <= 1, nearPlane <= z <= farPlane.
But in order to express the "divide by z" with a matrix transformation, we need to do it in two steps. The first step is to build a perspective matrix that leaves out the "divide by z" while keeping the z information (in the w component):
[source]
             [ 1/(r*tan(a/2))   0              0   0 ]
[x, y, z, 1] [ 0                1/(tan(a/2))   0   0 ]
             [ 0                0              A   1 ]
             [ 0                0              B   0 ]
[/source]
where A and B are constants related to the near/far planes. If you work through the formula, you get:
[source]
A = farPlane / (farPlane - nearPlane)
B = - A * nearPlane = - nearPlane * farPlane / (farPlane - nearPlane)
[/source]
So at the output of the vertex shader, we get something like this:
[source]
vertexOutput = [ x / (r*tan(a/2)), y / (tan(a/2)), A * z + B, z]
[/source]
Then, in the pixel shader, dividing by w recovers the "divide by z" from formula (1). Note that we cannot do this divide in the vertex shader, because the values must be interpolated by the rasterizer stage (and interpolating already-divided values would lead to incorrect results), so we need to delay the "divide by z" operation to the pixel shader.
[source]
finalVertex = vertexOutput / vertexOutput.w = [x', y', z', 1] = [ x / (r * z * tan(a/2)), y / (z * tan(a/2)), A + B / z, 1]

z' = farPlane / (farPlane - nearPlane) - nearPlane * farPlane / (z * (farPlane - nearPlane))
when z = nearPlane, z' = 0
when z = farPlane, z' = 1
[/source]
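The depth mapping above is easy to check numerically. Here is a small C sketch (not from the thread; the helper names are made up) that computes A and B and verifies that z' = A + B / z is 0 at the near plane, 1 at the far plane, and strongly non-linear in between:

```c
#include <assert.h>
#include <math.h>

/* Depth constants from the D3D-style perspective matrix above. */
static float depth_A(float nearPlane, float farPlane) {
    return farPlane / (farPlane - nearPlane);
}
static float depth_B(float nearPlane, float farPlane) {
    return -nearPlane * farPlane / (farPlane - nearPlane);
}

/* z' = A + B / z : the post-divide depth that ends up in the buffer. */
static float ndc_depth(float z, float nearPlane, float farPlane) {
    return depth_A(nearPlane, farPlane) + depth_B(nearPlane, farPlane) / z;
}
```

For near = 1 and far = 100, a point only at z = 2 already maps to roughly 0.505, which is why most of the depth-buffer precision is spent near the camera.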
Before the divide, the vertex shader output is said to be in homogeneous clip space (or projection space); after the divide, the final vertex is in normalized device coordinates, where -1 <= x' <= 1, -1 <= y' <= 1, 0 <= z' <= 1 (for DirectX; OpenGL requires -1 <= z' <= 1).

##### Share on other sites
^^^ what xoofx said.

As for the multiplying and adding 0.5...like xoofx explained, after perspective divide your XY coordinates are in NDC space. This means they have the range [-1, 1], where X = -1 is the left side of the render target, x = 1 is the right, y = -1 is the bottom, and y = 1 is the top. This is different from UV coordinates, where U = 0 is the left, U = 1 is the right, V = 0 is the top, and V = 1 is the bottom. So the first step is to multiply by float2(0.5, -0.5). This moves XY to the range [-0.5, 0.5] and also flips the Y coordinate so that it matches the way V works. Then by adding 0.5, you move it to the range [0, 1]. At this point you're now ready to use XY as UV coordinates for sampling the shadow map.
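As a quick sanity check of that remapping, here is a tiny C helper (a sketch, not code from the thread; `ndc_to_uv` is a made-up name) that maps post-divide NDC xy to UV and confirms the corners land where expected:

```c
#include <assert.h>

/* Map post-divide NDC xy ([-1,1], +y up) to texture UV ([0,1], +v down). */
static void ndc_to_uv(float x, float y, float *u, float *v) {
    *u = x *  0.5f + 0.5f;
    *v = y * -0.5f + 0.5f;   /* flip Y so v grows downward */
}
```

NDC (-1, 1) (top-left of the render target) maps to UV (0, 0), NDC (1, -1) (bottom-right) maps to UV (1, 1), and the center maps to (0.5, 0.5).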
