
# How does shadow mapping work?


6 replies to this topic

### #1sobeit  Members

Posted 21 April 2013 - 02:25 AM

hi all,

I'm trying to implement shadow mapping, but I can't understand the exact theory behind it. My problem lies in the second pass, where I need to compare the depth stored in the shadow map with the current pixel's depth.

Suppose that in the second render pass I have the depth of one pixel. Which value in the shadow map should I compare it to, and how do I get that value?

I read some tutorials online that say I need to calculate the distance between the light source and the pixel in the second pass, but I don't know how.

Can someone elaborate on this part of the algorithm for me? Thanks in advance.

### #2tool_2046  Members

Posted 21 April 2013 - 04:04 PM

Sure, I'll give it a shot. The idea is that you need to be comparing depth values in the same space. When you generated your shadow map, you used a certain model, view, and projection matrix for a piece of geometry. When you render the scene from the camera's point of view, you have a model, view, and projection matrix as well, but they produce values in a different space.

So when you render from the camera's point of view (the second pass), you must also transform each point by the light's model, view, and projection matrices. After the perspective divide, this gives you a value in the range [-1, 1] for X, Y, and Z. Texture coordinates are in the range [0, 1], though, so you must scale the value by 0.5 and add 0.5 (depending on your API's texture-coordinate convention, you may also need to flip the Y coordinate). After doing this you'll have texture coordinates to look up the shadow map and see what depth value the light "saw".

You also have the pixel you transformed into that same space, so you know its depth value. If the depth of the pixel you are drawing is less than or equal to the depth stored in the shadow map, the pixel you are drawing is at least as close to the light as what the light saw. In other words, it's not occluded and thus should not be in shadow.
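The transform-and-compare step above can be sketched in Python. This is a minimal software-renderer sketch, assuming NumPy, a row-major 2D depth array, column-vector matrices, and GL-style [-1, 1] NDC depth; the function and parameter names are just illustrative:

```python
import numpy as np

def shadow_factor(world_pos, light_view_proj, shadow_map, bias=0.005):
    """Return 1.0 if world_pos is lit, 0.0 if it is in shadow."""
    # Transform into the light's clip space.
    p = light_view_proj @ np.append(world_pos, 1.0)
    # Perspective divide: clip space -> NDC, each axis in [-1, 1].
    p = p[:3] / p[3]
    # Scale and bias: NDC xy -> [0, 1] texture coordinates.
    uv = p[:2] * 0.5 + 0.5
    # NDC z -> [0, 1] depth, matching what was stored in the first pass.
    depth = p[2] * 0.5 + 0.5
    # Outside the shadow map's frustum: treat as lit.
    if not (0.0 <= uv[0] <= 1.0 and 0.0 <= uv[1] <= 1.0):
        return 1.0
    # Nearest-neighbor lookup of the depth the light "saw".
    h, w = shadow_map.shape
    x = min(int(uv[0] * w), w - 1)
    y = min(int(uv[1] * h), h - 1)
    stored = shadow_map[y, x]
    # The small bias avoids self-shadowing ("shadow acne").
    return 1.0 if depth - bias <= stored else 0.0
```

With an identity light matrix, a point at the origin maps to texcoord (0.5, 0.5) and depth 0.5, so a shadow map full of 1.0s reports it lit, and a map full of 0.0s reports it shadowed.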

Edited by tool_2046, 21 April 2013 - 04:08 PM.

### #3sobeit  Members

Posted 21 April 2013 - 06:45 PM

> Sure, I'll give it a shot. The idea is you need to be comparing depth values in the same space. When you generated your shadow map, you used a certain model, view, and projection matrix for a piece of geometry. When you render the scene from the camera's point of view, you have a model, view and projection matrix as well, but this results in a value that's in a different space. When you render from the point of view of the camera (the second pass), you must also transform by the lights model, view and projection matrix. [...] If the depth value you have for the pixel you are drawing is less than the depth value in the shadow map, that means the pixel you are drawing is closer to the light than what the light saw. In other words, it's not occluded and thus should not be in shadow.

Thanks very much for your help here, but I still have some confusion. So far my problem is in the second pass, where I transform the object's vertices from both the view of the camera and the view of the light. But I don't know how to interpolate the light-view vertices for every fragment (I'm using my own software graphics pipeline).

### #4tool_2046  Members

Posted 21 April 2013 - 07:50 PM

### #5phil_t  Members

Posted 21 April 2013 - 11:50 PM

Yeah, the most straightforward way to do this is to pass your world position (which of course you have access to in the vertex shader) to the pixel shader via a TEXCOORD semantic.

### #6sobeit  Members

Posted 22 April 2013 - 12:50 AM

> Yeah, the most straightforward way to do this is to pass your world position (which of course you have access to in the vertex shader) to the pixel shader via a TEXCOORD semantic.

But the world position in the vertex shader is per-vertex. How do I know the world position for every fragment (or pixel)?

And what space are the xy axes of the shadow map in? I think it's window space, where the coordinates are all integers, but the view * projection matrix only takes us to clip space, right?

Thanks.

Edited by sobeit, 22 April 2013 - 12:56 AM.

### #7phil_t  Members

Posted 22 April 2013 - 04:52 AM

> but the world position in vertex shader is vertex-based, how do I know what's the world position for every fragment(or pixel)?

Just like I described: you pass it out of the vertex shader, and then you get the interpolated value in the pixel shader.
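In a custom software pipeline, the "interpolated value in the pixel shader" corresponds to barycentric interpolation in the rasterizer. Here is a minimal sketch of that idea, assuming NumPy; the helper names are hypothetical, and for correctness under perspective projection each vertex attribute is weighted by 1/w and the result renormalized:

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates (u, v, w) of 2D point p in triangle (a, b, c)."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return 1.0 - v - w, v, w

def interpolate_attribute(p, screen_pts, inv_w, attrs):
    """Perspective-correct interpolation of a per-vertex attribute at pixel p.

    screen_pts: three screen-space xy positions of the triangle's vertices;
    inv_w:      1 / clip-space w for each vertex;
    attrs:      the per-vertex attribute to interpolate (e.g. world position).
    """
    u, v, w = barycentric(p, *screen_pts)
    # Weight each vertex's contribution by 1/w (attribute divided by w)...
    weights = np.array([u, v, w]) * inv_w
    # ...then renormalize to undo the division.
    return (weights @ attrs) / weights.sum()
```

This is the same interpolation the hardware does for any value passed from the vertex to the pixel shader, so feeding it each vertex's world position (or light-space position) gives the per-pixel value needed for the shadow-map lookup.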

Edited by phil_t, 22 April 2013 - 04:53 AM.
